In this scenario I would first check the pod logs with `kubectl logs --previous <pod>` to understand what error caused the crash, and inspect the new version's readiness and liveness probe configuration against the last known-good version. Because restoring service takes priority over root-cause analysis, I would initiate a rollback with `kubectl rollout undo deployment/<service>`, or switch traffic to a stable ReplicaSet or version if one is available. Communication is critical: I'd notify stakeholders of the issue, the rollback action, the expected downtime and the recovery timeline. Next I would analyse the root cause, which might be a changed dependency, a missing environment variable, or a database schema mismatch. I'd capture the relevant metrics and alerts, reproduce and fix the issue in a development environment, test it, then re-deploy with close monitoring. After resolution I'd coordinate a post-mortem with stakeholders summarising what happened, the impact, the fix and the preventive actions. This demonstrates production-grade discipline.
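
A minimal triage-and-rollback sketch of the commands described above, assuming the failing workload is a Deployment named `<service>`; `<pod>` and `<namespace>` are placeholders to fill in for your environment:

```sh
# Read the logs of the crashed (previous) container to find the error
kubectl logs --previous <pod> -n <namespace>

# Check probe configuration, restart counts and recent events for the failing pod
kubectl describe pod <pod> -n <namespace>

# Review the rollout history before undoing, so the target revision is clear
kubectl rollout history deployment/<service> -n <namespace>

# Roll back to the previous revision and watch the rollout converge
kubectl rollout undo deployment/<service> -n <namespace>
kubectl rollout status deployment/<service> -n <namespace>
```

Running `rollout status` after the undo confirms the stable revision is healthy before moving on to root-cause analysis and the fix-forward deployment.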