Note that if you just want to be able to drain requests when the Pod is deleted, you do not necessarily need a readiness probe; on deletion, the Pod automatically puts itself into an unready state regardless of whether the readiness probe exists. The Pod remains in the unready state while it waits for the Containers in the Pod to stop.
@florianmutter Did you get any feedback on this issue?
It seems to me that it all depends on whether the service can still serve requests during the shutdown. The Pod may automatically put itself into an unready state when the Pod itself is deleted, but a shutdown can be initiated not only by Kubernetes but also by the application itself - for example, receiving a SIGTERM for some other reason, or even a self-initiated or manual shutdown.
In any case, if the service is able to serve readiness requests but not its core service endpoints for some reason, then returning 503 seems reasonable, since the service is indeed unavailable. But I'd like to know how this project's maintainers view it. Did you get any reply on that?
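For illustration, a minimal sketch of what I mean, assuming a plain Go net/http service (the `/readyz` path and the `shuttingDown` flag are just placeholders, not anything this project prescribes):

```go
package main

import (
	"net/http"
	"os"
	"os/signal"
	"sync/atomic"
	"syscall"
)

func main() {
	// Flag flipped once shutdown begins; readiness checks that
	// arrive after that point see 503.
	var shuttingDown atomic.Bool

	http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if shuttingDown.Load() {
			// Tell load balancers to stop routing traffic here.
			http.Error(w, "shutting down", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	// Flip the flag on SIGTERM, whether it comes from the kubelet
	// or from anywhere else - the point made above.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		<-sigs
		shuttingDown.Store(true)
	}()

	http.ListenAndServe(":8080", nil)
}
```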
According to https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-liveness-or-readiness-probes it is not necessary to send 503 on shutdown.
See also https://freecontent.manning.com/handling-client-requests-properly-with-kubernetes/
I think it should be avoided, so that the Pods' event logs stay meaningful. Otherwise every shutdown generates logs of failed readiness requests.
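For example, a rough sketch (assuming a plain Go net/http server; all names here are illustrative) of draining in-flight requests on SIGTERM without ever failing the readiness endpoint, which is the behavior the linked docs describe:

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	// Readiness stays 200 the whole time; on deletion, Kubernetes
	// removes the Pod from Service endpoints regardless of this probe.
	mux.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Wait for SIGTERM, then drain in-flight requests and exit,
	// so no failed readiness probes show up in the Pod's events.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)
	<-sigs

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("shutdown: %v", err)
	}
}
```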