Canary ingress returns 503 (Service Unavailable) when no main pod is available #13086
Labels: kind/support, needs-priority, needs-triage
What happened:
We created an ingress and a canary ingress with different upstream services. Scaling the deployment behind the main ingress to 0 causes 503 (Service Unavailable) responses for canary requests.
What you expected to happen:
Requests are routed to the canary service without error.
NGINX Ingress controller version (exec into the pod and run /nginx-ingress-controller --version):
Release: v1.11.2
Build: 46e76e5
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
Kubernetes version (kubectl version):
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.1
Environment:
Cloud provider or hardware configuration: yandex cloud
OS (e.g. from /etc/os-release): Node: "Ubuntu 20.04.6 LTS", Container: "Alpine Linux v3.20"
Kernel (e.g. uname -a): 5.4.0-187-generic
Install tools:
How was the ingress-nginx-controller installed: helm
Current State of the controller: working fine
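(The exact Helm release name, namespace, and chart values used are not shown in this report; a generic install of the official chart looks roughly like the following.)

```bash
# Generic install of the official ingress-nginx chart; the release name and
# namespace here are placeholders, not necessarily what was used.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```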
How to reproduce this issue:
We have two deployments and services in the cluster: one is the main release, the other is the canary.
Create two ingresses, main:
and canary:
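(The original manifests were not captured in this report. Below is a minimal sketch of what such a pair typically looks like: the gRPC backend protocol, the header-based canary rule, the hostname, the port, and the service names are assumptions, chosen to match the my-app-main / my-app-canary naming used later in the report.)

```yaml
# Sketch only: hostname, port, service names, and the canary-by-header rule
# are assumptions, not the reporter's actual manifests.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-main
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-main
                port:
                  number: 50051
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-canary
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "x-canary"
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-canary
                port:
                  number: 50051
```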
Both services have endpoints:
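(The endpoint listing itself is not reproduced here; a check along these lines, with the service names assumed to match the deployments, confirms that both backends are populated.)

```bash
# Assumes the Services share the deployments' names; adjust if they differ.
kubectl get endpoints my-app-main my-app-canary
```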
Test with grpcurl; it returns the expected response:
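(The exact grpcurl invocation and its output were not captured; the call is roughly of this shape, with the host, port, method, and canary header as placeholders matching the header rule assumed above.)

```bash
# Placeholders throughout; the header makes the request match the assumed
# canary-by-header rule. Use -plaintext instead of -insecure if TLS is not
# terminated at the ingress.
grpcurl -insecure -H 'x-canary: always' \
  my-app.example.com:443 my.package.MyService/MyMethod
```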
The ingress-controller logs show that the upstream pod address matches the my-app-canary endpoint, so canary routing works.
Then scale my-app-main deployment to 0:
kubectl scale --replicas=0 deployment/my-app-main
The canary deployment remains untouched.
Repeat the grpcurl request; it now returns:
Anything else we need to know: