Handle Kubernetes API server failover #3008
I just experienced the same issue. The dashboard was trying to synchronize in a fast loop (thousands of log entries in one second), consuming a lot of CPU.
This happens even with a single-master cluster. Steps to reproduce:
The logging flood stops when the dashboard pod is deleted/recreated.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/lifecycle frozen
Same problem, running Azure AKS. After Microsoft occasionally restarts the managed API server, the dashboard starts to log around 450 lines per 5 minutes. Any update on actually getting the reconnect solved?
@zenlil it will be fixed in v2. Right now a workaround is to delete the Dashboard pod after an API server restart.
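For anyone hitting this, a minimal sketch of that workaround, assuming the upstream install manifests (kube-system namespace, k8s-app=kubernetes-dashboard label); adjust to your deployment:

```sh
# Delete the Dashboard pod; its Deployment recreates it and the fresh pod
# reconnects to the API server. Namespace and label are assumptions based
# on the upstream manifests.
kubectl -n kube-system delete pod -l k8s-app=kubernetes-dashboard
```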
Still happening in 1.10.1? I was flooded with gigs of logs in no time. Deleting the dashboard pod did not solve it.
Can you upload the beginning of the log? The first 30 minutes, let's say.
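In case it helps, a hedged one-liner for capturing the full Dashboard log stream (label selector and namespace again assume the upstream manifests; --tail=-1 is used because kubectl only shows the last few lines per pod when a label selector is given):

```sh
# Dump the complete, timestamped Dashboard logs to a file for attaching here.
kubectl -n kube-system logs -l k8s-app=kubernetes-dashboard --tail=-1 --timestamps > dashboard.log
```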
These are the initial log entries that we saw when we encountered the issue:
The last 3 errors repeat every 2 seconds, causing a flood of log entries.
This is no longer the case with v2, as it forces a restart of the pod after a few retries.
Environment
Steps to reproduce
Observed result
Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
(see Extremely dangerous logging #2723 (comment))
Expected result
(/api/v1/settings/global might do, but ideally a designated health URI)
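Until a designated health URI exists, one hedged stopgap is to let the kubelet restart the Dashboard when it stops serving, by adding an HTTP liveness probe against an existing endpoint. The snippet below is only a sketch; the deployment name, namespace, container index, port, and scheme are assumptions that depend on how your Dashboard is deployed (older releases serve HTTP on 9090, newer ones HTTPS on 8443):

```sh
# Sketch: patch a liveness probe onto the Dashboard container so the kubelet
# restarts the pod when the UI stops responding. All names and ports here are
# assumptions; adjust to your install.
kubectl -n kube-system patch deployment kubernetes-dashboard --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/livenessProbe",
   "value": {"httpGet": {"path": "/api/v1/settings/global", "port": 9090, "scheme": "HTTP"},
             "initialDelaySeconds": 30, "periodSeconds": 30, "failureThreshold": 3}}
]'
```

Note that a probe against a settings endpoint only reflects whether the UI process answers HTTP; it would not by itself detect the reconnect loop, which is why a designated health URI that checks API connectivity is the better fix.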