High memory when informer error occurs. #1377
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I have a custom controller that uses an informer. The controller lists and watches more than 2000 nodes and 50000 pods; api, apimachinery, and client-go are all at v0.24.0.
Code sample:
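(The sample itself did not survive in this copy of the issue. What follows is a hypothetical sketch of an informer setup matching the description above, assuming a shared informer factory over nodes and pods with client-go v0.24.0; all handler bodies and names are illustrative, not the reporter's actual code.)

```go
package main

import (
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One shared factory for all informers; resync 0 avoids periodically
	// re-delivering 2000 nodes / 50000 pods on top of watch traffic.
	factory := informers.NewSharedInformerFactory(client, 0)

	nodeInformer := factory.Core().V1().Nodes().Informer()
	podInformer := factory.Core().V1().Pods().Informer()

	nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { /* enqueue node */ },
		UpdateFunc: func(oldObj, newObj interface{}) { /* enqueue node */ },
		DeleteFunc: func(obj interface{}) { /* enqueue node */ },
	})

	stopCh := make(chan struct{})
	factory.Start(stopCh)

	// Block until the initial LIST of each resource is in the local cache.
	cache.WaitForCacheSync(stopCh, nodeInformer.HasSynced, podInformer.HasSynced)

	select {} // run until the process is killed; real code would wire signals
}
```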
Normally the controller needs only about 800MB of memory:
But when an error occurs, memory doubles almost instantly, then decreases slightly, yet stays higher than it was before the error.
Memory used after the error occurs:

As I understand it, when a network anomaly occurs, the informer re-pulls the full set of the above resources from kube-apiserver. Because the old and new resource objects exist at the same time, memory surges; after a while GC reclaims the old objects and memory falls back. But I don't understand why memory stays higher than it was before the error occurred.
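(To make that relist moment visible, client-go since v0.19 lets you hook watch failures on a shared informer. The sketch below is not from the issue; `logRelists` and the package name are hypothetical, and the handler simply logs before delegating to client-go's default behavior.)

```go
package controller // hypothetical package; call logRelists before factory.Start

import (
	"k8s.io/client-go/tools/cache"
	"k8s.io/klog/v2"
)

// logRelists installs a watch-error handler so every watch failure -- the
// event that triggers the full re-LIST described above -- shows up in the
// logs. SetWatchErrorHandler returns an error if the informer has already
// started, so this must run before the factory is started.
func logRelists(informer cache.SharedIndexInformer) error {
	return informer.SetWatchErrorHandler(func(r *cache.Reflector, err error) {
		// On its next attempt the reflector re-lists the whole resource,
		// which is the window where old and new object sets coexist.
		klog.Warningf("watch failed, full relist will follow: %v", err)
		cache.DefaultWatchErrorHandler(r, err)
	})
}
```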
Is this a bug? If not, how can I fix it, and what should I do?