feat: Consolidation tolerance #795
Comments
We've discussed the idea of an "improvement threshold" https://github.com/aws/karpenter-core/pull/768/files#diff-e6f78172a1d86c735a03ec76853021c670f4203f387c45b601670eca0e2ae1a4R26, which may model this quite nicely. Thoughts?
That does seem like what I'm looking for! The design doc appears primarily focused on a spot issue I'm not too familiar with, but 👍
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues; please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale, since it's still not totally clear what direction the project is going in with regard to this problem.
@stevenpitts What is your current strategy to mitigate this problem? Have you tried creating a custom PriorityClass with a higher priority for critical workloads? This might help in a scenario where karpenter decides to delete a few nodes. I haven't used karpenter myself, so this might be a dumb question.
@sumeet-baghel Hello stranger!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues; please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Anyone interested in picking up "PriceImprovementThreshold"?
@ellistarn I think that from the RFC it's unclear what the maintainers think the solution should look like. Is there a more specific doc I should read about it? Or are you still looking for feedback/opinions on the RFC?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues; please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues; please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Description
What problem are you trying to solve?
I am trying to reduce the frequency of consolidation on clusters that have frequent but insignificant resource request changes.
An active cluster can cause frequent consolidation events.
For example, if a deployment with an HPA scales up and down by one replica every 10 minutes, it's very likely that a new node will be spun up and then spun down every 10 minutes so that cost stays optimized. This could even result in a packed node being deleted, if Karpenter decides that a different node type or number of nodes would be more cost efficient.
That can be really disruptive. PDBs help, but in order for them to guard against users experiencing slowness, you'd need to set a PDB with practically 1% maxUnavailable on every workload.
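To illustrate how extreme that is, here's a minimal sketch of the kind of blanket PDB that would be needed; the workload name and labels are placeholders, not anything from this issue:

```yaml
# Hypothetical blanket PDB applied to every workload; names are placeholders.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  maxUnavailable: "1%"   # tolerates almost no voluntary disruption
  selector:
    matchLabels:
      app: my-app
```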
Once a `consolidationPolicy` of `WhenUnderutilized` works alongside `consolidateAfter`, that will help out greatly, but it would still likely result in consolidation happening every (for example) 2 hours, even with very low net resource changes.

I think a way of configuring "consolidation tolerance" would help here. One implementation could be a way of specifying cost tolerance.
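For reference, a rough sketch of what that `WhenUnderutilized` + `consolidateAfter` combination might look like in a NodePool disruption block (assuming the v1beta1 field names; this pairing was not yet supported when this issue was filed, and the rest of the NodePool spec is omitted):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    consolidationPolicy: WhenUnderutilized
    consolidateAfter: 2h   # anticipated pairing; not yet honored with WhenUnderutilized
  # template, requirements, nodeClassRef, etc. omitted for brevity
```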
In pseudo-configuration, there could be a `consolidationCostTolerance` field that I might set to "$50 per hour".

If an HPA decides a deploy needs a new replica and there's no space, Karpenter would spin up a new combination of nodes that has enough space for all desired pods but is still cost effective. Later on, the HPA might decrement desired replicas. Karpenter would normally want to consolidate at that point, since there's now a more cost effective combination of nodes for the requested resources.
The idea is that consolidation would not happen unless `currentCostPerHour - consolidatedCostPerHour` is greater than $50. This way, consolidation would not trigger until there is a significant amount of unused resources on the nodes.
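Purely as a sketch of the proposal, the hypothetical `consolidationCostTolerance` field (not part of the Karpenter API; the name, placement, and units here are guesses) might sit alongside the other disruption settings like this:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    consolidationPolicy: WhenUnderutilized
    # Hypothetical field from this proposal: only consolidate when
    # currentCostPerHour - consolidatedCostPerHour exceeds this value.
    consolidationCostTolerance: "50"   # USD per hour
  # remaining NodePool fields omitted for brevity
```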
How important is this feature to you?
This feature is fairly important. Even when all the features described in disruption controls become stable, existing solutions only reduce the frequency of consolidation, slow down consolidation, or block consolidation during certain hours.
We could set a PDB with 1% maxUnavailable on all deploys, but that feels like a pretty extreme demand.