Massive scaling and anomalous behaviors with the new forceful disruption method #1928
Labels: kind/bug, needs-triage
Description
Observed Behavior:
The v1 releases bring a new approach to the node lifecycle, with graceful and forceful methods for draining and expiration.
Regarding node expiration, according to the NodePool schema, the `terminationGracePeriod` property acts as a feature flag that enables a maximum threshold for recycling a node. If it is left null, the expiration controller waits indefinitely, respecting do-not-disrupt annotations, PDBs, and so on. (Note that I do not remember reading this in the documentation.)
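For reference, a minimal NodePool sketch of the configuration being discussed (values and the nodeClassRef are illustrative placeholders, not our actual settings):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: example
spec:
  template:
    spec:
      # Nodes become expired after this TTL and the expiration controller starts recycling them.
      expireAfter: 720h
      # Forceful method enabled: maximum time pods get to drain before the node is
      # forcefully terminated. Leaving this field unset (null) disables the forceful
      # path and the controller waits indefinitely, respecting PDBs and do-not-disrupt.
      terminationGracePeriod: 3h
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:            # illustrative, adjust to your provider
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 5m
```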
Having said that, two erratic behaviors can be observed:
1. **Forceful method enabled.** The `terminationGracePeriod` property is set, defining the maximum grace period for draining the node's pods. When NodeClaims begin to expire (after the TTL specified in `expireAfter`), they are annotated with `karpenter.sh/nodeclaim-termination-timestamp`, which records the latest datetime for decommissioning, and the grace-period countdown starts. The affected node's workloads, regardless of PDBs and the `karpenter.sh/do-not-disrupt` annotation, are identified by the provisioner controller as reschedulable pods, so the scheduler decides, based on available capacity, whether to create a replacement NodeClaim. We have use cases with extended grace periods and heavily sized workloads, yet the scheduler does not take the potential maximum grace period into account and provisions replacements that may go unused until the application terminates. On top of that, pods are nominated onto the newly provisioned NodeClaims, blocking other possible disruptions; the cluster state drifts out of sync with Karpenter's in-memory snapshot; and the provisioner enqueues a large number of reconciliations: the perfect ingredients for a disaster. It does not make sense to flip between provisioning and consolidation with resources that may never be used, which only leads to unnecessary cost. For example, a job with a 5-hour TTL that could consume its entire grace period already has an unused replacement at t0. Aggressive consolidation budgets tend to make the situation worse, leading to more chaos.
2. **Forceful method disabled.** The `terminationGracePeriod` property is left undefined, which restores behavior similar to releases prior to v1, where PDBs and `karpenter.sh/do-not-disrupt` annotations are respected and the controller waits indefinitely for the workloads on expired NodeClaims to be drained. There are scenarios where this is desirable to keep disruption to a minimum. In this case an anomalous behavior occurs similar to the one described above, with the difference that pods which cannot be drained are still identified as reschedulable, leading to the provisioning of new NodeClaims that will never be used. The same flipping behavior persists, along with the risk of massive, uncontrolled scaling.
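To make the workload side concrete, here is a hypothetical manifest for a pod that cannot be voluntarily drained (do-not-disrupt plus a fully blocking PDB) but is nevertheless counted as reschedulable once its NodeClaim expires (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: long-running-job          # hypothetical workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: long-running-job
  template:
    metadata:
      labels:
        app: long-running-job
      annotations:
        # Asks Karpenter not to voluntarily disrupt these pods.
        karpenter.sh/do-not-disrupt: "true"
    spec:
      containers:
        - name: worker
          image: busybox          # placeholder image
          command: ["sleep", "18000"]   # e.g. a ~5h job that may use the whole grace period
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: long-running-job
spec:
  maxUnavailable: 0               # blocks voluntary eviction entirely
  selector:
    matchLabels:
      app: long-running-job
```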
In addition to everything already mentioned, we must also consider the entropy generated by other Kubernetes controllers: HPA scaling, new deployments, and CronJobs can reset the `consolidateAfter` window and suspend potential disruptions, and incorrect sizing of the Karpenter pods makes things worse. On that last point, CPU starvation can cause contention between goroutines from different controllers and excessive context switching, which degrades performance; certain controllers may end up consuming more CPU time, widening the gap between actual and expected behavior. I am not sure whether this is addressed in the documentation, but it would be valuable to outline best practices for users who are unfamiliar with the runtime and code behavior, to avoid poor performance or discrepancies in the actions performed by the controllers.
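On the sizing point, an illustrative sketch of pinning the controller's resources through the Helm chart's controller.resources values (the numbers are placeholders and depend on cluster size; this is not a recommendation from the project):

```yaml
# values.yaml for the karpenter Helm chart (illustrative numbers only)
controller:
  resources:
    requests:
      cpu: "2"
      memory: 2Gi
    limits:
      cpu: "2"       # matching CPU requests and limits reduces throttling-induced jitter
      memory: 2Gi
```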
Example events observed:
Expected Behavior:
**Forceful method enabled (expiration controller):** The provisioner controller, and in particular the scheduler, should consider the maximum time the workloads may take to drain before creating a replacement NodeClaim, and should also account for the average time required to provision a new node. For example, if a workload can consume its full 3-hour grace period (i.e. the configured `terminationGracePeriod`), the scheduler should create the replacement NodeClaim only far enough in advance, given the average provisioning time, for it to be ready before the forced decommission (see the timing sketch below). This ensures the replacement capacity is available while balancing cost and reliability.
**Forceful method disabled (expiration controller):** The controller will respect workloads with PDBs and do-not-disrupt annotations on expired NodeClaims. The provisioner (scheduler) should not identify these pods as reschedulable, preventing the creation of new NodeClaims that will never be used and avoiding unnecessary cost.
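A rough way to express the timing I would expect the scheduler to respect in the forceful case, assuming an average provisioning time can be estimated (this is my own formulation, not something from the docs):

$$
t_{\text{replacement}} \approx t_{\text{expiration}} + \mathtt{terminationGracePeriod} - \bar{t}_{\text{provision}}
$$

For example, with a 3 h grace period and an average provisioning time of around 10 minutes, the replacement NodeClaim would be created roughly 2 h 50 min after expiration rather than immediately at t0.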
I have submitted an illustrative PR demonstrating the expected behavior. The code's current placement is probably not ideal and should likely move into the expiration or lifecycle controllers. I compiled those modifications and tested them in a development environment; they appear stable, although I am not sure whether they affect any other functionality.
Let me know if there is anything else I can do to help, as this issue is having a significant impact on costs and preventing access to features in EKS 1.31 that are unsupported by earlier v1 releases.
Reproduction Steps (Please include YAML):
Forceful method enabled:
Forceful method disabled:
Versions: