Massive scaling and anomalous behaviors with the new forceful disruption method #1928

Open
jorgeperezc opened this issue Jan 24, 2025 · 1 comment
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments


jorgeperezc commented Jan 24, 2025

Description

Observed Behavior:
The v1 releases bring, as a gift, a new approach to the node lifecycle, with graceful and forceful methods for draining and expiration.

Regarding node expiration, according to the NodePool schema, the terminationGracePeriod property acts as a feature flag that enables a maximum threshold for recycling a node. If left as null, the expiration controller will wait indefinitely, respecting do-not-disrupt annotations, PDBs, and so on. (Note that I do not remember reading this in the documentation.)

  terminationGracePeriod	<string>
    TerminationGracePeriod is the maximum duration the controller will wait
    before forcefully deleting the pods on a node, measured from when deletion
    is first initiated.
    
    [...]

    The feature can also be used to allow maximum time limits for long-running
    jobs which can delay node termination with preStop hooks.
    If left undefined, the controller will wait indefinitely for pods to be
    drained.
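
For reference, a minimal sketch of how the two modes are selected on a v1 NodePool (the name and durations below are illustrative, and required fields such as requirements and nodeClassRef are omitted):

  apiVersion: karpenter.sh/v1
  kind: NodePool
  metadata:
    name: default                 # illustrative name
  spec:
    template:
      spec:
        expireAfter: 720h         # illustrative TTL after which NodeClaims expire
        # Forceful method enabled: pods are force-deleted at most 1h after deletion starts.
        terminationGracePeriod: 1h
        # Forceful method disabled: omit terminationGracePeriod (or leave it null) and
        # the controller waits indefinitely for pods to be drained.
        # requirements and nodeClassRef omitted for brevity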

Having said that, two erratic behaviors can be observed:

  • Forceful method enabled. The terminationGracePeriod property is set, defining the maximum grace period for draining the node's pods. When NodeClaims begin to expire (after the TTL specified in expireAfter), they are annotated with karpenter.sh/nodeclaim-termination-timestamp, which marks the latest datetime for decommissioning, and the grace period countdown starts (see the illustrative NodeClaim metadata after this list). The workloads on the affected node, regardless of PDBs and the do-not-disrupt annotation, are identified by the provisioner controller as reschedulable pods, so the scheduler decides, based on available capacity, whether to create a replacement NodeClaim. We have use cases with extended grace periods and significantly sized workloads, but the scheduler does not take the potential maximum grace period into account and provisions replacements that may go unused until the application terminates. On top of that, pods nominated to schedule on the newly provisioned NodeClaims block further disruptions, the cluster state falls out of sync with Karpenter's in-memory snapshot, and the provisioner enqueues a large number of reconciliations: the perfect ingredients for a disaster. Flipping between provisioning and consolidation with resources that may never be used makes no sense and leads to unnecessary costs; for example, a job with a TTL of 5 hours may consume its entire grace period, yet an unused replacement exists from t0. Aggressive consolidation budgets tend to worsen the situation, leading to more chaos.

  • Forceful method disabled. The terminationGracePeriod property is left undefined, which yields behavior similar to releases prior to v1: PDBs and do-not-disrupt annotations are respected, and the controller waits indefinitely for the workloads on expired NodeClaims to be drained. There are scenarios where this behavior is desired, to minimize disruption. In this case an anomalous behavior occurs, similar to the one above, with the difference that the pods that cannot be drained are still identified as reschedulable, leading to the provisioning of new NodeClaims that will never be used. The same flipping behavior persists, along with the risk of massive, uncontrolled scaling.
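
For illustration, this is roughly what an expired NodeClaim looks like once the forceful countdown has started; the name is taken from the events below and the timestamp value is made up:

  apiVersion: karpenter.sh/v1
  kind: NodeClaim
  metadata:
    name: test-5tu1t
    annotations:
      # Set when expiration begins; marks the latest datetime for decommissioning.
      karpenter.sh/nodeclaim-termination-timestamp: "2025-01-24T15:04:05Z"   # illustrative value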

In addition to everything already mentioned, we must also consider the entropy generated by other Kubernetes controllers: HPA scaling, new deployments, and CronJobs can reset the consolidateAfter window, suspending potential disruptions, and the situation is compounded by incorrect sizing of the Karpenter pods. On that last point, contention between goroutines from different controllers can lead to excessive context switching, which degrades performance; certain controllers may end up consuming more CPU time, widening the gap between observed and expected behavior. I am not sure whether this is addressed in the documentation, but it would be valuable to outline best practices for users who are unaware of the runtime and code behavior, to avoid poor performance or discrepancies in the actions performed by the controllers.
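
As a rough example of the sizing point (the controller.resources key and the values are assumptions about the Karpenter Helm chart and our environment, not something prescribed here), pinning CPU requests and limits to whole cores reduces the chance of the controllers' goroutines being throttled and context-switched excessively:

  # values.yaml sketch for the Karpenter chart
  controller:
    resources:
      requests:
        cpu: "1"
        memory: 1Gi
      limits:
        cpu: "1"
        memory: 1Gi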

Example events observed:

Events:
  Type    Reason       Age       From       Message
  ----    ------       ----      ----       -------
  Normal   Nominated   38m     karpenter    Pod should schedule on: nodeclaim/test-5tu1t
  Normal   Nominated   27m     karpenter    Pod should schedule on: nodeclaim/test-qqkw6
  Normal   Nominated   16m     karpenter    Pod should schedule on: nodeclaim/test-nshgw
  Normal   Nominated   5m44s   karpenter    Pod should schedule on: nodeclaim/test-0tgjd

Events:
  Type    Reason           Age                  From       Message
  ----    ------           ----                 ----       -------
  Normal  DisruptionBlocked  14s (x4 over 4m17s)  karpenter  Cannot disrupt NodeClaim: state node is nominated for a pending pod

Expected Behavior:

  • Forceful method enabled (expiration controller): The provisioner controller, particularly the scheduler, should consider the maximum time it may take to drain workloads before creating a replacement NodeClaim, and should also account for the average time required to provision new nodes. For example, if a workload can consume its full 3-hour grace period (bounded by terminationGracePeriod), the scheduler should create the replacement NodeClaim only early enough, given the average provisioning time, for it to become ready shortly before the forced decommission. This ensures the replacement capacity is available while balancing both costs and reliability (see the illustrative timeline after this list).

  • Forceful method disabled (expiration controller): The controller keeps respecting PDBs and do-not-disrupt annotations on workloads running on expired NodeClaims. The provisioner (scheduler) should not identify these pods as reschedulable, preventing the generation of new NodeClaims that will never be used and avoiding unnecessary costs.
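
A rough timeline of the expected forceful-mode behavior (all durations are illustrative; 5m stands in for the average provisioning time):

  # t0           NodeClaim expires (expireAfter reached); termination deadline = t0 + 3h
  # t0 .. t0+3h  workload drains, protected by its grace period and PDBs
  # t0+2h55m     scheduler creates the replacement NodeClaim (3h minus ~5m provisioning time)
  # t0+3h        forced decommission; the replacement capacity is already available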

I have submitted an illustrative PR demonstrating the expected behavior. It's likely that the code's current placement is not ideal and that it should be moved to the expiration or lifecycle controllers. I compiled those modifications and tested them in a development environment; they appear stable, although I'm unsure whether they might impact any other functionality.

Let me know if there is anything else I can do to help, as this issue is having a significant impact on costs and preventing access to features in EKS 1.31 that are unsupported by earlier v1 releases.

Reproduction Steps (Please include YAML):

  • Forceful method enabled (illustrative manifests are sketched after this list):

    • Create a StatefulSet with the do-not-disrupt annotation set to true and a wide terminationGracePeriodSeconds window. Keep in mind that, although the process may catch SIGTERM, it should keep running rather than terminate immediately, to simulate the described behavior.
    • The terminationGracePeriod property of the NodePool should be equal to or greater than the terminationGracePeriodSeconds of the StatefulSet.
    • Define expireAfter in the NodePool to ensure the NodeClaims hosting the pods of your StatefulSet expire.
  • Forceful method disabled:

    • Create a StatefulSet with the do-not-disrupt annotation set to true.
    • Do not define a value for the terminationGracePeriod property of the NodePool, or explicitly set it to null.
    • Define expireAfter in the NodePool to ensure the NodeClaims hosting the pods of your StatefulSet expire.
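
Sketch of reproduction manifests (image, names, replica counts, and durations are illustrative; adjust them to your environment):

  apiVersion: karpenter.sh/v1
  kind: NodePool
  metadata:
    name: test                              # illustrative
  spec:
    template:
      spec:
        expireAfter: 30m                    # short TTL so NodeClaims expire quickly
        # Forceful method enabled; omit this line (or set it to null) for the disabled scenario.
        terminationGracePeriod: 2h
        # requirements and nodeClassRef omitted for brevity
  ---
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: sleepy                            # illustrative
  spec:
    serviceName: sleepy
    replicas: 2
    selector:
      matchLabels:
        app: sleepy
    template:
      metadata:
        labels:
          app: sleepy
        annotations:
          karpenter.sh/do-not-disrupt: "true"
      spec:
        terminationGracePeriodSeconds: 7200  # wide window, <= the NodePool terminationGracePeriod
        containers:
          - name: app
            image: busybox                   # illustrative; the process ignores SIGTERM so it keeps running until force-deleted
            command: ["sh", "-c", "trap '' TERM; while true; do sleep 3600; done"]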

Versions:

  • Karpenter v1.0.7
  • Karpenter Chart Version: 1.0.7
  • Karpenter CRDs Chart Version: 1.0.7
  • EKS v1.29

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@jorgeperezc jorgeperezc added the kind/bug Categorizes issue or PR as related to a bug. label Jan 24, 2025
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Karpenter contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jan 24, 2025