
Enable Server Side Apply for Client Patch Calls #779

Open
jonathan-innis opened this issue Nov 11, 2023 · 11 comments
Assignees
Labels
kind/cleanup: Categorizes issue or PR as related to cleaning up code, process, or technical debt.
operational-excellence
triage/accepted: Indicates an issue or PR is ready to be actively worked on.
v1.x: Issues prioritized for post-1.0

Comments

@jonathan-innis
Member

jonathan-innis commented Nov 11, 2023

Description

What problem are you trying to solve?

Karpenter currently doesn't use Server Side Apply when making patch calls to the apiserver. This doesn't cause many issues today, since there are few writers to the objects that we own (the Node being a notable exception to this rule).

We should use Server Side Apply so that we can declare a set of managed fields that we own, ensuring that only those fields are updated when we make a Patch call to the apiserver and that no other parts of the object are modified.

Server Side Apply Docs: https://kubernetes.io/docs/reference/using-api/server-side-apply/

How important is this feature to you?

It will prevent us from fighting with, and overwriting, other controllers that may also be trying to write different fields on the same object.
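To sketch why this helps (a toy model, not the apiserver's actual managed-fields implementation): server-side apply records which manager owns each field, and a second manager applying a conflicting field gets a conflict error unless it explicitly forces ownership.

```go
package main

import "fmt"

// ownership maps a field path to the manager that last applied it.
type ownership map[string]string

// apply mimics server-side apply conflict detection: a manager may only
// take over a field owned by another manager when force is true, in which
// case ownership transfers. This is a simplification of the real
// apiserver logic.
func apply(owners ownership, manager string, fields []string, force bool) error {
	for _, f := range fields {
		if cur, ok := owners[f]; ok && cur != manager && !force {
			return fmt.Errorf("conflict: field %q is owned by %q", f, cur)
		}
	}
	for _, f := range fields {
		owners[f] = manager
	}
	return nil
}

func main() {
	owners := ownership{}
	// Field path is illustrative only.
	_ = apply(owners, "karpenter", []string{"metadata.labels.capacity-type"}, false)
	// Another controller applying the same field without force is rejected.
	err := apply(owners, "other-controller", []string{"metadata.labels.capacity-type"}, false)
	fmt.Println(err)
	// With force, ownership transfers (analogous to forcing ownership in a
	// real apply request).
	_ = apply(owners, "other-controller", []string{"metadata.labels.capacity-type"}, true)
	fmt.Println(owners["metadata.labels.capacity-type"])
}
```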

Related Issues

#660

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@jonathan-innis jonathan-innis added operational-excellence kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. v1.x Issues prioritized for post-1.0 labels Nov 11, 2023
@sadath-12
Contributor

/assign

@sadath-12
Contributor

sadath-12 commented Nov 30, 2023

Hi @jonathan-innis, I tried looking into this. I believe the client treats our CRDs as unstructured objects, so we would need to make a few more changes: converting the CRD resources into unstructured form, pulling in the dynamic client, and then applying them. And since we have multiple controllers, we need to be very cautious about field ownership, which might be somewhat error-prone. One example I've found is https://github.com/pkbhowmick/k8s-server-side-apply/blob/master/main.go
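For the conversion step mentioned above, a rough sketch using only the standard library (apimachinery's `runtime.DefaultUnstructuredConverter` does this properly; the `NodeClaim` struct and API group here are illustrative stand-ins, not Karpenter's real types): a typed object can be turned into the generic map shape that unstructured objects and the dynamic client work with via a JSON round-trip.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeClaim is a stand-in for one of Karpenter's CRD types; the real
// types live in the Karpenter API packages.
type NodeClaim struct {
	APIVersion string            `json:"apiVersion"`
	Kind       string            `json:"kind"`
	Metadata   map[string]string `json:"metadata"`
}

// toUnstructured converts a typed object into the generic map form that
// unstructured.Unstructured wraps, via a JSON round-trip. Apimachinery's
// runtime.DefaultUnstructuredConverter does this more efficiently.
func toUnstructured(obj any) (map[string]any, error) {
	raw, err := json.Marshal(obj)
	if err != nil {
		return nil, err
	}
	var out map[string]any
	if err := json.Unmarshal(raw, &out); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	nc := NodeClaim{
		APIVersion: "karpenter.sh/v1beta1",
		Kind:       "NodeClaim",
		Metadata:   map[string]string{"name": "example"},
	}
	u, err := toUnstructured(nc)
	if err != nil {
		panic(err)
	}
	fmt.Println(u["kind"], u["metadata"].(map[string]any)["name"])
}
```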

@jonathan-innis
Member Author

I'd consider looking deeper into how controller-runtime handles this directly, as well as into how CAPI (Cluster API) does SSA. I believe both should give us a path to do the same in Karpenter.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 4, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 4, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Jun 3, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@jonathan-innis
Member Author

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Jun 3, 2024
@k8s-ci-robot
Contributor

@jonathan-innis: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jun 3, 2024
@jonathan-innis
Member Author

/triage accepted
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 3, 2024
@jonathan-innis
Member Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 3, 2024