Kubernetes v1.33 Mid Cycle Sneak Peek Blog #50111
Conversation
Sorry for the delay with the write-up; we have put together the initial draft, which is now ready for review 🙇 We were working in a separate Google Doc, and it ended up full of edits and comments, which may make it more difficult to review. I am keeping this as a simple PR for the review, but if it would be beneficial to create a separate interactive doc, I can surely do that 👍 Ping @natalisucks @katcosgrove /hold
@rytswd nice piece!
Just found what is probably a wrong link from a copy-paste. 👍🏻
Co-authored-by: Dipesh Rawat <[email protected]>
Co-authored-by: Graziano Casto <[email protected]>
Thanks for the review @dipesh-rawat @graz-dev! I have applied all the suggestions so far 👍
Some grammatical nits. Looks great otherwise!
> ### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))
>
> When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
Suggested change:
- When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
+ When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to a Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating a Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
I find the singularity/plurality of "a Pod's container(s)" quite confusing (and the original wording is already more complex than I like). What do you think about updating this to something like the following instead?
Suggested change:
- When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
+ When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating container resources allocated to the Pod. Currently, As PodSpec’s Container Resources are immutable, updating any of the Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
Try not to write "PodSpec"; we prefer `spec` in backticks, separate from Pod in UpperCamelCase. PodSpec is mostly something you see either as part of the OpenAPI document or in the source code. People operating Kubernetes see `spec` and `Pod` within manifests and often wouldn't see `PodSpec` at all.
@rytswd I added some comments to improve the readability also for unfamiliar readers.
Then I suggested some fixes to stay on track with the other "sneak peeks" published for previous releases (see: https://kubernetes.io/blog/2024/11/08/kubernetes-1-32-upcoming-changes/)
> ## A note about User namespace for Pods to be Enabled by Default
>
> One of the oldest open KEPs today is [KEP-127](https://kep.k8s.io/127), Pod security improvement by using Linux [User namespaces](/docs/concepts/workloads/pods/user-namespaces/) for Pods. This KEP was first opened in late 2016, and after multiple iterations, had its alpha release in v1.25, initial beta in v1.30 (where it was disabled by default), and now is set to be a part of v1.33, where it is available by default.
Here an unfamiliar reader may not know what KEPs are, so maybe adding a link to https://www.kubernetes.dev/resources/keps/ could help them understand what you are talking about.
Or you can move the "The Kubernetes API Removal and Deprecation process" and "A note about User namespace for Pods to be Enabled by Default" paragraphs under "Deprecations and removals for Kubernetes v1.33" as an introduction. This, in my opinion, could make the article even more readable.
Thank you for the input! Let me rework the order here -- I agree that it would read better when it talks about the deprecation process, and then some details about the upcoming deprecation(s).
I would probably rephrase the section "A note about ..." to be a bit more engaging to highlight one takeaway from the release as well.
I have updated this with 8023e57..4cd6cb6 -- so the highlight (i.e. the "A note about..." section) is now titled "Editors' choice". The rest of the sneak peek content has the heading "Other sneak peek of Kubernetes v1.33". The suffix "of Kubernetes v1.33" may be redundant, and I might consider dropping it (as you mentioned in the other comment).
> The [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) API has been stable since v1.21, which effectively replaced the original Endpoints API. The original Endpoints API was simple and straightforward, but also posed some challenges when scaling to large numbers of network endpoints. There have been new Service features only added to EndpointSlices API such as dual-stack networking, making the original Endpoints API ready for deprecation.
>
> This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).
Are you planning to have the "Endpoints formally deprecated in favor of EndpointSlices" post published before this piece? If not, remove the reference to a TBC blog post.
If "Endpoints formally deprecated in favor of EndpointSlices" will be published before this one, I think the best option is:
Suggested change:
- This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).
+ This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a [dedicated blog post](TBC).
Yes, the idea is to have a dedicated blog post before this one goes out. But that one is still in draft, and it may be tight to get it released before the mid-cycle blog goes out. I'll keep it as is for now, but will update according to your suggestion later (this PR is already on hold).
> Following its deprecation in v1.31, as highlighted in the [release announcement](/blog/2024/07/19/kubernetes-1-31-upcoming-changes/#deprecation-of-status-nodeinfo-kubeproxyversion-field-for-nodes-kep-4004-https-github-com-kubernetes-enhancements-issues-4004), the `.status.nodeInfo.kubeProxyVersion` field will be removed in v1.33. This field was set by kubelet, but its value was not consistently accurate. As it has been disabled by default since v1.31, the v1.33 release will remove this field entirely.
>
> ### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
Suggested change:
- ### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
+ ### Host network support for Windows pods

Add the reference to the KEP in the paragraph instead of in the title.
Suggested change:
- ### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
+ ### Removal of host network support for Windows pods
> The following list of enhancements is likely to be included in the upcoming v1.33 release. This is not a commitment and the release content is subject to change.
>
> ### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))
Suggested change:
- ### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))
+ ### In-Place vertical Pod scalability with mutable PodSpec for resources

Add the reference to the KEP in the paragraph instead of in the title.
Suggested change:
- ### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))
+ ### Improvements to in-place vertical scaling for Pods

> The [KEP-1287](https://kep.k8s.io/1287) is precisely to allow such in-place Pod updates. It opens up various possibilities of vertical scale-up for stateful processes without any downtime, seamless scale-down when the process has only little traffic, and even allocating larger resources during the startup and eventually lowering once the initial setup is complete. This has been released as alpha in v1.27, and is expected to land as beta in v1.33.
> ### DRA’s ResourceClaim Device Status graduates to beta ([KEP-4817](https://kep.k8s.io/4817))
Suggested change:
- ### DRA’s ResourceClaim Device Status graduates to beta ([KEP-4817](https://kep.k8s.io/4817))
+ ### DRA’s ResourceClaim Device Status graduates to beta

Add the reference to the KEP in the paragraph instead of in the title.
Watch out for implying that the graduation is definitely going to happen. We don't make promises in the mid-cycle blog unless SIG Architecture confirms the promise has been made.
> You can find more information in [Dynamic Resource Allocation: ResourceClaim Device Status](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaim-device-status).
>
> ### Ordered Namespace Deletion ([KEP-5080](https://kep.k8s.io/5080))
Suggested change:
- ### Ordered Namespace Deletion ([KEP-5080](https://kep.k8s.io/5080))
+ ### Ordered Namespace Deletion

Add the reference to the KEP in the paragraph instead of in the title.

> This KEP introduces a more structured deletion process for Kubernetes namespace to ensure secure and deterministic resource removal. The current semi-random deletion order can create security gaps or unintended behaviour, such as Pods persisting after their associated NetworkPolicies are deleted. By enforcing a structured deletion sequence that respects logical and security dependencies, this approach ensures Pods are removed before other resources. The design improves Kubernetes’s security and reliability by mitigating risks associated with non-deterministic deletions.
> ### Enhancements to Kubernetes Job Management and Persistent Volume Policies ([KEP-3850](https://kep.k8s.io/3850), [KEP-3998](https://kep.k8s.io/3998), [KEP-2644](https://kep.k8s.io/2644))
Suggested change:
- ### Enhancements to Kubernetes Job Management and Persistent Volume Policies ([KEP-3850](https://kep.k8s.io/3850), [KEP-3998](https://kep.k8s.io/3998), [KEP-2644](https://kep.k8s.io/2644))
+ ### Enhancements to Kubernetes Job Management and Persistent Volume Policies

Add the reference to the KEP in the paragraph instead of in the title.
> The simplest way to get involved with Kubernetes is by joining one of the many [Special Interest Groups](https://github.com/kubernetes/community/blob/master/sig-list.md) (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/tree/master/communication), and through the channels below. Thank you for your continued feedback and support.
>
> - Follow us on Bluesky [@Kubernetesio](https://bsky.app/profile/kubernetes.io) for the latest updates
Is it @kubernetes.io or @kubernetesio? https://bsky.app/profile/kubernetes.io
I'm not using Bluesky much, but I guess it should be @kubernetes.io as you say 👍
Suggested change:
- - Follow us on Bluesky [@Kubernetesio](https://bsky.app/profile/kubernetes.io) for the latest updates
+ - Follow us on Bluesky [@kubernetes.io](https://bsky.app/profile/kubernetes.io) for the latest updates
Co-authored-by: Graziano Casto <[email protected]>
Co-authored-by: Nina Polshakova <[email protected]>
Co-authored-by: Kat Cosgrove <[email protected]>
Took most of the suggestions, but a few things left as is for now
> ## Deprecations and removals for Kubernetes v1.33
>
> ### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
I'm a bit torn about this one -- it is true that you can check the paragraph, and that's the point of having these blogs: making it easier for readers of any level to understand upcoming changes. But for those readers with technical understanding, it would be useful to check out the KEPs to find more.
This is my personal take, but I think KEPs are such a great asset the Kubernetes community has, and I want to make them as accessible as possible. I could take this out of the title, and perhaps put it at the bottom of each section, saying something like "If you want to find out more about this, read this KEP" -- what do you think?
> Windows Pod networking aimed to achieve feature parity with Linux and provide better cluster density by allowing containers to use Node’s networking namespace. The original implementation landed as alpha with v1.26, but as it faced unexpected containerd behaviours, and alternative solutions were available, it has been decided that the KEP will be withdrawn and the code removed in v1.33.
>
> ## Sneak peek of Kubernetes v1.33
I think I'd keep this as "sneak peek" because, at this point, we don't yet know if these changes will actually land in v1.33.
"Upcoming changes" may be a good one, but I'm wondering if it loses a bit of the fun?
I think I incorporated all of the suggestions so far, or left a comment to discuss further. Please feel free to add more comments / suggestions as you find more!
> * [Kubernetes v1.30](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md)
> * [Kubernetes v1.29](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md)
Given the most recent three minor releases are the only ones supported, should I drop v1.29 mention here?
Looks great team!
One thing I'd add is the expected release date for v1.33 but that's it :)
Co-authored-by: Grace Nguyen <[email protected]>
Looks good to me! Great job everyone!
It's looking great, just suggesting some small grammatical nits.
> ## The Kubernetes API removal and deprecation process
>
> The Kubernetes project has a well-documented [deprecation policy](/docs/reference/using-api/deprecation-policy/) for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API has been marked for removal in a future Kubernetes release, it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.
Suggested change:
- The Kubernetes project has a well-documented [deprecation policy](/docs/reference/using-api/deprecation-policy/) for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API has been marked for removal in a future Kubernetes release, it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.
+ The Kubernetes project has a well-documented [deprecation policy](/docs/reference/using-api/deprecation-policy/) for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API has been marked for removal in a future Kubernetes release. It will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.
> * Beta or pre-release API versions must be supported for 3 releases after the deprecation.
>
> * Alpha or experimental API versions may be removed in any release without prior deprecation notice; this process can become a withdrawal in cases where a different implementation for the same feature is already in place.
Can we simplify this line? Does it mean that if a different implementation for the same feature already exists, then alpha or experimental API versions may be removed in any release?
> ### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
>
> The [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) API has been stable since v1.21, which effectively replaced the original Endpoints API. The original Endpoints API was simple and straightforward, but also posed some challenges when scaling to large numbers of network endpoints. There have been new Service features only added to EndpointSlices API such as dual-stack networking, making the original Endpoints API ready for deprecation.
Suggested change:
- The [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) API has been stable since v1.21, which effectively replaced the original Endpoints API. The original Endpoints API was simple and straightforward, but also posed some challenges when scaling to large numbers of network endpoints. There have been new Service features only added to EndpointSlices API such as dual-stack networking, making the original Endpoints API ready for deprecation.
+ The [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) API has been stable since v1.21, which effectively replaced the original Endpoints API. While the original Endpoints API was simple and straightforward, it also posed some challenges when scaling to large numbers of network endpoints. The EndpointSlices API has introduced new features such as dual-stack networking, making the original Endpoints API ready for deprecation.
small nits
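For readers migrating, the switch is from reading a v1 Endpoints object to listing EndpointSlices in the discovery.k8s.io group. A hedged sketch (the `kubernetes.io/service-name` label is the real selector; the object and Service names are hypothetical):

```yaml
# Instead of reading the Endpoints object named after the Service, list the
# slices the control plane generates for it, e.g.:
#   kubectl get endpointslices -l kubernetes.io/service-name=my-service
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-abc12              # hypothetical; controllers generate this name
  labels:
    kubernetes.io/service-name: my-service
addressType: IPv4
endpoints:
  - addresses:
      - "10.0.1.5"
    conditions:
      ready: true
ports:
  - name: http
    port: 8080
    protocol: TCP
```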
> ## Editors' choice: User namespace for Pods to be enabled by default
>
> One of the oldest open KEPs today is [KEP-127](https://kep.k8s.io/127), Pod security improvement by using Linux [User namespaces](/docs/concepts/workloads/pods/user-namespaces/) for Pods. This KEP was first opened in late 2016, and after multiple iterations, had its alpha release in v1.25, initial beta in v1.30 (where it was disabled by default), and now is set to be a part of v1.33, where it is available by default.
Suggested change:
- One of the oldest open KEPs today is [KEP-127](https://kep.k8s.io/127), Pod security improvement by using Linux [User namespaces](/docs/concepts/workloads/pods/user-namespaces/) for Pods. This KEP was first opened in late 2016, and after multiple iterations, had its alpha release in v1.25, initial beta in v1.30 (where it was disabled by default), and now is set to be a part of v1.33, where it is available by default.
+ One of the oldest open KEPs today is [KEP-127](https://kep.k8s.io/127), Pod security improvement by using Linux [User namespaces](/docs/concepts/workloads/pods/user-namespaces/) for Pods. This KEP was first opened in late 2016, and after multiple iterations, had its alpha release in v1.25, initial beta in v1.30 (where it was disabled by default), and now is set to be a part of v1.33, where the feature is available by default.
Let's not sound like the feature will 100% be enabled by default. That's very likely but not a commitment we're announcing.
> ### In-Place vertical Pod scalability with mutable PodSpec for resources ([KEP-1287](https://kep.k8s.io/1287))
>
> When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
Suggested change:
- When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, As PodSpec’s Container Resources are immutable, updating Pod’s container resources results in Pod restarts. But what if we could dynamically update the resource configuration for Pods without restarting them?
+ When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pod’s container(s). Currently, since PodSpec’s container resources are immutable, any update to Pod’s container resources results in Pod restarting. But what if we could dynamically update the resource configuration for Pods without restarting them?
> The [KEP-1287](https://kep.k8s.io/1287) is precisely to allow such in-place Pod updates. It opens up various possibilities of vertical scale-up for stateful processes without any downtime, seamless scale-down when the process has only a little traffic, and even allocating larger resources during startup and eventually lowering once the initial setup is complete. This was released as alpha in v1.27, and is expected to land as beta in v1.33.
Suggested change:
- The [KEP-1287](https://kep.k8s.io/1287) is precisely to allow such in-place Pod updates. It opens up various possibilities of vertical scale-up for stateful processes without any downtime, seamless scale-down when the process has only a little traffic, and even allocating larger resources during startup and eventually lowering once the initial setup is complete. This was released as alpha in v1.27, and is expected to land as beta in v1.33.
+ The [KEP-1287](https://kep.k8s.io/1287) is precisely to allow such in-place Pod updates. It opens up various possibilities of vertical scale-up for stateful processes without any downtime, seamless scale-down when the traffic is low, and even allocating larger resources during startup that is eventually reduced once the initial setup is complete. This was released as alpha in v1.27, and is expected to land as beta in v1.33.
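To make "in-place" concrete for readers, here is a minimal sketch, assuming the KEP-1287 API as it has looked since the alpha (`resizePolicy` is the per-resource opt-in; the Pod name and patch values are hypothetical):

```yaml
# Pod whose CPU request can be resized without restarting the container.
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo                    # hypothetical name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resizePolicy:                    # KEP-1287: restart behaviour per resource
        - resourceName: cpu
          restartPolicy: NotRequired   # resize CPU in place, no restart
      resources:
        requests:
          cpu: 500m
# A later in-place resize could then be applied with something like:
#   kubectl patch pod resize-demo --subresource resize --type merge \
#     -p '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"1"}}}]}}'
# (the resize subresource reflects the v1.33-era shape; earlier alphas patched spec directly)
```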
> ### Enhancements to Kubernetes Job Management and Persistent Volume Policies ([KEP-3850](https://kep.k8s.io/3850), [KEP-3998](https://kep.k8s.io/3998), [KEP-2644](https://kep.k8s.io/2644))
>
> These three KEPs are all graduating to GA to provide better reliability for job and storage handling. [KEP-3850](https://kep.k8s.io/3850) provides per-index backoff limits for indexed jobs, while [KEP-3998](https://kep.k8s.io/3998) extends Job API to define conditions for making an indexed job as successfully completed when not all indexes are succeeded. For storage, [KEP-2644](https://kep.k8s.io/2644) ensures the deletion order of PV-PVC pairs to provide reliable resource cleanup in external storage infrastructure.
Suggested change:
- These three KEPs are all graduating to GA to provide better reliability for job and storage handling. [KEP-3850](https://kep.k8s.io/3850) provides per-index backoff limits for indexed jobs, while [KEP-3998](https://kep.k8s.io/3998) extends Job API to define conditions for making an indexed job as successfully completed when not all indexes are succeeded. For storage, [KEP-2644](https://kep.k8s.io/2644) ensures the deletion order of PV-PVC pairs to provide reliable resource cleanup in external storage infrastructure.
+ These three KEPs are all graduating to GA to provide better reliability for job and storage handling. [KEP-3850](https://kep.k8s.io/3850) provides per-index backoff limits for indexed jobs, while [KEP-3998](https://kep.k8s.io/3998) extends Job API to define conditions for making an indexed job as successfully completed when not all indexes are succeeded. For storage, [KEP-2644](https://kep.k8s.io/2644) establishes the deletion order for PV-PVC pairs to ensure reliable resource cleanup in external storage infrastructure.
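As a concrete illustration of the two Job-side KEPs, a hedged sketch of an Indexed Job combining both fields (field names follow the batch/v1 Job API; the name and values are arbitrary):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo                  # hypothetical name
spec:
  completions: 5
  parallelism: 5
  completionMode: Indexed
  backoffLimitPerIndex: 2             # KEP-3850: retry budget counted per index
  maxFailedIndexes: 1                 # give up once more than one index has failed
  successPolicy:                      # KEP-3998: declare the Job successful early
    rules:
      - succeededIndexes: "0-2"       # done once indexes 0..2 have all succeeded
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c", "echo index $JOB_COMPLETION_INDEX"]
```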
Thanks. Here's some further feedback on behalf of the blog team.
I've added a small number of corrective comments on the existing review from @graz-dev but overall please do pay attention to Graziano's feedback - it looks appropriate and relevant.
> @@ -0,0 +1,103 @@
> ---
> layout: blog
If you put this:

Suggested change:
  layout: blog
+ draft: true
then we can merge it early and then make a small PR to confirm the publication date.
> ## Deprecations and removals for Kubernetes v1.33
>
> ### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
Try this:

Suggested change:
- ### Deprecate v1.Endpoints ([KEP-4974](https://kep.k8s.io/4974))
+ ## Deprecation of the stable Endpoints API

This is not really an enhancement, unlike some of the other things we're giving a sneak peek into.
> This deprecation only impacts only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. You can find more about the deprecation implications and migration plans in a dedicated blog post [Endpoints formally deprecated in favor of EndpointSlices](TBC).
>
> ### Deprecate `status.nodeInfo.kubeProxyVersion` field ([KEP-4004](https://kep.k8s.io/4004))
I agree with @graz-dev - omit the hyperlink here. Links within headings don't work well, although we do sometimes use them. I'd write:

Suggested change:
- ### Deprecate `status.nodeInfo.kubeProxyVersion` field ([KEP-4004](https://kep.k8s.io/4004))
+ ### Removal of kube-proxy version information in node status

I'm afraid to point it out, but the existing heading is almost misleading: people might think we're deprecating the field, not removing a deprecated field.
> Following its deprecation in v1.31, as highlighted in the [release announcement](/blog/2024/07/19/kubernetes-1-31-upcoming-changes/#deprecation-of-status-nodeinfo-kubeproxyversion-field-for-nodes-kep-4004-https-github-com-kubernetes-enhancements-issues-4004), the `.status.nodeInfo.kubeProxyVersion` field will be removed in v1.33. This field was set by kubelet, but its value was not consistently accurate. As it has been disabled by default since v1.31, the v1.33 release will remove this field entirely.
>
> ### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
Suggested change:
- ### Host network support for Windows pods ([KEP-3503](https://kep.k8s.io/3503))
+ ### Removal of host network support for Windows pods
> Windows Pod networking aimed to achieve feature parity with Linux and provide better cluster density by allowing containers to use the Node’s networking namespace. The original implementation landed as alpha with v1.26, but as it faced unexpected containerd behaviours, and alternative solutions were available, it has been decided that the KEP will be withdrawn and the code removed in v1.33.
>
> ## Editors' choice: User namespace for Pods to be enabled by default
I don't think readers will quickly understand the meaning of this heading. Try something like:

Suggested change:
- ## Editors' choice: User namespace for Pods to be enabled by default
+ ## Featured improvement: support for user namespaces within Linux Pods becomes enabled by default
+
+ As authors of this article, we picked this as the most significant change to call out.

or

Suggested change:
- ## Editors' choice: User namespace for Pods to be enabled by default
+ ## Featured improvement: support for user namespaces within Linux Pods
+
+ As authors of this article, we picked this as the most significant change to call out.
> ### DRA’s ResourceClaim Device Status graduates to beta ([KEP-4817](https://kep.k8s.io/4817))
>
> The `Devices` field in `ResourceClaim.Status`, introduced in v1.32, graduates to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.
Suggested change:
- The `Devices` field in `ResourceClaim.Status`, introduced in v1.32, graduates to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.
+ The `devices` field within ResourceClaim `status`, originally introduced in the v1.32 release, is likely to graduate to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.
> For example, reporting the interface name, MAC address, and IP addresses of network interfaces in the status of a `ResourceClaim` can significantly help in configuring and managing network services, as well as in debugging network related issues.
Suggested change:
- For example, reporting the interface name, MAC address, and IP addresses of network interfaces in the status of a `ResourceClaim` can significantly help in configuring and managing network services, as well as in debugging network related issues.
+ For example, reporting the interface name, MAC address, and IP addresses of network interfaces in the status of a ResourceClaim can
+ significantly help in configuring and managing network services, as well as in debugging network related issues.

For official announcements I recommend aligning with the style guide pretty closely.
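For what the networking example above might look like on the wire, a hedged sketch of driver-reported status (the `status.devices` shape follows KEP-4817; the driver, pool, and device names are hypothetical, and the exact API version may differ by release):

```yaml
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: nic-claim                     # hypothetical name
status:
  devices:                            # written by the DRA driver, not by users
    - driver: net.example.com         # hypothetical driver name
      pool: pool-a
      device: nic-1
      networkData:                    # the observability payload discussed above
        interfaceName: eth1
        ips:
          - 10.1.2.3/24
        hardwareAddress: aa:bb:cc:dd:ee:ff
```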
> This support will not impact existing Pods unless you manually specify `pod.spec.hostUsers` to opt in. As highlighted in the [v1.30 sneak peek blog](/blog/2024/03/12/kubernetes-1-30-upcoming-changes/), this is an important milestone for mitigating vulnerabilities.
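For readers wondering what the opt-in looks like, a minimal sketch (the `hostUsers` field is the one named above; everything else is arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo                   # hypothetical name
spec:
  hostUsers: false                    # run this Pod in its own Linux user namespace
  containers:
    - name: shell
      image: debian:stable-slim
      command: ["sleep", "infinity"]
```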
> ## Other sneak peek of Kubernetes v1.33
How about:

Suggested change:
- ## Other sneak peek of Kubernetes v1.33
+ ## Selected other Kubernetes v1.33 improvements
If you can wrap the Markdown source, that'll help localization teams.
Add 2025-03-24-kubernetes-1.33-sneak-peek.md
Preview link: https://deploy-preview-50111--kubernetes-io-main-staging.netlify.app/blog/2025/03/24/kubernetes-v1-33-upcoming-changes/