Pods on control plane node enter "CreateContainerConfigError" on upgrading to k8s 1.31.x #3575
Labels
kind/bug
sig/cluster-management
What happened?
While upgrading the cluster to k8s 1.31.x, many pods on the control plane node started showing
CreateContainerConfigError
as their status (CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars). This appears to be the same problem as upstream issue 127316. It surfaces when the kubelet connects to an API server running a lower minor version than the kubelet itself, violating the version skew policy: k8s 1.31 introduced new field selectors that older API servers do not support, which causes the error.
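One way to confirm the skew while the upgrade is in flight is to compare the kubelet version reported per node against the API server version answering through the control plane endpoint; a sketch using standard kubectl commands (output will vary per cluster):

```bash
# Kubelet version per node (VERSION column); during a rolling upgrade
# the control plane nodes temporarily report mixed minor versions.
kubectl get nodes

# Server version of whichever API server replica answered this request.
# Repeated calls may hit different replicas behind the load balancer,
# which is how a 1.31 kubelet ends up talking to a 1.30 kube-apiserver.
kubectl version
```

Expected behavior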
Upgrade to k8s 1.31.x should complete without pods entering an error state on any of the control plane nodes.
How to reproduce the issue?
Try to upgrade to k8s 1.31.x from a previous minor k8s version using KubeOne 1.9.x; a reproduction sketch follows below. The issue will show up on one or more control plane nodes. It may not occur at all if, by chance, the kubelet always connects to an API server on the same or a later version than itself.
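A minimal reproduction sketch, assuming a KubeOneCluster manifest in the kubeone.k8c.io/v1beta2 API (the patch version and the rest of the manifest, such as the cloud provider block, are illustrative placeholders):

```yaml
apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
versions:
  # Bumping this from 1.30.x to 1.31.x on an existing cluster is enough
  # to open the window in which an already-upgraded kubelet can talk to
  # a not-yet-upgraded kube-apiserver behind the load balancer.
  kubernetes: "1.31.1"
```

Running kubeone apply --manifest kubeone.yaml against an existing 1.30.x cluster upgrades the control plane nodes one at a time, and the error can appear on any node whose kubelet is upgraded before all API server replicas are.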
What KubeOne version are you using?
v1.9
Additional information
On kubeadm, there's a feature gate named
ControlPlaneKubeletLocalMode
that ensures the kubelet always connects with the local api-server instead of the load-balanced control plane endpoint; maybe KubeOne can leverage that (configuration sketch below).
Internal reference - 7793
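For illustration, a minimal sketch of enabling that gate through kubeadm's ClusterConfiguration (kubeadm v1beta4 config assumed; KubeOne would need to wire this through its own kubeadm integration rather than leaving it to users):

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
featureGates:
  # Make kubelets on control plane nodes talk to their local
  # kube-apiserver instead of the load-balanced endpoint, so an
  # upgraded kubelet never reaches an older API server mid-upgrade.
  ControlPlaneKubeletLocalMode: true
```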