Resource Quota Per VolumeAttributesClass #50082
base: dev-1.33
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Force-pushed 708ff9a to 55ef80c.
Force-pushed 8338b08 to 69060d9.
/assign
/hold cancel
```yaml
  name: pvcs-silver
spec:
  hard:
    requests.storage: "20Gi"
```
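(For context, a minimal reconstruction of the manifest this fragment is quoted from; everything outside the quoted lines, including the scopeSelector, is an assumption based on the discussion below:)

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pvcs-silver
spec:
  hard:
    requests.storage: "20Gi"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: VolumeAttributesClass
      values: ["silver"]
```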
But this quota handling doesn't do anything to prevent creation of more than 20Gi volumes that use VAC silver. Why would we mention it? I am confused.
It is an arbitrary value with no special meaning. The quota is used in the example above. Do you mean it would be better to use the same hard setting for these quota objects? I'm okay with changing it if you think that's better.
When you change the desired VAC and the capacity of an existing PVC, the update is rejected if the new capacity exceeds the quota. I didn't show this case in the example above; the example is meant to show users which quotas change (and which don't) when a PVC is updated.
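A minimal sketch of that rejection, assuming a bound PVC named `test-pvc` and a hypothetical 100Gi request that exceeds the silver-scoped quota:

```shell
# Hypothetical: switch the PVC to the "silver" VAC and grow it in one update.
# If 100Gi exceeds the remaining requests.storage in the silver-scoped quota,
# the API server rejects the patch with an "exceeded quota" error.
kubectl patch pvc test-pvc --type merge -p \
  '{"spec":{"volumeAttributesClassName":"silver","resources":{"requests":{"storage":"100Gi"}}}}'
```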
So, this looks kind of confusing. Look at the scope selector being used in https://kubernetes.io/docs/concepts/policy/resource-quotas/#resource-quota-per-priorityclass, for example:
```yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-high
  spec:
    hard:
      cpu: "1000"
      memory: "200Gi"
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["high"]
```
To me, this reads like: this quota will apply to all mentioned fields (`cpu`, `memory`, and count) of pods in the "high" priority class, not just cpu or memory.
I understand that we chose not to apply quota based on capacity just yet for VACs, but if, within the same scope selector, capacity is applied differently from count, then that smells like a user experience issue (and a bad one at that). cc @msau42 @deads2k @sunnylovestiramisu @xing-yang
The question here is: maybe we should block configuring capacity and VAC together in one ResourceQuota? VAC works with the scope, but the existing capacity quota does not work with the VAC scope?
So, I was wrong, and maybe something was lost in communication, but quota as implemented does work for both counting and capacity, and is scoped to the specified `scopeSelector`.
For example, given this ResourceQuota:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: silver-pvcs
  namespace: vim1
spec:
  hard:
    count/persistentvolumeclaims: "3"
    requests.storage: 15Gi
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: VolumeAttributesClass
      values:
      - silver
status:
  hard:
    count/persistentvolumeclaims: "3"
    requests.storage: 15Gi
  used:
    count/persistentvolumeclaims: "2"
    requests.storage: 13Gi
```
I could confirm that, even though there is "count" capacity available, if I try to create a PVC that exceeds the remaining 2Gi, then I get quota-related errors:
```shell
$ cat csi-pvc.yaml | sed 's/csi-pvc/csi-pvc-4/g' | sed 's/1Gi/4Gi/g' | kubectl create -f -
Error from server (Forbidden): error when creating "STDIN": persistentvolumeclaims "csi-pvc-4-silver" is forbidden: exceeded quota: silver-pvcs, requested: requests.storage=4Gi, used: requests.storage=13Gi, limited: requests.storage=15Gi
```
I can, however, create PVCs larger than the remaining capacity as long as I am not using the specified VAC. So this is working as expected.
So tl;dr: quota handling works for both counting and capacity.
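A sketch of that last point; the PVC name and storage class are hypothetical, and 20Gi deliberately exceeds the 2Gi remaining in the scoped quota:

```yaml
# Hypothetical PVC with no volumeAttributesClassName: it does not match the
# VolumeAttributesClass-scoped quota above, so the 20Gi request is admitted
# even though the scoped quota only has 2Gi of requests.storage left.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-unscoped
  namespace: vim1
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
  storageClassName: csi-sc
```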
What happens if we specify Pod Scope and VolumeAttributesClass Scope in one ResourceQuota? Are they both respected?
They don't share any standard resource name, so creating such a quota object should fail. I cannot foresee whether they will share the same resource name in the future (they should not, in my mind). In the quota PR, I didn't add validation for the case where a quota object is created without any resource name but with both of the above scopes. Should I enhance the validation for this case? cc @deads2k
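For illustration, a sketch of the combination being discussed (the object name is hypothetical):

```yaml
# Hypothetical ResourceQuota mixing a pod-oriented scope with the VAC scope.
# PriorityClass matches pods and VolumeAttributesClass matches PVCs; they share
# no standard resource name, so per the comment above, listing any resource
# under spec.hard should fail validation, while this empty-hard shape is the
# validation gap mentioned.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mixed-scopes
spec:
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]
    - operator: In
      scopeName: VolumeAttributesClass
      values: ["silver"]
```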
Why are we adding "." in the key `requests.storage`? This will break JSON path parsing.
@tengqm This pattern was introduced in kubernetes/kubernetes@55e3824 and kubernetes/kubernetes@09bac89. Related proposal was here: #19761
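For what it's worth, such keys remain addressable from kubectl's JSONPath by escaping the dot; a sketch against the `silver-pvcs` quota from the example above:

```shell
# The "\." keeps the dot inside the key from being parsed as a path separator.
kubectl get resourcequota silver-pvcs -n vim1 \
  -o jsonpath='{.status.used.requests\.storage}'
```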
Force-pushed 69060d9 to 56cd64b.
lines 575 & 612:
remove backticks enclosing kubectl commands
@rburrs It follows the same style as https://kubernetes.io/docs/concepts/policy/resource-quotas/#resource-quota-per-priorityclass
```
requests.storage        0     30Gi
```

Let's change it to "copper" with `kubectl patch`.
remove backticks enclosing kubectl command
removed
```
requests.storage        0     30Gi
```

Once the PVC is bound, it is allowed to modify the desired volume attributes class. Let's change it to "silver" with `kubectl patch`.
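A sketch of that patch, assuming a bound PVC named `test-pvc`:

```shell
# Hypothetical: set the PVC's desired VolumeAttributesClass to "silver".
kubectl patch pvc test-pvc --type merge -p \
  '{"spec":{"volumeAttributesClassName":"silver"}}'
```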
remove backticks enclosing kubectl command
removed
Force-pushed 57fb863 to f26e212.
k/k: kubernetes/kubernetes#124360
/hold