AEP-7571: Pod-level resources support in VPA #8586
iamzili wants to merge 22 commits into kubernetes:master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: iamzili. The full list of commands accepted by this bot can be found here. Approval is needed from an approver in each of the listed OWNERS files; approvers can indicate their approval by writing `/approve` in a comment.

Hi @iamzili. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
## Summary

Starting with Kubernetes version 1.34, it is possible to specify CPU and memory `resources` for Pods at the pod level, in addition to the existing container-level `resources` specifications. For example:
It may be worth linking the KEP here.

I'm linking the KEP and the official blog post a little further down: here.
This section describes how VPA reacts based on where resources are defined (pod level, container level, or both).

Before this KEP, the recommender computed recommendations only at the container level, and VPA applied changes only to container-level fields. With this proposal, the recommender also computes pod-level recommendations in addition to container-level ones. Pod-level recommendations are derived from per-container usage and recommendations, typically by aggregating container recommendations. Container-level policy still influences pod-level output: setting `mode: Off` in `spec.resourcePolicy.containerPolicies` excludes a container from recommendations, and `minAllowed`/`maxAllowed` bounds continue to apply.
Just want to sanity check this a little.

> typically by aggregating container recommendations

From what I can tell, the metric that metrics-server provides is per-container. So the idea is to leave the recommender as is, making per-container recommendations based on its per-container metrics, and let the updater/admission controller use an aggregated value for the Pod resources. Is my understanding here right?
Partially, since the recommender will calculate the pod-level recommendations (from your comment, it seems that the updater/admission controller would do that). My plan is to continue relying on the current approach for collecting and aggregating container-level metrics, as well as for generating per-container recommendations.

The difference introduced by this AEP is that if a pod-level resources stanza is defined at the workload API level, the recommender will also calculate pod-level recommendations, which are simply the sum of the container recommendations. The pod-level recommendations will be stored in the `status.recommendation.podRecommendation` stanza of the VPA object (new!).

The updater and the admission controller will read from `status.recommendation.podRecommendation` (and of course from `status.recommendation.containerRecommendations`) to perform their actions - the updater will evict pods or perform in-place container-level updates, while the admission controller will modify pod specs on the fly.
> Partially, since the recommender will calculate the pod-level recommendations (from your comment, it seems that the updater/admission controller would do that).

I was just making an assumption. If I'm hearing you right, you want the recommender to create the pod recommendation and store it in the VPA resource, which makes more sense than my assumption.

> The difference introduced by this AEP is that if a pod-level resources stanza is defined at the workload API level, the recommender will also calculate pod-level recommendations, which are simply the sum of the container recommendations. The pod-level recommendations will be stored in the `status.recommendation.podRecommendation` stanza of the VPA object (new!). The updater and the admission controller will read from `status.recommendation.podRecommendation` (and of course from `status.recommendation.containerRecommendations`) to perform their actions - the updater will evict pods or perform in-place container-level updates, while the admission controller will modify pod specs on the fly.

Makes sense!
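The aggregation described above ("simply the sum of the container recommendations") can be sketched in a few lines. This is an illustration only: the type and function names below (`Resources`, `sumContainerRecommendations`) are assumptions for the sketch, not the actual VPA code, and real recommendations use `resource.Quantity` rather than plain integers.

```go
package main

import "fmt"

// Resources holds a recommendation for one container; millicores and bytes
// are kept as plain int64 for brevity (the real code uses resource.Quantity).
type Resources struct {
	CPUMilli int64
	MemBytes int64
}

// sumContainerRecommendations aggregates per-container targets into a single
// pod-level target. Containers excluded via mode "Off" are assumed to have
// been filtered out before this step.
func sumContainerRecommendations(recs map[string]Resources) Resources {
	var pod Resources
	for _, r := range recs {
		pod.CPUMilli += r.CPUMilli
		pod.MemBytes += r.MemBytes
	}
	return pod
}

func main() {
	recs := map[string]Resources{
		"c1": {CPUMilli: 250, MemBytes: 256 << 20},
		"c2": {CPUMilli: 100, MemBytes: 128 << 20},
	}
	pod := sumContainerRecommendations(recs)
	fmt.Printf("pod-level target: %dm CPU, %d bytes memory\n", pod.CPUMilli, pod.MemBytes)
}
```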
- Extend the VPA object:
  1. Add a new `spec.resourcePolicy.podPolicies` stanza. This stanza is user-modifiable and allows setting constraints for pod-level recommendations:
     - `controlledResources`: Specifies which resource types are recommended (and possibly applied). Valid values are `cpu`, `memory`, or both. If not specified, both resource types are controlled by VPA.
     - `controlledValues`: Specifies which resource values are controlled. Valid values are `RequestsAndLimits` and `RequestsOnly`. The default is `RequestsAndLimits`.
     - `minAllowed`: Specifies the minimum resources that will be recommended for the Pod. The default is no minimum.
     - `maxAllowed`: Specifies the maximum resources that will be recommended for the Pod. The default is no maximum. To ensure per-container recommendations do not exceed the Pod's defined maximum, apply the formula proposed by @omerap12 to adjust the container recommendations (see [discussion](https://github.com/kubernetes/autoscaler/issues/7147#issuecomment-2515296024)). This field takes precedence over the global Pod maximum set by the new flags (see "Global Pod maximums").
  2. Add a new `status.recommendation.podRecommendation` stanza. This field is not user-modifiable; it is populated by the VPA recommender and stores the Pod-level recommendations. The updater and admission controller use this stanza to read Pod-level recommendations. The updater may evict Pods to apply the recommendation; the admission controller applies the recommendation when the Pod is recreated.
Would it be possible to have an example Go Type here?
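In the spirit of that request, here is a rough sketch of how the new stanzas might look as Go types, modeled on the existing `ContainerResourcePolicy` and `RecommendedContainerResources` shapes. All names and field sets are illustrative assumptions, not the final API; `ResourceList` below is a simplified stand-in for `k8s.io/api/core/v1.ResourceList`.

```go
package main

import "fmt"

// ResourceList is a simplified stand-in for corev1.ResourceList
// (resource name -> quantity string).
type ResourceList map[string]string

// PodResourcePolicy sketches spec.resourcePolicy.podPolicies.
type PodResourcePolicy struct {
	ControlledResources []string     // "cpu", "memory", or both; both if unset
	ControlledValues    string       // "RequestsAndLimits" (default) or "RequestsOnly"
	MinAllowed          ResourceList // no minimum if empty
	MaxAllowed          ResourceList // no maximum if empty
}

// RecommendedPodResources sketches status.recommendation.podRecommendation,
// populated by the recommender and read by the updater/admission controller.
type RecommendedPodResources struct {
	Target     ResourceList
	LowerBound ResourceList
	UpperBound ResourceList
}

func main() {
	p := PodResourcePolicy{
		ControlledResources: []string{"cpu", "memory"},
		ControlledValues:    "RequestsOnly",
		MaxAllowed:          ResourceList{"cpu": "2", "memory": "4Gi"},
	}
	fmt.Printf("podPolicies sketch: %+v\n", p)
}
```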
## Proposal

- Add a new feature flag named `PodLevelResources`. Because this proposal introduces new code paths across all three VPA components, this flag will be added to each component.
Is this a feature flag to assist with GAing the feature, or is it a flag to enable/disable the feature?

My intention is to use the flag to enable or disable the feature. In other words, the feature should be disabled by default at first, and once the feature matures, it can be enabled by default starting from a specific VPA version. Could you please clarify what you mean by using the flag for GAing the feature?

The normal pattern for Kubernetes is to use a feature gate to introduce a new feature. Normally it works like this across many releases:

- First release - add a feature gate as alpha - defaulted to off
- Second release - promote to beta - defaulted to on
- Third release - promote to GA - locked to on
- A few releases later (3, I think) - remove the feature gate logic completely

This is mostly for the Kubernetes components to handle roll forward/back gracefully. I think the main thing it protects against is this: if a user starts using the feature in beta mode and then rolls back one release, the feature would continue to work (i.e. the APIs would remain valid) since the logic exists in the alpha mode.

Thanks for the explanation - I appreciate it! Based on your comment, the feature flags (there will be a new one for each component) will serve both purposes, i.e. GAing and enabling/disabling the feature.

Right, the point of feature gates in Kubernetes is to eventually remove them. Enabling/disabling the feature should be driven by the API.
For workloads that define only pod-level resources, VPA will control resources at the pod level. At the time of writing, [in-place pod-level resource resizing](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/5419-pod-level-resources-in-place-resize) is not available for pod-level fields, so applying pod-level recommendations requires evicting Pods.

When [in-place pod-level resource resizing](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/5419-pod-level-resources-in-place-resize) becomes available, VPA should attempt to apply pod-level recommendations in place first and fall back to eviction if in-place updates fail, mirroring the current `InPlaceOrRecreate` behavior used for container-level updates.
Because this AEP has a dependency on the functionality described in the [in-place pod-level resource resizing KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/5419-pod-level-resources-in-place-resize), can we restate the language as if that KEP were already implemented, and then add a note that we won't approve this AEP until post-1.35 (when in-place resizing of pod-level resources has been implemented)?

I thought this AEP isn't dependent on that feature; it's calling out that we can't do in-place resizing until that KEP is ready.

Let's remove this section; there is no connection between the current AEP and the in-place feature. This AEP should focus on pod-level resources only.

As @omerap12 suggested, I removed the parts mentioning the In-Place Pod-Level Resources Resize and kept only a note stating that we should leverage it once it becomes available. Feel free to resolve the conversation if applicable.
omerap12 left a comment:

Really, thanks for the hard work here, Erik! In my opinion, we should choose option 2 as the default (control both pod-level and initially set container-level resources). I left a couple of notes throughout the proposal. Can we please remove the in-place feature from this AEP? This AEP should focus only on pod-level resources, so cons like "Applying both pod-level and container-level recommendations requires eviction because in-place resizing is not yet available" are redundant.
- `controlledResources`: Specifies which resource types are recommended (and possibly applied). Valid values are `cpu`, `memory`, or both. If not specified, both resource types are controlled by VPA.
- `controlledValues`: Specifies which resource values are controlled. Valid values are `RequestsAndLimits` and `RequestsOnly`. The default is `RequestsAndLimits`.
- `minAllowed`: Specifies the minimum resources that will be recommended for the Pod. The default is no minimum.
- `maxAllowed`: Specifies the maximum resources that will be recommended for the Pod. The default is no maximum. To ensure per-container recommendations do not exceed the Pod's defined maximum, apply the formula proposed by @omerap12 to adjust the container recommendations (see [discussion](https://github.com/kubernetes/autoscaler/issues/7147#issuecomment-2515296024)). This field takes precedence over the global Pod maximum set by the new flags (see "Global Pod maximums").
Thanks for catching that! (I forgot I wrote that, TBH.) :)

My formula should be correct, but what happens if, after normalization of the container[i] resources, we get a value that is smaller/bigger than minAllowed/maxAllowed? I thought we could do something like this:

- If adjusted[i] < container.minAllowed[i]: set to minAllowed[i]
- If adjusted[i] > container.maxAllowed[i]: set to maxAllowed[i]

Then we need to re-check pod limits after the container policy adjustments (since the total might be bigger). If we are still exceeding pod limits - what do we want to do here? cc @adrianmoisey. Sorry if I wasn't clear enough. :)

An individual container limit can't be larger than the pod-level limit, but the aggregated container-level limits can exceed the pod-level limit - Ref.

So, when a new pod-level recommendation is calculated and the limit is set proportionally at the pod level, we also need to check the container-level limits. If a container-level limit is greater than the pod-level limit, it should be set to the same value as the pod-level limit, and the calculated container-level recommendation should be reduced proportionally as well, to maintain the original request-to-limit ratio (similar to how it works when a LimitRange API object is in place).
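The proportional adjustment being discussed can be sketched as follows: when the summed container recommendations exceed the pod-level `maxAllowed`, scale every container recommendation down by the same factor so the total fits. This is only an illustration of the idea from the linked discussion (function name and integer millicore arithmetic are assumptions); it deliberately omits the per-container `minAllowed`/`maxAllowed` re-check debated above.

```go
package main

import "fmt"

// capToPodMax scales per-container CPU recommendations (in millicores) so
// that their sum does not exceed podMax. Integer arithmetic, rounding down.
// If the sum is already within podMax, the input is returned unchanged.
func capToPodMax(recs map[string]int64, podMax int64) map[string]int64 {
	var total int64
	for _, r := range recs {
		total += r
	}
	if total <= podMax {
		return recs
	}
	adjusted := make(map[string]int64, len(recs))
	for name, r := range recs {
		// Each container keeps the same share of the (reduced) total.
		adjusted[name] = r * podMax / total
	}
	return adjusted
}

func main() {
	recs := map[string]int64{"c1": 600, "c2": 600} // total 1200m
	fmt.Println(capToPodMax(recs, 600))            // each container scaled to 300m
}
```

After this step, any value pushed back up by a container-level `minAllowed` would require re-checking the pod total, which is exactly the open question raised in the thread.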
### Test Plan

TODO
In order for this AEP to be merged, this has to be filled in (I know it's a WIP - just a reminder). :)

The missing test plan has been addressed.

I would also prefer option 2 (control both pod-level and initially set container-level resources). BTW, when do you think a decision will be made to go with this option? We will need to update the AEP to reflect the chosen approach. Once the decision is final, I also plan to add more details. Furthermore, why are you suggesting that the parts related to in-place resizing of pod-level resources should be removed from this AEP? Since this AEP focuses on the pod-level resources stanza, how it can be mutated (or not) seems relevant from the VPA's perspective.

I see your point - it's not completely independent, since once Kubernetes supports in-place updates for pod-level resources, VPA will likely extend that support as well (similar to what we already do for container-level in-place updates). But the main scope of this AEP is to define how we provide recommendations for pod-level resources. The actual application of those recommendations - whether in-place or through eviction - is more of an implementation detail and doesn't directly affect the design decisions in this proposal.
/release-note-none

@jackfrancis: you can only set the release-note-none label if the release-note block in the PR body text is empty or "none".

/ok-to-test

Pinging this again, since VPA support is a blocker for Pod Level Resources GA.

@omerap12: GitHub didn't allow me to request PR reviews from the following users: iamzili. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.

FYI - I have finished updating the AEP; it is ready for review. Thanks in advance!
omerap12 left a comment:

Thanks for working on this! These are my initial thoughts.
1. Extend the [GetUpdatePriority](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/updater/priority/priority_processor.go#L45) method to also evaluate pod-level recommendations. The updated method verifies whether pod-level recommendations fall outside the recommended range and calculates the pod-level `resourceDiff`. These checks occur only when container-level recommendations do not set `OutsideRecommendedRange` to true and the container-level `resourceDiff` remains below the threshold.
2. `[DOESN'T CHANGE]` When the updater adds a Pod to the `UpdatePriorityCalculator`, it marks the Pod for eviction or in-place update based on the VPA mode:
   1. `[DOESN'T CHANGE]` If the updater evicts the Pod, control passes to the admission controller, and the updater proceeds to the next Pod in the list.
   2. If the updater selects the Pod for an in-place update, it applies the pod-level and container-level recommendations directly to the running Pod using the in-place mechanism, based on the presence of resource requests in the Pod spec. The algorithm follows the approach proposed in the admission controller subsection - see [Patch Generation Algorithm](#patch-generation-algorithm). If the in-place update fails, the updater falls back to the eviction path.
So we are not changing anything here, right? To my understanding, this means we calculate the target recommendation for all containers in a pod and then check whether the total differs from the current pod resources (by `resourceDiff` and more). Maybe we can remove the long paragraph and just keep that one simple sentence?
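By analogy with the container-level calculation in the priority processor, a pod-level `resourceDiff` could be the relative difference between current pod-level requests and the pod-level recommendation, summed over resources. The sketch below is an assumption about the shape of that calculation, not the actual implementation (the real code works on `resource.Quantity` values).

```go
package main

import "fmt"

// podResourceDiff returns the sum, over resources, of the relative
// difference between the current pod-level request and the recommended
// value. Resources with a zero current request are skipped to avoid
// division by zero.
func podResourceDiff(current, recommended map[string]float64) float64 {
	var diff float64
	for res, cur := range current {
		if cur == 0 {
			continue
		}
		d := (recommended[res] - cur) / cur
		if d < 0 {
			d = -d
		}
		diff += d
	}
	return diff
}

func main() {
	cur := map[string]float64{"cpu": 200, "memory": 256}
	rec := map[string]float64{"cpu": 300, "memory": 256}
	// CPU changes by 50%, memory is unchanged.
	fmt.Printf("resourceDiff = %.2f\n", podResourceDiff(cur, rec)) // prints 0.50
}
```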
This behavior is also described in the [Pod-level Resource Spec KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2837-pod-level-resource-spec/README.md#admission-controller), which suggests placing Pods in namespaces without any Container LimitRange objects. Therefore, this AEP proposes that creation of Pods with Pod-level resources in namespaces containing Container LimitRange objects should be rejected. This validation is further detailed in the [Validation section](#dynamic-validation).

In summary, Pods with Pod-level resources should not be validated against Container LimitRange objects.
So we are saying we would not support that? (It's fine with me, as long as it is documented.)

Yes, at this stage (and this may change in the future if the LimitRanger admission controller updates its behavior for pods with a pod-level resources stanza) we should skip the Pod in both the updater and the admission controller when:

- the Pod template defines pod-level resources, and
- the Pod is deployed into a namespace that contains container-scoped LimitRange objects.

However, I don't agree with the current wording in the AEP, as we cannot simply reject Pod creation. Instead, I propose updating the AEP to state that in this situation the updater and the admission controller will skip managing the Pod and emit a log message explaining why (in both components), advising the user to move the Pod to a namespace without container-level LimitRange objects.
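The skip condition proposed above is essentially two checks. The sketch below illustrates it with simplified stand-in types (the names `LimitRangeItem`, `shouldSkipPod`, and the boolean flag are assumptions for illustration; the real implementation would inspect client-go `LimitRange` objects from the namespace).

```go
package main

import "fmt"

// LimitRangeItem is a simplified stand-in for corev1.LimitRangeItem;
// Type is "Container", "Pod", etc.
type LimitRangeItem struct{ Type string }

// shouldSkipPod reports whether the updater/admission controller should
// leave the pod alone: the pod defines pod-level resources AND its
// namespace contains a container-scoped LimitRange.
func shouldSkipPod(podHasPodLevelResources bool, limitRanges []LimitRangeItem) bool {
	if !podHasPodLevelResources {
		return false
	}
	for _, lr := range limitRanges {
		if lr.Type == "Container" {
			return true
		}
	}
	return false
}

func main() {
	lrs := []LimitRangeItem{{Type: "Container"}}
	if shouldSkipPod(true, lrs) {
		// The proposal is to log why the pod is skipped and advise the user.
		fmt.Println("skipping pod: pod-level resources defined in a namespace " +
			"with a Container-scoped LimitRange; move the pod to a namespace " +
			"without Container LimitRange objects")
	}
}
```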
- containerName: "c2"
  mode: "Off"
- containerName: "c3"
  mode: "Auto"
Since Auto is deprecated, I would like it not to appear in AEPs and such. Let's switch it to whatever replaces it.

This is not the VPA mode you are referring to; it is the mode defined under containerPolicies.

But I mention the Auto VPA mode under the Goals section.
The following example illustrates a more appropriate configuration. The user chooses not to calculate recommendations for the `sidecar` container. As a result, VPA excludes the `sidecar` container from pod-level recommendation calculations. VPA also omits container-level recommendations for this container from the VPA status stanza:

```yaml
# Target Pod has three containers (c1, c2, and sidecar).
# Valid VPA object
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: workload1
  namespace: default
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: workload1
  updatePolicy:
    updateMode: 'Recreate'
  resourcePolicy:
    containerPolicies:
    - containerName: "sidecar"
      mode: "Off"
```

Can we merge this example with the example above? I am afraid this document is becoming excessive.
`PodLevelResources` (i.e. the new VPA flag) is supported starting from Kubernetes v1.34, where the beta version of the [PodLevelResources](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#PodLevelResources) feature gate is enabled by default. In this version, all VPA modes are supported except `InPlaceOrRecreate`.

To use the `InPlaceOrRecreate` VPA mode, Kubernetes v1.35 or later is required, and the alpha feature gate [InPlacePodLevelResourcesVerticalScaling](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#InPlacePodLevelResourcesVerticalScaling) must be enabled (introduced in [In-Place Pod-Level Resources Resizing](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/5419-pod-level-resources-in-place-resize)). If this feature gate is disabled or the user runs an earlier Kubernetes version, in-place updates for pod-level resources will fail, and the updater will fall back to eviction.
Why would the updater not fall back to eviction? How does this differ from containers, where we fall back to eviction?

I'm not sure I fully understand the comment above:

- What I'm proposing is that at this line, if `err` is non-nil, the pod is evicted when `InPlaceOrRecreate` mode is used - in other words, "we fall back to eviction".
- What do you mean by "containers when we fall back to eviction"?
##### Option 1: VPA Controls Only Pod-Level Resources

With this option, VPA manages only the pod-level resources stanza. To follow this approach, the initially defined container-level resources for `ide` must be removed so that changes in usage are reflected only in pod-level recommendations.

**Pros**:
* VPA does not need to track which container-level resources were initially set.
* Enables shared headroom across containers in the same Pod. With container-only limits, a sidecar (`tool1` or `tool2`) or the main workload (the `ide` container) hitting its own CPU limit could get throttled even if other containers in the Pod have idle CPU. Pod-level resources allow a container experiencing a spike to access idle resources from others, optimizing overall utilization.

**Cons**:
* The most important container (`ide`) may be recreated without container-level resources, leading to an `oom_score_adj` that matches the other sidecars in the Pod; as a result, the OOM killer may target all containers more evenly under node memory pressure. For details on how `oom_score_adj` is computed when pod-level resources are present, see the [KEP section](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2837-pod-level-resource-spec/README.md#oom-score-adjustment) on OOM score adjustment.
Since this is not the selected option, can we just mention the option and briefly explain why it was not chosen?

The formal KEP process has an "Alternatives" section: https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template#alternatives
/cc @ndixita
    labelSetKey labelSetKey
    // Containers that belong to the Pod, keyed by the container name.
    Containers map[string]*ContainerState
    // Current Pod-level requests (!NEW)

Spacing here is a little weird.
- Sidecars, such as logging agents or mesh proxies (like `tool1` or `tool2`), that don't use container-level limits can borrow idle CPU from other containers in the pod when they experience a spike in usage. Pod-level resources allow a container experiencing a spike to access idle resources from others, optimizing overall utilization.

**Cons**:
- Existing VPA users may find the behavior surprising because VPA does not control all container-level resources stanzas - only those initially set.
Hmmm, interesting con. I believe this may go against the existing API, though: the current default is to control all containers, regardless of whether their resources are set or not.

Yes, but the current default behavior - controlling all container-level resources - does not make sense when pod-level resources are present. For example, this does not look correct, and it is not what users expect (what is the benefit of using pod-level resources here?). From this:

```yaml
kind: Pod
apiVersion: v1
metadata:
  namespace: default
  name: mypod
spec:
  resources:
    requests:
      cpu: 200m
      memory: "200Mi"
  containers:
  - name: c1
    image: registry.k8s.io/pause:3.1
  - name: c2
    image: registry.k8s.io/pause:3.1
```

this should not become:

```yaml
kind: Pod
apiVersion: v1
metadata:
  namespace: default
  name: mypod
spec:
  resources:
    requests:
      cpu: 200m
      memory: "200Mi"
  containers:
  - name: c1
    image: registry.k8s.io/pause:3.1
    resources:
      requests:
        cpu: 180m
        memory: "180Mi"
  - name: c2
    image: registry.k8s.io/pause:3.1
    resources:
      requests:
        cpu: 20m
        memory: "20Mi"
```

> Yes, but the current default behavior - controlling all container-level resources - does not make sense when pod-level resources are present.

Right, I agree that this is strange, but it's what the user asked for (using the existing defaults set for container resources and opting in to pod-level resources). I think either way, if the behaviour is surprising, we should try to change the API so that the behaviour isn't surprising anymore. What if we had a way to select whether you want container, pod, or both controlled? Maybe something living inside of `spec.updatePolicy`?
When a workload is created without any resources defined at either the pod or container level, there are two options:

##### [Selected] Option 1: VPA controls only the container-level resources
This seems to contradict the defaults set in the API:

> - Add a new `spec.resourcePolicy.podPolicies` stanza. This stanza is user-modifiable and allows setting constraints for pod-level recommendations:
>   - `controlledResources`: Specifies which resource types are recommended (and possibly applied). Valid values are `cpu`, `memory`, or both. If not specified, both resource types are controlled by VPA.

My opinion is this: extend the API to include a `podPolicies` stanza, which has a `controlledResources` field that defaults to nothing. If a user wants to opt in to pod-level resources, they need to set `spec.resourcePolicy.podPolicies.controlledResources` to both `memory` and `cpu`. The awkward part here is that container resources are enabled by default and pod resources are disabled by default. We could chat to someone in api-machinery about how we can safely roll this out with both pod and container being set, without breaking backwards compatibility.

Actually, this makes sense! So you are suggesting that the pod-level and container-level resources present in the pod spec would still be relevant, but only because of the request-to-limit ratio. When a user wants VPA to manage pod-level resources, they need to specify this in the new `podPolicies.controlledResources` stanza (which defaults to nothing). Is my understanding correct?
Co-authored-by: Adrian Moisey <adrian@changeover.za.net>
@adrianmoisey and @omerap12, I'm requesting a new review for this AEP. I did a refactor of this AEP based on the ongoing code-implementation PR, which is 95% ready. I removed a lot from the documentation, so it is not as dense as it was before; furthermore, I reworked the whole mechanism as well. Thanks in advance!
What type of PR is this?
/kind documentation
/kind feature
/area vertical-pod-autoscaler
What this PR does / why we need it:
Autoscaling Enhancement Proposal (AEP) for pod-level resources support in VPA.
Related ticket from which this AEP originated: Issue
More details about pod-level resources can be found here:
I'd love to hear your thoughts on this feature.