fix global downstream max connections behaviour #58666
istio-testing merged 2 commits into istio:master
Conversation
Signed-off-by: Rama Chavali <rama.rao@salesforce.com>
  "overload_manager": {
    "resource_monitors": [
      {
        "name": "envoy.resource_monitors.global_downstream_max_connections",
        "typed_config": {
          "@type": "type.googleapis.com/envoy.extensions.resource_monitors.downstream_connections.v3.DownstreamConnectionsConfig",
-         "max_active_downstream_connections": 2147483647
+         "max_active_downstream_connections": {{.global_downstream_max_connections}}
The connection monitor is still always configured, even with the default value (MAXINT).
Maybe Istio should not add this monitor at all if there is "no limit" set? Setting MAXINT is pretty much just like having no limit at all.
In case one were to explicitly configure the overload manager via custom bootstrap config (see #28302 -> #56149), there still is a collision. Currently max_active_downstream_connections is the only parameter (https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/resource_monitors/downstream_connections/v3/downstream_connections.proto#envoy-v3-api-msg-extensions-resource-monitors-downstream-connections-v3-downstreamconnectionsconfig), so for now this only means a split overload config (the downstream_connections monitor set here and the rest via custom bootstrap config; not pretty, but still functional).
But in case new parameters or different resource monitors for connections were added to Envoy, there would be no way to completely remove this static block here and configure the overload manager explicitly.
And native exposure of the overload manager settings still seems to be in high demand, see #14366.
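To illustrate the collision being discussed, here is a rough sketch of what such a custom bootstrap overload_manager section might look like (hedged example: surrounding bootstrap fields omitted, and the 50000 limit is an arbitrary illustrative value, not a recommendation):

```json
{
  "overload_manager": {
    "resource_monitors": [
      {
        "name": "envoy.resource_monitors.global_downstream_max_connections",
        "typed_config": {
          "@type": "type.googleapis.com/envoy.extensions.resource_monitors.downstream_connections.v3.DownstreamConnectionsConfig",
          "max_active_downstream_connections": 50000
        }
      }
    ]
  }
}
```

Because Istio already emits a static block with this monitor, a user-supplied block like the one above collides with it rather than replacing it.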
> even with the default value (MAXINT). Maybe Istio should not add this monitor at all if there is "no limit" set? Setting MAXINT is pretty much just like having no limit at all.
We can not remove it, because the reason we added this limit was to get away from the Envoy warning. So this PR tries to match the existing behaviour.
> But in case new parameters or different resource monitors for connections were added to Envoy, there is no way to completely remove this static block here and configure the overload manager explicitly. And native exposure of the overload manager settings still seems to be in high demand, see #14366.
Yes. We need a proper API. I tried to fix the current issue, as it is kind of a regression, without introducing an API change. Once this is merged, I will work on a proper API.
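For context on the backwards-compatibility path mentioned here: the older mechanism sets the limit through Envoy's runtime rather than a resource monitor, which is what triggers the deprecation warning. A minimal sketch of that legacy bootstrap fragment (the 50000 value is illustrative only):

```json
{
  "layered_runtime": {
    "layers": [
      {
        "name": "global_downstream_max_connections",
        "static_layer": {
          "overload.global_downstream_max_connections": 50000
        }
      }
    ]
  }
}
```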
Thanks @ramaraochavali! You seem to be quite on top of things. Thanks for your time and effort on this one!
istio/api#3627 may be an interesting place to align efforts on a proper API for overload manager
Yes. Absolutely. That was my thought as well
frittentheke left a comment
Thanks a lot @ramaraochavali for tackling this issue!
@ramaraochavali I am looking at the release schedule (#58583) for 1.29 and am wondering if this fix can still make it into 1.29 before the branch is cut? We are really looking forward to being able to use the overload manager in this regard (being able to actively react to approaching or reaching the connection limit). Is there any way to tag this as a 1.29 candidate?

@istio/wg-networking-maintainers Can you PTAL?

@istio/wg-networking-maintainers gentle ping. Would like to see if we can get this into 1.29?

@istio/wg-networking-maintainers can you PTAL when you get a chance? Would like to get it into 1.29.
  globalDownstreamMaxConnections := math.MaxInt32
  // If proxy metadata is set, use it to set the global downstream max connections.
  // If not set, use the default value of max connections.
  // TODO: Consider moving this to proxy config A
Isn't this already in proxyconfig?
/test integ-cni

In response to a cherrypick label: new pull request created: #58900
Fixes #58594
This PR introduces a new proxy metadata field, ISTIO_META_GLOBAL_DOWNSTREAM_MAX_CONNECTIONS, that will be used to specify the resource monitor value. If the overload.global_downstream_max_connections runtime flag is set, it will be set as the global downstream connections resource monitor value. This is supported for backwards-compatibility reasons; it will still result in Envoy deprecation warnings. If both are specified, ISTIO_META_GLOBAL_DOWNSTREAM_MAX_CONNECTIONS will be used for the resource monitor value. Envoy also gives preference to the resource monitor value when both the resource monitor and the runtime flag are specified.
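As a usage sketch, assuming the standard proxyMetadata mechanism applies to this new field as it does to other ISTIO_META_ variables (the 50000 value is illustrative), the limit could be set mesh-wide via:

```yaml
meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_META_GLOBAL_DOWNSTREAM_MAX_CONNECTIONS: "50000"
```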