fix global downstream max connections behaviour#58666

Merged
istio-testing merged 2 commits intoistio:masterfrom
ramaraochavali:fix/global_connections
Jan 16, 2026

Conversation

@ramaraochavali
Contributor

@ramaraochavali ramaraochavali commented Jan 2, 2026

Fixes #58594

This PR introduces a new proxy metadata field, ISTIO_META_GLOBAL_DOWNSTREAM_MAX_CONNECTIONS, that is used to specify the resource monitor value. If the overload.global_downstream_max_connections runtime flag is set, it is used as the global downstream connections resource monitor value; this is supported for backward-compatibility reasons, but will still result in Envoy deprecation warnings. If both are specified, ISTIO_META_GLOBAL_DOWNSTREAM_MAX_CONNECTIONS is used for the resource monitor value. Envoy likewise gives preference to the resource monitor value when both the resource monitor and the runtime flag are specified.
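Assuming the usual Istio mechanism for injecting proxy metadata (the proxyMetadata map under meshConfig.defaultConfig, or the proxy.istio.io/config pod annotation), the new field could be set mesh-wide roughly like this. This is a hedged sketch, not taken from the PR itself; only the metadata key name comes from the PR description.

```yaml
# Sketch: setting the new proxy metadata mesh-wide. The surrounding structure
# is the standard Istio mesh config shape; the value "50000" is illustrative.
meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_META_GLOBAL_DOWNSTREAM_MAX_CONNECTIONS: "50000"
```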

  • Ambient
  • Configuration Infrastructure
  • Docs
  • Dual Stack
  • Installation
  • Networking
  • Performance and Scalability
  • Extensions and Telemetry
  • Security
  • Test and Release
  • User Experience
  • Developer Infrastructure
  • Upgrade
  • Multi Cluster
  • Virtual Machine
  • Control Plane Revisions

Signed-off-by: Rama Chavali <rama.rao@salesforce.com>
@ramaraochavali ramaraochavali requested a review from a team as a code owner January 2, 2026 08:45
@istio-testing istio-testing added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Jan 2, 2026
Signed-off-by: Rama Chavali <rama.rao@salesforce.com>
Comment on lines 748 to +754
"overload_manager": {
"resource_monitors": [
{
"name": "envoy.resource_monitors.global_downstream_max_connections",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.resource_monitors.downstream_connections.v3.DownstreamConnectionsConfig",
"max_active_downstream_connections": 2147483647
"max_active_downstream_connections": {{.global_downstream_max_connections}}
Contributor

@frittentheke frittentheke Jan 5, 2026


The connection monitor is still always configured, even with the default value. Maybe Istio should not add this monitor at all if there is no limit set? Setting MAXINT is pretty much the same as having no limit at all.

In case one were to explicitly configure the overload manager via a custom bootstrap config (see #28302 -> #56149), there is still a collision. Currently max_active_downstream_connections is the only parameter (https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/resource_monitors/downstream_connections/v3/downstream_connections.proto#envoy-v3-api-msg-extensions-resource-monitors-downstream-connections-v3-downstreamconnectionsconfig), so for now this only means a split overload config (the downstream_connections monitor set here and the rest via custom bootstrap config): not pretty, but still functional.

But in case new parameters or different connection resource monitors were added to Envoy, there is no way to completely remove this static block here and configure the overload manager explicitly.

And having native exposure of the overload manager settings still seems to be in high demand, see #14366.

Contributor Author


> Maybe Istio should not add this monitor at all if there is "no limit" set? Setting MAXINT is pretty much just like having no limit at all.

We can not remove it: the reason we added this limit was to get away from the Envoy warning. So this PR tries to match the existing behaviour.

> But in case new parameters or different connection resource monitors were added to Envoy, there is no way to completely remove this static block and configure the overload manager explicitly.

> And having native exposure of the overload manager settings still seems to be in high demand, see #14366.

Yes, we need a proper API. I tried to fix the current issue, which is kind of a regression, without introducing an API change. Once this is merged, I will work on a proper API.

Contributor


Thanks @ramaraochavali! You seem to be quite on top of things. Thanks for your time and effort on this one!

Contributor


istio/api#3627 may be an interesting place to align efforts on a proper API for overload manager

Contributor Author


Yes, absolutely. That was my thought as well.

Contributor

@frittentheke frittentheke left a comment


Thanks a lot @ramaraochavali for tackling this issue!

@frittentheke
Contributor

@ramaraochavali I am looking at the release schedule (#58583) for 1.29 and am wondering if this fix here can still make it into 1.29 before the branch is cut?

We are really looking forward to being able to use the overload manager in this regard (being able to actively react to approaching or reaching the connection limit). Is there any way to tag this as a 1.29 candidate?

@ramaraochavali
Contributor Author

@istio/wg-networking-maintainers Can you PTAL?

@ramaraochavali
Contributor Author

@istio/wg-networking-maintainers gentle ping. Would like to see if we can get this into 1.29.

@ramaraochavali
Contributor Author

@istio/wg-networking-maintainers can you PTAL when you get a chance? Would like to get it into 1.29.

globalDownstreamMaxConnections := math.MaxInt32
// If proxy metadata is set, use it to set the global downstream max connections.
// If not set, use the default value of max connections.
// TODO: Consider moving this to proxy config A
Contributor


Isn't this already in proxyconfig?

@ramaraochavali
Contributor Author

/test integ-cni

@istio-testing istio-testing merged commit 3dd7ed8 into istio:master Jan 16, 2026
33 checks passed
@ramaraochavali ramaraochavali deleted the fix/global_connections branch January 16, 2026 05:44
@keithmattix keithmattix added the cherrypick/release-1.29 Set this label on a PR to auto-merge it to the release-1.29 branch label Jan 26, 2026
@istio-testing
Collaborator

In response to a cherrypick label: new pull request created: #58900
