Add spire-agent workload API rate limiting by pod UID with OS UID fal… #6724

Open
terahertz5k wants to merge 2 commits into spiffe:main from terahertz5k:spire-agent-rate-limiting

Conversation

@terahertz5k

Pull Request check list

  • Commit conforms to CONTRIBUTING.md?
  • Proper tests/regressions included?
  • Documentation updated?

Affected functionality
SPIRE agent Workload API — adds optional per-caller rate limiting for all 5 Workload API methods.

Description of change
Adds an optional ratelimit configuration block to the SPIRE agent that enforces per-caller token-bucket rate limits on Workload API methods (FetchX509SVID, FetchX509Bundles, FetchJWTSVID, FetchJWTBundles, ValidateJWTSVID). When a caller exhausts its burst, the agent returns ResourceExhausted immediately. All limits default to 0 (disabled), so existing deployments are unaffected.

Key design decisions:

  • Pod UID key with OS UID fallback — On Linux with Kubernetes, the rate limit key is the pod UID from the cgroup path, preventing all uid=0 pods from sharing one bucket. Falls back to OS UID on non-Kubernetes or non-Linux systems.
  • Non-blocking rejection (AllowN) — Unlike the server-side perIPLimiter which uses blocking WaitN, the Workload API fails fast with ResourceExhausted to avoid silently queuing goroutines during reconnection storms.
  • Two-generation GC — Same pattern as perIPLimiter: O(active-callers) memory, no background goroutine, automatic eviction after ~2 minutes of inactivity.
  • Metric-only observability — Rejections emit a workload_api.rate_limit_exceeded counter with method and key_type labels. Per-rejection logging was intentionally omitted to avoid logrus.Entry allocation pressure under heavy load.
  • Middleware integration — Implemented as middleware.Middleware, inserted last in the chain (after verifySecurityHeader). No changes to existing handler code.

Configuration example:

agent {
  ratelimit {
    fetch_jwt_svid    = 20
    fetch_x509_svid   = 20
  }
}

Testing:

  • Unit tests: token bucket mechanics, GC, pod UID resolution, OS UID fallback, metrics labels, concurrency with race detector
  • Integration test through full gRPC stack
  • Load tested in a local Kubernetes cluster with 800 concurrent workers — agent stays stable with no OOM

Which issue this PR fixes
fixes #2010

…lback

Signed-off-by: Kevin Lui <kevin.lui@thetradedesk.com>
@sorindumitru
Collaborator

Hi @terahertz5k, thanks for opening a PR for this! It's something missing from the agent that we'd definitely want improved.

We were wondering if you could give some details about the particular scenario you ran into where this PR helped you.

Comment on lines +39 to +55
// perCallerRateLimiter maintains per-caller rate limiters using a two-generation GC pattern.
// The key is a string that may represent a pod UID or OS UID (with a prefix to avoid collisions).
type perCallerRateLimiter struct {
	limit int

	mtx sync.RWMutex

	// previous holds all the limiters that were current at the GC.
	previous map[string]*rate.Limiter

	// current holds all the limiters that have been created or moved
	// from the previous limiters since the last GC.
	current map[string]*rate.Limiter

	// lastGC is the time of the last GC.
	lastGC time.Time
}
Collaborator


There's a lot in common between this structure and the perIPLimiter in the server. Is it possible to factor out some of the common code into pkg/common/ratelimit so that the agent and the server can share the code?

Author


Good call.

I can extract a shared PerKeyLimiter into pkg/common/ratelimit with a Limiter interface that covers both AllowN and WaitN. The two-generation GC map logic moves there, and both the agent's perCallerRateLimiter and server's perIPLimiter become thin wrappers that just handle key extraction.

Today, the agent stores *rate.Limiter directly while the server stores its own rawRateLimiter interface (so tests can inject fakes). One shared Limiter interface replaces rawRateLimiter and works for both cases: the agent calls AllowN on it, the server calls WaitN, and the server's test fakes just need a no-op AllowN stub added.

// OS UID prefixed with "uid:".
func (m workloadRateLimitMiddleware) resolveRateLimitKey(caller peertracker.CallerInfo) string {
	if m.resolver != nil {
		if podUID := m.resolver.GetPodUID(caller.PID); podUID != "" {
Collaborator


This operation can be expensive and will add RPC latency even in cases where SPIRE is not running on Kubernetes but has rate limits configured. We would need to find a way to avoid doing unnecessary work on nodes that don't need it.

It would be good to understand the scenario you ran into that this change helped with. Then we can see how to change this to handle that case while also not affecting non-k8s users.

Author


Agreed, the procfs reads on every RPC are unnecessary on non-k8s nodes.

I'm thinking I can add two things:

  1. Check KUBERNETES_SERVICE_HOST env var at resolver construction time. If unset, return a nil resolver so there's zero overhead on non-k8s nodes.
  2. On k8s nodes, we could wrap the resolver with a per-PID cache (short TTL ~10s) so we only hit procfs once per PID per interval, instead of on every RPC. Not sure if this is needed or not though.

I replied separately, but we were seeing agent memory increase due to badly behaving applications hammering the socket for certain operations. This would cause k8s to kill the agent due to OOM for the entire k8s node.

@terahertz5k
Author

Hi @sorindumitru,

We were seeing badly behaving applications hammer the socket and cause memory usage on the agent to increase high enough for k8s to kill the spire-agent pod due to OOM.

We will be using this for x509 and jwt fetch.

I'll have a look at your other two comments and see what I can do there.

@sorindumitru
Collaborator

We were seeing badly behaving applications hammer the socket and cause memory usage on the agent to increase high enough for k8s to kill the spire-agent pod due to OOM.

How were these applications misbehaving?

@terahertz5k
Author

We were seeing badly behaving applications hammer the socket and cause memory usage on the agent to increase high enough for k8s to kill the spire-agent pod due to OOM.

How were these applications misbehaving?

We have an application that spins up multiple short-lived pods simultaneously, with processes forking within containers and creating a new connection to the agent socket on every retry rather than reusing existing ones. We worked with the team to fix their retry logic and raised agent memory limits, which stabilized things.

We didn't know it was this application at first, but our graphs showed something making lots of JWT fetch requests at certain times of the day, correlated with spikes in agent pod restarts.

That said, we want to add this rate limiting as a safety net. It's valid for applications to have multiple identities and make fetch requests for each, but a single caller shouldn't be able to overwhelm the agent to the point of OOM. We'd rather shed excess load gracefully than let one misbehaving workload take down the agent for everyone on the node.

…ution

Extract the two-generation GC per-key rate limiter into
pkg/common/ratelimit so both the agent and server share the same
implementation. Add Kubernetes auto-detection to skip procfs reads on
non-k8s nodes and cache pod UID lookups per PID to reduce overhead.

Signed-off-by: Kevin Lui <kevin.lui@thetradedesk.com>
@sorindumitru
Collaborator

Thanks @terahertz5k for the answers above. One issue we have with the current approach is that it's very specific to k8s (and a bit to unix). We're wondering if applying the ratelimit per workload SPIFFE ID might be a better approach. This has the benefit of being useful in non-k8s cases but also comes with some disadvantages:

  • The agent spends a bit more time attesting the workloads, so you might not get as big a benefit out of it.
  • The ratelimit could potentially apply to multiple workload replicas, which may be a good or a bad thing, but can also be tuned through the limits.

Do you think you might be able to apply the ratelimit after the attestation and see if it still works ok for the cases you ran into?

@Pittu-Sharma

Hello, I would like to work on this issue.

@terahertz5k
Author

@sorindumitru , thanks for the feedback.

The issue we hit was specifically that FetchJWTSVID is unary and every call triggers a fresh attestation. Moving the rate limit to after attestation would most likely undermine the primary goal of this change, which is to protect the agent from memory exhaustion. The concurrent attestation work is what drives the OOM, I think.

The pod UID resolution is the only k8s specific piece and it's only active on Linux nodes running k8s. On all other platforms, it falls back to OS UID. It's really an enhancement for cases (that are more likely in K8s) where container images may use the same OS UID.

To validate this, I'll move the rate limit after attestation and run my load tests. I'll report back with the results.

@terahertz5k
Author

terahertz5k commented Mar 24, 2026

Based on my tests at 128MB with a 10 RPS rate limit, the pre-attestation code survived 5 minutes of 4 pods with 100 forked workers each hammering FetchJWTSVID in a tight loop, while the post-attestation code was OOMKilled almost instantly.

In further tests at 256MB with 4x300 workers, the pre-attestation code peaked at 168 MB, while the post-attestation code was again OOMKilled. This shows the delta between pre- and post-attestation rate limiting grows as load increases. Granted, this load is pretty extreme for a single agent.

I increased memory to 512MB just to see the actual memory peak at 4x300 workers with a 10 RPS rate limit:

  • Pre-attestation: 138.4 MiB
  • Post-attestation: 307.5 MiB
  • No rate limiting: 341.0 MiB



Successfully merging this pull request may close these issues.

Hardening the Agent - Rate Limiting UDS
