Add spire-agent workload API rate limiting by pod UID with OS UID fallback #6724
terahertz5k wants to merge 2 commits into spiffe:main
Conversation
Add spire-agent workload API rate limiting by pod UID with OS UID fallback

Signed-off-by: Kevin Lui <kevin.lui@thetradedesk.com>
Hi @terahertz5k, thanks for opening a PR for this! It's something missing from the agent that we'd definitely want improved. We were wondering if you could give some details about the particular scenario you ran into that this PR helped with.
pkg/agent/endpoints/ratelimit.go
Outdated
```go
// perCallerRateLimiter maintains per-caller rate limiters using a two-generation GC pattern.
// The key is a string that may represent a pod UID or OS UID (with a prefix to avoid collisions).
type perCallerRateLimiter struct {
	limit int

	mtx sync.RWMutex

	// previous holds all the limiters that were current at the GC.
	previous map[string]*rate.Limiter

	// current holds all the limiters that have been created or moved
	// from the previous limiters since the last GC.
	current map[string]*rate.Limiter

	// lastGC is the time of the last GC.
	lastGC time.Time
}
```
There's a lot in common between this structure and the perIPLimiter in the server. Is it possible to factor out some of the common code into pkg/common/ratelimit so that the agent and the server can share it?
Good call.
I can extract a shared PerKeyLimiter into pkg/common/ratelimit with a Limiter interface that covers both AllowN and WaitN. The two-generation GC map logic moves there, and both the agent's perCallerRateLimiter and server's perIPLimiter become thin wrappers that just handle key extraction.
Today, the agent stores *rate.Limiter directly while the server stores its own rawRateLimiter interface (so tests can inject fakes). One shared Limiter interface replaces rawRateLimiter and works for both cases: the agent calls AllowN on it, the server calls WaitN, and the server's test fakes just need a no-op AllowN stub added.
```go
// OS UID prefixed with "uid:".
func (m workloadRateLimitMiddleware) resolveRateLimitKey(caller peertracker.CallerInfo) string {
	if m.resolver != nil {
		if podUID := m.resolver.GetPodUID(caller.PID); podUID != "" {
```
This operation can be expensive and will add some RPC time even in cases where SPIRE is not running on Kubernetes but has rate limits configured. We would need to find a way to avoid doing unnecessary work on nodes that don't need it.
It would be good to understand the scenario you ran into that this change helped with. We can then see how to change this to handle that without affecting non-k8s users.
Agreed, the procfs reads on every RPC are unnecessary on non-k8s nodes.
I'm thinking I can add two things:
- Check KUBERNETES_SERVICE_HOST env var at resolver construction time. If unset, return a nil resolver so there's zero overhead on non-k8s nodes.
- On k8s nodes, we could wrap the resolver with a per-PID cache (short TTL ~10s) so we only hit procfs once per PID per interval, instead of on every RPC. Not sure if this is needed or not though.
I replied separately, but we were seeing agent memory increase due to badly behaving applications hammering the socket for certain operations. This would cause k8s to kill the agent pod due to OOM, affecting every workload on that k8s node.
Hi @sorindumitru, we were seeing badly behaving applications hammer the socket and drive agent memory usage high enough for k8s to kill the spire-agent pod due to OOM. We will be using this for X.509 and JWT fetch. I'll have a look at your other two comments and see what I can do there.
How were these applications misbehaving?
We have an application that spins up multiple short-lived pods simultaneously, with processes forking within containers and creating a new connection to the agent socket on every retry rather than reusing existing ones. We worked with the team to fix their retry logic and raised agent memory limits, which stabilized things. We didn't know it was this application at first, but our graphs showed something making lots of JWT fetch requests at certain times of day, correlated with the number of agent pod restarts. That said, we want to add this rate limiting as a safety net. It's valid for applications to have multiple identities and make fetch requests for each, but a single caller shouldn't be able to overwhelm the agent to the point of OOM. We'd rather shed excess load gracefully than let one misbehaving workload take down the agent for everyone on the node.
…ution

Extract the two-generation GC per-key rate limiter into pkg/common/ratelimit so both the agent and server share the same implementation. Add Kubernetes auto-detection to skip procfs reads on non-k8s nodes, and cache pod UID lookups per PID to reduce overhead.

Signed-off-by: Kevin Lui <kevin.lui@thetradedesk.com>
Thanks @terahertz5k for the answers above. One issue we have with the current approach is that it's very specific to k8s (and a bit to unix). We're wondering if applying the rate limit per workload SPIFFE ID might be a better approach. That has the benefit of being useful in non-k8s cases, but it also comes with some disadvantages:
Do you think you might be able to apply the rate limit after attestation and see if it still works for the cases you ran into?
Hello, I would like to work on this issue. |
@sorindumitru, thanks for the feedback. The issue we hit was specifically that FetchJWTSVID is unary and every call triggers a fresh attestation. Moving the rate limit to after attestation would most likely undermine the primary goal of this change, which is to protect the agent from memory exhaustion; the concurrent attestation work is what drives the OOM, I think. The pod UID resolution is the only k8s-specific piece, and it's only active on Linux nodes running k8s. On all other platforms, it falls back to OS UID. It's really an enhancement for cases (more likely in k8s) where container images may share the same OS UID. To validate this, I'll move the rate limit after attestation and rerun my load tests. I'll report back with the results.
Based on my tests with a 128MB memory limit and a 10 RPS rate limit, pre-attestation rate limiting survived 5 minutes of 4 pods with 100 forked workers each hammering FetchJWTSVID in a tight loop, while post-attestation rate limiting was OOMKilled instantly. In further tests at 256MB with 4x300 workers, pre-attestation peaked at 168MB while post-attestation was again OOMKilled. This shows the delta between pre- and post-attestation rate limiting grows as load increases. Granted, this load is pretty extreme for a single agent. I also increased memory to 512MB just to see the actual memory peak at 4x300 workers with the 10 RPS limit.
Pull Request check list
Affected functionality
SPIRE agent Workload API — adds optional per-caller rate limiting for all 5 Workload API methods.
Description of change

Adds an optional ratelimit configuration block to the SPIRE agent that enforces per-caller token-bucket rate limits on Workload API methods (FetchX509SVID, FetchX509Bundles, FetchJWTSVID, FetchJWTBundles, ValidateJWTSVID). When a caller exhausts its burst, the agent returns ResourceExhausted immediately. All limits default to 0 (disabled), so existing deployments are unaffected.

Key design decisions:

- Per-caller keying by pod UID prevents uid=0 pods from sharing one bucket. Falls back to OS UID on non-Kubernetes or non-Linux systems.
- Fail-fast (AllowN): unlike the server-side perIPLimiter, which uses blocking WaitN, the Workload API fails fast with ResourceExhausted to avoid silently queuing goroutines during reconnection storms.
- Two-generation GC map, as in perIPLimiter: O(active-callers) memory, no background goroutine, automatic eviction after ~2 minutes of inactivity.
- Telemetry: a workload_api.rate_limit_exceeded counter with method and key_type labels. Per-rejection logging was intentionally omitted to avoid logrus.Entry allocation pressure under heavy load.
- Implemented as a middleware.Middleware, inserted last in the chain (after verifySecurityHeader). No changes to existing handler code.

Configuration example:
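The configuration example itself did not survive this export; as a purely illustrative sketch of what such a block might look like (field names and values here are assumptions, not the PR's actual syntax):

```hcl
agent {
    # ...existing agent configuration...

    # Hypothetical shape only. Limits are per-caller requests/second;
    # 0 (the default) disables a limit.
    ratelimit {
        fetch_x509_svid = 10
        fetch_jwt_svid  = 10
    }
}
```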
Testing:
Which issue this PR fixes
fixes #2010