fix: chunk fetchStatusByIds and updateAllUsersSince requests (MM-68226)#9669
ewwollesen wants to merge 3 commits into main
Conversation
… 413 errors

On servers with large teams (thousands of members), the mobile client sends all user IDs in a single POST body for status and profile-by-ID requests. This exceeds the server's MaximumPayloadSizeBytes limit (default 300KB, often configured lower), causing "Request body too large" errors on every app launch and websocket reconnect. Chunk these requests into batches (200 IDs for statuses, 100 for profiles), matching the web client's batching strategy.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
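As a back-of-envelope check on the payload claim, here is a minimal sketch. The 26-character ID length is the usual Mattermost ID format, and the per-entry byte cost is my own estimate, not a figure from the PR:

```typescript
// Rough estimate of the serialized JSON size for a POST body of N user IDs.
// Mattermost user IDs are 26-character strings; each array entry costs about
// 26 chars + 2 quotes + 1 comma. These numbers are illustrative assumptions.
const BYTES_PER_ID = 26 + 3;

function payloadBytes(n: number): number {
    return 2 + n * BYTES_PER_ID; // array brackets + entries
}

// A team of ~10,000 members approaches the default 300KB MaximumPayloadSizeBytes,
// while a 200-ID batch stays in the single-digit kilobytes.
```

Under these assumptions, 10,000 IDs come to roughly 290KB in a single request, while a 200-ID batch is under 6KB, which is why chunking keeps each request comfortably below the limit.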
@ewwollesen: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it.
📝 Walkthrough

Two remote action functions were updated to split large ID lists into configured chunk sizes and fetch each chunk concurrently using `Promise.allSettled`.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant RemoteAction as RemoteAction (user.ts)
    participant Config as Config (general.ts)
    participant Client as Client API
    participant Store as Persistence

    RemoteAction->>Config: read MAX_IDS_PER_* constants
    RemoteAction->>RemoteAction: split userIds into batches
    RemoteAction->>Client: send concurrent requests for each batch (allSettled)
    Client-->>RemoteAction: return settled results (fulfilled/rejected)
    RemoteAction->>RemoteAction: filter fulfilled, flatten results
    RemoteAction->>Store: continue with existing batching/persistence
```
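The diagrammed flow can be sketched as follows. The constant value comes from the PR description; the `StatusClient` shape and the local `chunkIds` helper (standing in for lodash's `chunk`) are illustrative assumptions, not the exact PR code:

```typescript
// Assumed batch size from the PR description (app/constants/general.ts).
const MAX_IDS_PER_STATUS_REQUEST = 200;

type UserStatus = {user_id: string; status: string};

// Hypothetical minimal client shape standing in for the real Client API.
type StatusClient = {
    getStatusesByIds: (ids: string[]) => Promise<UserStatus[]>;
};

// Dependency-free stand-in for lodash's chunk.
function chunkIds<T>(arr: T[], size: number): T[][] {
    const out: T[][] = [];
    for (let i = 0; i < arr.length; i += size) {
        out.push(arr.slice(i, i + size));
    }
    return out;
}

async function fetchStatusesChunked(client: StatusClient, userIds: string[]): Promise<UserStatus[]> {
    // Split the full ID list into server-friendly batches.
    const batches = chunkIds(userIds, MAX_IDS_PER_STATUS_REQUEST);

    // Fire one request per batch concurrently; allSettled keeps partial
    // results even when an individual batch rejects.
    const settled = await Promise.allSettled(
        batches.map((batchIds) => client.getStatusesByIds(batchIds)),
    );

    // Keep fulfilled batches and flatten them into a single result list.
    return settled
        .filter((r): r is PromiseFulfilledResult<UserStatus[]> => r.status === 'fulfilled')
        .flatMap((r) => r.value);
}
```

Because `Promise.allSettled` preserves input order, the flattened output keeps the original ID ordering across batches.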
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Coverage Comparison Report

Generated on April 08, 2026 at 18:41:53 UTC
Rename `ids` to `batchIds` in chunk map callbacks to avoid shadowing the module-level `ids` variable used by `debouncedFetchStatusesByIds`. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… all results

Per review feedback: if one batch request fails, the remaining successful batches are still used instead of throwing away everything.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
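A minimal sketch of the behavior this commit describes; the generic helper name is mine, not the PR's:

```typescript
// Settle every batch request; rejected batches are dropped rather than
// failing the whole operation, so partial results still come through.
async function collectFulfilled<T>(requests: Array<Promise<T[]>>): Promise<T[]> {
    const settled = await Promise.allSettled(requests);
    return settled
        .filter((r): r is PromiseFulfilledResult<T[]> => r.status === 'fulfilled')
        .flatMap((r) => r.value);
}
```

With three batches where the second rejects, the helper still returns the items from the first and third batches instead of propagating the error.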
Problem
On large teams, `fetchStatusByIds` and `updateAllUsersSince` send all user IDs in a single `POST` body. This can exceed the server's `MaximumPayloadSizeBytes`, causing "Request body too large" errors on every app launch and WebSocket reconnect and breaking team switching entirely.

Solution
Fix (2 files, +8 / -2 lines)
- `app/constants/general.ts`: Added `MAX_IDS_PER_STATUS_REQUEST: 200` and `MAX_IDS_PER_PROFILES_REQUEST: 100` (matching web client batch sizes)
- `app/actions/remote/user.ts`:
  - `fetchStatusByIds` (lines 366-368): chunks userIds into batches of 200 before calling `client.getStatusesByIds()`, runs the requests in parallel via `Promise.all`, and flattens the results
  - `updateAllUsersSince` (lines 658-660): chunks userIds into batches of 100 before calling `client.getProfilesByIds()`, using the same parallel-plus-flatten pattern

The `chunk` utility from lodash was already imported. Existing tests pass unchanged since they use small ID arrays (a single chunk).
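For reference, the two constants described above would look something like this. The values come from the PR description; in the actual change they are defined in `app/constants/general.ts`:

```typescript
// Batch-size caps for chunked ID requests, chosen to match the web client.
const MAX_IDS_PER_STATUS_REQUEST = 200;   // IDs per getStatusesByIds call
const MAX_IDS_PER_PROFILES_REQUEST = 100; // IDs per getProfilesByIds call
```

Keeping the caps in one constants file means both remote actions, and any future chunked endpoint, share a single place to tune batch sizes.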