perf(hooks): add subscription-based event filtering to reduce hot-path dispatch overhead#1
coleleavitt wants to merge 2 commits into dev
Conversation
📝 Walkthrough

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
All contributors have signed the CLA. Thank you! ✅
Review Summary by Qodo

Performance optimization, Gemini model support, and comprehensive code refactoring.
Description

**Performance optimization and comprehensive refactoring:**
- Implement subscription-based hook event filtering in event.ts to reduce hot-path dispatch overhead during LLM streaming (~2,100/sec → ~200/sec)
- Add HOOK_SUBSCRIPTIONS mapping declaring which event types each of 21 hooks cares about
- Introduce AWAITED_HOOKS set (3 critical hooks) that must be awaited; 18 others fire-and-forget with error handling
- Refactor dispatchToHooks to conditionally await or fire-and-forget based on hook criticality

**Model support and agent enhancements:**
- Add Gemini model routing to Atlas and Hephaestus agents with optimized system prompts
- Create Gemini-optimized prompts for Prometheus, Atlas, and the ultrawork-mode keyword detector
- Update Gemini model references from gemini-3-pro to gemini-3.1-pro across test suites and migrations
- Export the Gemini Sisyphus Junior prompt builder

**Code organization and maintainability:**
- Extract utility functions from the background-agent manager into dedicated modules (error-classifier, fallback-retry-handler, process-cleanup, session-idle-event-handler, task-poller, duration-formatter, compaction-aware-message-resolver)
- Modularize edit operations with extracted utility functions (edit-deduplication, edit-ordering, edit-operation-primitives)
- Refactor the edit operations API: a unified op field replaces type; operation names standardized (insert_before → prepend, insert_after → append, set_line → replace)

**Testing improvements:**
- Add a 25-case comprehensive edit_file stress test suite with edge cases and whitespace handling
- Add a 21-operation edit_file test suite with validation and colored output
- Add a multi-model benchmark test runner for cross-model validation
- Refactor and simplify test suites: think-mode hooks (452 → 155 lines), background update checker, directory injectors, edit operations
- Add a new error-classifier utility test suite with comprehensive edge-case coverage
- Update all test cases to reflect the new API schemas and model naming conventions

**Configuration and tooling:**
- Add an optional threads property to GlobOptions for ripgrep thread-count limiting
- Add a pendingNotifications map to queue notifications when the parent session is aborted
- Add an isModelFallbackEnabled flag to guard model fallback logic
- Add a deleteSessionTools cleanup call in the session deletion handler

Diagram

```mermaid
flowchart LR
    A["Event Dispatcher<br/>2100 ops/sec"] -->|"Subscription Filter"| B["Hook Subscriptions<br/>19 skipped"]
    B -->|"Awaited"| C["3 Critical Hooks<br/>claudeCodeHooks<br/>stopContinuationGuard<br/>writeExistingFileGuard"]
    B -->|"Fire-and-Forget"| D["18 Other Hooks<br/>with error handling"]
    C -->|"Result"| E["Optimized Dispatch<br/>200 ops/sec"]
    D -->|"Result"| E
    F["Gemini Models"] -->|"Route"| G["Atlas Agent"]
    F -->|"Route"| H["Prometheus Agent"]
    F -->|"Route"| I["Ultrawork Mode"]
    J["Edit Operations"] -->|"Modularize"| K["Deduplication<br/>Ordering<br/>Primitives"]
    K -->|"Unified API"| L["op field<br/>prepend/append/replace"]
```
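The dispatch path described above can be sketched as follows. The HOOK_SUBSCRIPTIONS and AWAITED_HOOKS names come from this PR, but the handler signature and the rule that hooks without a subscription entry are skipped are assumptions for illustration:

```typescript
// Sketch of subscription-based dispatch (hook names from the PR; handler shape assumed).
type Hook = { handler: (eventType: string, payload: unknown) => Promise<void> };

const HOOK_SUBSCRIPTIONS: Record<string, string[] | "*"> = {
  claudeCodeHooks: "*", // needs every event, including deltas
  sessionNotification: ["session.idle", "session.created"],
};
const AWAITED_HOOKS = new Set(["claudeCodeHooks", "stopContinuationGuard", "writeExistingFileGuard"]);

async function dispatchToHooks(
  hooks: Record<string, Hook>,
  eventType: string,
  payload: unknown,
): Promise<void> {
  for (const [name, hook] of Object.entries(hooks)) {
    const subs = HOOK_SUBSCRIPTIONS[name];
    // Skip hooks that did not subscribe to this event type
    // (in this sketch, hooks without an entry are skipped too).
    if (subs !== "*" && !subs?.includes(eventType)) continue;
    if (AWAITED_HOOKS.has(name)) {
      // Critical hooks block the hot path.
      await hook.handler(eventType, payload);
    } else {
      // Fire-and-forget with error logging so one hook cannot crash dispatch.
      hook.handler(eventType, payload).catch((err) => console.error(name, err));
    }
  }
}
```

During streaming, a `message.part.delta` event then reaches only the `"*"` subscribers, which is where the ~2,100/sec → ~200/sec reduction comes from.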
File Changes

1. benchmarks/test-edge-cases.ts
Code Review by Qodo
1. Given/when/then uses comments
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the performance and robustness of the system by introducing a subscription-based event dispatching mechanism for hooks, which drastically reduces overhead during high-frequency operations. It also addresses critical configuration management issues, improves platform binary compatibility, and expands agent capabilities with optimized support for Gemini models. Additionally, several agent-specific behaviors and user interaction flows have been refined, including better handling of background tasks, Git worktrees, and user notifications.

Highlights
12 issues found across 314 files
Note: This PR contains a large number of files. cubic only reviews up to 75 files per PR, so some files may not have been reviewed.
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/features/background-agent/fallback-retry-handler.ts">
<violation number="1" location="src/features/background-agent/fallback-retry-handler.ts:106">
P3: Dead code: `task.agent` fallback is unreachable since `task.model` was unconditionally assigned a few lines above. The ternary is always true. Consider simplifying to just the template literal.</violation>
</file>
<file name="src/cli/config-manager/write-omo-config.ts">
<violation number="1" location="src/cli/config-manager/write-omo-config.ts:46">
P1: Swapped argument order reverses merge priority: existing on-disk config now silently overrides freshly generated install selections. Previously `newConfig` (user's new choices) took precedence over `existing` (on-disk); now `existing` always wins on conflicting keys. If a user re-runs `install` to change settings, their new selections won't take effect for any key already present in the config file.
If the intent is to preserve user customizations while adding new keys, consider a more explicit strategy (e.g., only merge keys that don't exist yet), and document this behavior. If the intent is for new selections to win, restore the original argument order.</violation>
</file>
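A minimal sketch of why the argument order matters, assuming `deepMergeRecord` is a conventional recursive merge where the later argument wins on conflicts (the real implementation is not shown in this diff):

```typescript
// Hypothetical last-wins deep merge illustrating the priority swap
// (not the repository's actual deepMergeRecord).
function deepMergeRecord(
  base: Record<string, unknown>,
  override: Record<string, unknown>,
): Record<string, unknown> {
  const out: Record<string, unknown> = { ...base };
  for (const [k, v] of Object.entries(override)) {
    const prev = out[k];
    out[k] =
      prev && v && typeof prev === "object" && typeof v === "object" &&
      !Array.isArray(prev) && !Array.isArray(v)
        ? deepMergeRecord(prev as Record<string, unknown>, v as Record<string, unknown>)
        : v; // later argument wins on conflicting keys
  }
  return out;
}

// Illustrative configs: the user re-runs install and picks a new theme.
const existing = { theme: "dark" };   // on-disk config
const newConfig = { theme: "light" }; // freshly generated selections
```

With this shape, `deepMergeRecord(existing, newConfig)` lets the new selections win, while the swapped `deepMergeRecord(newConfig, existing)` silently keeps the on-disk value, which is exactly the behavior change the review flags.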
<file name=".issue-comment-2064.md">
<violation number="1" location=".issue-comment-2064.md:11">
P2: This issue comment contains factually incorrect information. It claims `write-omo-config.ts` line 46 uses `deepMergeRecord(existing, newConfig)`, but the actual source already uses `deepMergeRecord(newConfig, existing)` — the exact order this document recommends as the "fix". The described bug does not exist in the current codebase. This document should either be corrected or removed to avoid misleading developers.</violation>
</file>
<file name="src/cli/run/poll-for-completion.ts">
<violation number="1" location="src/cli/run/poll-for-completion.ts:134">
P1: Missing error handling for API calls in the secondary timeout block. If `session.children()` or `session.todo()` throws, the entire polling loop crashes. Other API calls in this file (e.g., `getMainSessionStatus`) use try/catch — this block should too.</violation>
<violation number="2" location="src/cli/run/poll-for-completion.ts:145">
P2: `hasActiveChildren` counts all children regardless of status, unlike `hasActiveTodos` which filters by status. A session with only idle/completed children will falsely trigger `hasReceivedMeaningfulWork = true`. Consider checking child session statuses (as done in `completion.ts`'s `areAllDescendantsIdle`).</violation>
</file>
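One hedged way to address the missing try/catch is a small wrapper around the API calls; `safeCall` is a hypothetical helper, not part of the existing file:

```typescript
// Sketch: guard a polling-loop API call so a transient failure
// cannot crash the whole loop (helper name is illustrative).
async function safeCall<T>(fn: () => Promise<T>, fallback: T): Promise<T> {
  try {
    return await fn();
  } catch {
    // Swallow and return a neutral value; the next poll tick retries anyway.
    return fallback;
  }
}

// Hypothetical usage at the flagged site:
// const childrenRes = await safeCall(
//   () => ctx.client.session.children({ path: { id: ctx.sessionID } }),
//   [],
// );
```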
<file name="src/features/background-agent/compaction-aware-message-resolver.ts">
<violation number="1" location="src/features/background-agent/compaction-aware-message-resolver.ts:16">
P1: Bug: `hasPartialAgentOrModel` can still match compaction agent messages. When a message has `agent: "compaction"` but valid model fields, `hasModel` is `true` and the OR returns `true`, allowing a compaction message to leak through the fallback pass. The function should reject compaction agents outright before checking partial fields.</violation>
</file>
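A sketch of the suggested fix, rejecting compaction agents before the partial-field checks (`StoredMessage` is reduced here to the fields visible in the diff context):

```typescript
// Reduced message shape for illustration.
type StoredMessage = { agent?: string; model?: { providerID?: string; modelID?: string } };

const isCompactionAgent = (agent: string) => agent === "compaction"; // assumed predicate

function hasPartialAgentOrModel(message: StoredMessage): boolean {
  // Reject compaction messages outright so valid model fields
  // cannot let them leak through the fallback pass.
  if (message.agent && isCompactionAgent(message.agent)) return false;
  const hasAgent = !!message.agent;
  const hasModel = !!message.model?.providerID && !!message.model?.modelID;
  return hasAgent || hasModel;
}
```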
<file name="src/cli/config-manager/antigravity-provider-configuration.ts">
<violation number="1" location="src/cli/config-manager/antigravity-provider-configuration.ts:19">
P2: Stale documentation: `docs/guide/installation.md` still references the old `antigravity-gemini-3-pro` model name in 3 places. These should be updated to `antigravity-gemini-3.1-pro` to match this rename.</violation>
<violation number="2" location="src/cli/config-manager/antigravity-provider-configuration.ts:19">
P2: Display name not updated to match model version bump: key says `3.1` but `name` still says `Gemini 3 Pro`. Should be `"Gemini 3.1 Pro (Antigravity)"`.</violation>
</file>
<file name=".github/workflows/publish-platform.yml">
<violation number="1" location=".github/workflows/publish-platform.yml:203">
P1: Using `if: always() && !cancelled()` on the publish job means it runs even when core platform builds fail. Combined with `continue-on-error: true` on the download step, a failure in any critical platform (e.g., `darwin-arm64`, `linux-x64`) will be silently swallowed — the workflow can succeed while that platform's binary is missing from npm. Consider either: (a) separating baseline builds into a separate job with `continue-on-error` so core builds remain required, or (b) adding a validation step that checks `needs.build.result == 'success'` for non-baseline platforms before publishing.</violation>
</file>
<file name="src/features/background-agent/process-cleanup.ts">
<violation number="1" location="src/features/background-agent/process-cleanup.ts:35">
P1: `cleanupAll()` can be called multiple times (signal→exit, beforeExit→exit, or rapid Ctrl+C), causing `shutdown()` to run repeatedly on each manager. Add a re-entrancy guard to ensure cleanup executes only once.</violation>
</file>
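A sketch of the suggested re-entrancy guard; `makeCleanupAll` and the manager shape are illustrative, not the file's actual API:

```typescript
// Once-guard: signal→exit, beforeExit→exit, or rapid Ctrl+C may all
// invoke cleanup, but shutdown() should run at most once per manager.
function makeCleanupAll(managers: Array<{ shutdown: () => void }>): () => void {
  let ran = false;
  return () => {
    if (ran) return; // re-entrant call: nothing to do
    ran = true;
    for (const m of managers) {
      try {
        m.shutdown();
      } catch {
        // Best-effort: one failing manager must not block the others.
      }
    }
  };
}
```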
<file name="src/features/background-agent/error-classifier.ts">
<violation number="1" location="src/features/background-agent/error-classifier.ts:1">
P2: Duplicate of `src/shared/record-type-guard.ts`. Several modules already import the shared `isRecord`—reuse it here instead of adding another copy.</violation>
</file>
<file name="benchmarks/headless.ts">
<violation number="1" location="benchmarks/headless.ts:32">
P1: Missing closing `}` for the `for await` loop. The `}` on this line closes the `switch`, but there is no subsequent `}` to close the `for await (const part of stream.fullStream)` loop. As a result, `await stream.response`, `messages.push(...)`, the `finishReason` check, and the `break` statement all execute **inside** the for-await loop rather than after it. This causes: (1) `stream.response` blocks on the first part so remaining parts never go through the switch, (2) the `break` exits the for-await loop, not the step loop, so the agent loop keeps running when it should stop.</violation>
</file>
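The intended control flow can be sketched like this, with the stream shape reduced to the two members the review mentions:

```typescript
// Sketch: the for-await loop must close before awaiting the full response.
async function runStep(
  stream: { fullStream: AsyncIterable<{ type: string }>; response: Promise<unknown> },
): Promise<unknown> {
  for await (const part of stream.fullStream) {
    switch (part.type) {
      default:
        break; // handle each streamed part here
    }
  } // <-- the missing brace belongs here, closing the loop
  // Only after the stream is drained do we await the aggregate response.
  return await stream.response;
}
```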
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```diff
  }

- const merged = deepMergeRecord(existing, newConfig)
+ const merged = deepMergeRecord(newConfig, existing)
```
P1: Swapped argument order reverses merge priority: existing on-disk config now silently overrides freshly generated install selections. Previously newConfig (user's new choices) took precedence over existing (on-disk); now existing always wins on conflicting keys. If a user re-runs install to change settings, their new selections won't take effect for any key already present in the config file.
If the intent is to preserve user customizations while adding new keys, consider a more explicit strategy (e.g., only merge keys that don't exist yet), and document this behavior. If the intent is for new selections to win, restore the original argument order.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/cli/config-manager/write-omo-config.ts, line 46:
<comment>Swapped argument order reverses merge priority: existing on-disk config now silently overrides freshly generated install selections. Previously `newConfig` (user's new choices) took precedence over `existing` (on-disk); now `existing` always wins on conflicting keys. If a user re-runs `install` to change settings, their new selections won't take effect for any key already present in the config file.
If the intent is to preserve user customizations while adding new keys, consider a more explicit strategy (e.g., only merge keys that don't exist yet), and document this behavior. If the intent is for new selections to win, restore the original argument order.</comment>
<file context>
@@ -43,7 +43,7 @@ export function writeOmoConfig(installConfig: InstallConfig): ConfigMergeResult
}
- const merged = deepMergeRecord(existing, newConfig)
+ const merged = deepMergeRecord(newConfig, existing)
writeFileSync(omoConfigPath, JSON.stringify(merged, null, 2) + "\n")
} catch (parseErr) {
</file context>
```ts
) {
  secondaryTimeoutChecked = true
  // Check if session actually has pending work (children, todos, etc.)
  const childrenRes = await ctx.client.session.children({
```
P1: Missing error handling for API calls in the secondary timeout block. If session.children() or session.todo() throws, the entire polling loop crashes. Other API calls in this file (e.g., getMainSessionStatus) use try/catch — this block should too.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/cli/run/poll-for-completion.ts, line 134:
<comment>Missing error handling for API calls in the secondary timeout block. If `session.children()` or `session.todo()` throws, the entire polling loop crashes. Other API calls in this file (e.g., `getMainSessionStatus`) use try/catch — this block should too.</comment>
<file context>
@@ -81,6 +121,50 @@ export async function pollForCompletion(
+ ) {
+ secondaryTimeoutChecked = true
+ // Check if session actually has pending work (children, todos, etc.)
+ const childrenRes = await ctx.client.session.children({
+ path: { id: ctx.sessionID },
+ query: { directory: ctx.directory },
</file context>
```ts
    !!message.model?.modelID
}

function hasPartialAgentOrModel(message: StoredMessage): boolean {
```
P1: Bug: hasPartialAgentOrModel can still match compaction agent messages. When a message has agent: "compaction" but valid model fields, hasModel is true and the OR returns true, allowing a compaction message to leak through the fallback pass. The function should reject compaction agents outright before checking partial fields.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/features/background-agent/compaction-aware-message-resolver.ts, line 16:
<comment>Bug: `hasPartialAgentOrModel` can still match compaction agent messages. When a message has `agent: "compaction"` but valid model fields, `hasModel` is `true` and the OR returns `true`, allowing a compaction message to leak through the fallback pass. The function should reject compaction agents outright before checking partial fields.</comment>
<file context>
@@ -0,0 +1,57 @@
+ !!message.model?.modelID
+}
+
+function hasPartialAgentOrModel(message: StoredMessage): boolean {
+ const hasAgent = !!message.agent && !isCompactionAgent(message.agent)
+ const hasModel = !!message.model?.providerID && !!message.model?.modelID
</file context>
```yaml
# =============================================================================
publish:
  needs: build
  if: always() && !cancelled()
```
P1: Using if: always() && !cancelled() on the publish job means it runs even when core platform builds fail. Combined with continue-on-error: true on the download step, a failure in any critical platform (e.g., darwin-arm64, linux-x64) will be silently swallowed — the workflow can succeed while that platform's binary is missing from npm. Consider either: (a) separating baseline builds into a separate job with continue-on-error so core builds remain required, or (b) adding a validation step that checks needs.build.result == 'success' for non-baseline platforms before publishing.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/publish-platform.yml, line 203:
<comment>Using `if: always() && !cancelled()` on the publish job means it runs even when core platform builds fail. Combined with `continue-on-error: true` on the download step, a failure in any critical platform (e.g., `darwin-arm64`, `linux-x64`) will be silently swallowed — the workflow can succeed while that platform's binary is missing from npm. Consider either: (a) separating baseline builds into a separate job with `continue-on-error` so core builds remain required, or (b) adding a validation step that checks `needs.build.result == 'success'` for non-baseline platforms before publishing.</comment>
<file context>
@@ -150,12 +200,13 @@ jobs:
# =============================================================================
publish:
needs: build
+ if: always() && !cancelled()
runs-on: ubuntu-latest
strategy:
</file context>
```ts
if (cleanupRegistered) return
cleanupRegistered = true

const cleanupAll = () => {
```
P1: cleanupAll() can be called multiple times (signal→exit, beforeExit→exit, or rapid Ctrl+C), causing shutdown() to run repeatedly on each manager. Add a re-entrancy guard to ensure cleanup executes only once.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/features/background-agent/process-cleanup.ts, line 35:
<comment>`cleanupAll()` can be called multiple times (signal→exit, beforeExit→exit, or rapid Ctrl+C), causing `shutdown()` to run repeatedly on each manager. Add a re-entrancy guard to ensure cleanup executes only once.</comment>
<file context>
@@ -0,0 +1,81 @@
+ if (cleanupRegistered) return
+ cleanupRegistered = true
+
+ const cleanupAll = () => {
+ for (const m of cleanupManagers) {
+ try {
</file context>
```ts
})
const todos = normalizeSDKResponse(todosRes, [] as unknown[])

const hasActiveChildren =
```
P2: hasActiveChildren counts all children regardless of status, unlike hasActiveTodos which filters by status. A session with only idle/completed children will falsely trigger hasReceivedMeaningfulWork = true. Consider checking child session statuses (as done in completion.ts's areAllDescendantsIdle).
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/cli/run/poll-for-completion.ts, line 145:
<comment>`hasActiveChildren` counts all children regardless of status, unlike `hasActiveTodos` which filters by status. A session with only idle/completed children will falsely trigger `hasReceivedMeaningfulWork = true`. Consider checking child session statuses (as done in `completion.ts`'s `areAllDescendantsIdle`).</comment>
<file context>
@@ -81,6 +121,50 @@ export async function pollForCompletion(
+ })
+ const todos = normalizeSDKResponse(todosRes, [] as unknown[])
+
+ const hasActiveChildren =
+ Array.isArray(children) && children.length > 0
+ const hasActiveTodos =
</file context>
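A sketch of the status-aware filtering the review suggests; the status values are assumptions, since the real set lives in completion.ts:

```typescript
// Count only children whose status indicates pending work
// (status strings here are illustrative, not the SDK's actual values).
type ChildSession = { status?: string };
const ACTIVE_STATUSES = new Set(["running", "queued"]);

function hasActiveChildren(children: unknown): boolean {
  return (
    Array.isArray(children) &&
    children.some((c: ChildSession) => !!c.status && ACTIVE_STATUSES.has(c.status))
  );
}
```

With this shape, a session whose children are all idle or completed no longer sets `hasReceivedMeaningfulWork`.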
```ts
/**
 * Antigravity Provider Configuration
 *
 * IMPORTANT: Model names MUST use `antigravity-` prefix for stability.
```
P2: Stale documentation: docs/guide/installation.md still references the old antigravity-gemini-3-pro model name in 3 places. These should be updated to antigravity-gemini-3.1-pro to match this rename.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/cli/config-manager/antigravity-provider-configuration.ts, line 19:
<comment>Stale documentation: `docs/guide/installation.md` still references the old `antigravity-gemini-3-pro` model name in 3 places. These should be updated to `antigravity-gemini-3.1-pro` to match this rename.</comment>
<file context>
@@ -16,7 +16,7 @@ export const ANTIGRAVITY_PROVIDER_CONFIG = {
name: "Google",
models: {
- "antigravity-gemini-3-pro": {
+ "antigravity-gemini-3.1-pro": {
name: "Gemini 3 Pro (Antigravity)",
limit: { context: 1048576, output: 65535 },
</file context>
```diff
  name: "Google",
  models: {
-   "antigravity-gemini-3-pro": {
+   "antigravity-gemini-3.1-pro": {
```
P2: Display name not updated to match model version bump: key says 3.1 but name still says Gemini 3 Pro. Should be "Gemini 3.1 Pro (Antigravity)".
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/cli/config-manager/antigravity-provider-configuration.ts, line 19:
<comment>Display name not updated to match model version bump: key says `3.1` but `name` still says `Gemini 3 Pro`. Should be `"Gemini 3.1 Pro (Antigravity)"`.</comment>
<file context>
@@ -16,7 +16,7 @@ export const ANTIGRAVITY_PROVIDER_CONFIG = {
name: "Google",
models: {
- "antigravity-gemini-3-pro": {
+ "antigravity-gemini-3.1-pro": {
name: "Gemini 3 Pro (Antigravity)",
limit: { context: 1048576, output: 65535 },
</file context>
```diff
@@ -1,3 +1,7 @@
+export function isRecord(value: unknown): value is Record<string, unknown> {
```
P2: Duplicate of src/shared/record-type-guard.ts. Several modules already import the shared isRecord—reuse it here instead of adding another copy.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/features/background-agent/error-classifier.ts, line 1:
<comment>Duplicate of `src/shared/record-type-guard.ts`. Several modules already import the shared `isRecord`—reuse it here instead of adding another copy.</comment>
<file context>
@@ -1,3 +1,7 @@
+export function isRecord(value: unknown): value is Record<string, unknown> {
+ return typeof value === "object" && value !== null
+}
</file context>
```ts
task.queuedAt = new Date()
task.error = undefined

const key = task.model ? `${task.model.providerID}/${task.model.modelID}` : task.agent
```
P3: Dead code: task.agent fallback is unreachable since task.model was unconditionally assigned a few lines above. The ternary is always true. Consider simplifying to just the template literal.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/features/background-agent/fallback-retry-handler.ts, line 106:
<comment>Dead code: `task.agent` fallback is unreachable since `task.model` was unconditionally assigned a few lines above. The ternary is always true. Consider simplifying to just the template literal.</comment>
<file context>
@@ -0,0 +1,126 @@
+ task.queuedAt = new Date()
+ task.error = undefined
+
+ const key = task.model ? `${task.model.providerID}/${task.model.modelID}` : task.agent
+ const queue = queuesByKey.get(key) ?? []
+ const retryInput: LaunchInput = {
</file context>
```ts
describe("normalizeHashlineEdits", () => {
  it("maps replace with pos to replace", () => {
    //#given
    const input: RawHashlineEdit[] = [{ op: "replace", pos: "2#VK", lines: "updated" }]

    //#when
    const result = normalizeHashlineEdits(input)

    //#then
    expect(result).toEqual([{ op: "replace", pos: "2#VK", lines: "updated" }])
  })

  it("maps replace with pos and end to replace", () => {
    //#given
    const input: RawHashlineEdit[] = [{ op: "replace", pos: "2#VK", end: "4#MB", lines: ["a", "b"] }]
```
1. Given/when/then uses comments 📘 Rule violation ✓ Correctness
The new normalize-edits.test.ts uses //#given/#when/#then comments instead of the required nested describe blocks. This violates the required test structure convention.
Agent Prompt
## Issue description
Tests are using `//#given/#when/#then` comment markers instead of nested `describe` blocks with `#given/#when/#then` prefixes.
## Issue Context
The test style requirement is explicit about using nested `describe` blocks for given/when/then organization.
## Fix Focus Areas
- src/tools/hashline-edit/normalize-edits.test.ts[4-61]
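The required nested-describe structure looks roughly like this; the `describe`/`it` shims and the identity `normalizeHashlineEdits` are stand-ins so the sketch runs standalone (the real suite would import from bun:test and the real normalizer):

```typescript
// Minimal shims so this structural sketch runs without a test framework.
const executed: string[] = [];
const describe = (_name: string, fn: () => void) => fn();
const it = (name: string, fn: () => void) => { fn(); executed.push(name); };

// Hypothetical identity stand-in for the real normalizeHashlineEdits.
const normalizeHashlineEdits = (edits: object[]) => edits;

describe("normalizeHashlineEdits", () => {
  describe("#given a replace op with pos", () => {
    describe("#when normalized", () => {
      it("#then maps to replace", () => {
        const input = [{ op: "replace", pos: "2#VK", lines: "updated" }];
        const result = normalizeHashlineEdits(input);
        if (JSON.stringify(result) !== JSON.stringify(input)) throw new Error("mismatch");
      });
    });
  });
});
```

The point is that `#given`/`#when`/`#then` become nested `describe` levels rather than inline comments.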
```ts
function createMockClient() {
  return {
    session: {
      abort: mock(async () => ({})),
    },
  } as any
}

function createDefaultArgs(taskOverrides: Partial<BackgroundTask> = {}) {
  const processKeyFn = mock(() => {})
  const queuesByKey = new Map<string, Array<{ task: BackgroundTask; input: any }>>()
  const idleDeferralTimers = new Map<string, ReturnType<typeof setTimeout>>()
  const concurrencyManager = createMockConcurrencyManager()
```
2. as any in new test 📘 Rule violation ✓ Correctness
The new fallback-retry-handler.test.ts introduces as any casts, which suppress type safety. This violates the prohibition on type/lint suppression mechanisms in modified code.
Agent Prompt
## Issue description
New tests introduce `as any` casts, which suppress type safety and are explicitly disallowed.
## Issue Context
These casts are used to shape mock objects and to access mock methods on imported functions.
## Fix Focus Areas
- src/features/background-agent/fallback-retry-handler.test.ts[54-66]
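One way to drop the `as any` casts is to type the mock against a narrowed interface; `SessionClient` here is a hypothetical reduction of the real client type:

```typescript
// Narrowed client shape covering only what the test exercises (illustrative).
type SessionClient = { session: { abort: (args?: object) => Promise<object> } };

function createMockClient(): SessionClient {
  // The literal satisfies the interface, so no cast is needed.
  return {
    session: {
      abort: async () => ({}),
    },
  };
}
```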
```md
[sisyphus-bot]

## Confirmed Bug
```
3. .issue-comment-2064.md naming 📘 Rule violation ✓ Correctness
A new file .issue-comment-2064.md was added with a leading dot, which is not kebab-case. This violates the kebab-case-only requirement for new file names.
Agent Prompt
## Issue description
A newly added file name uses a leading dot and is not kebab-case.
## Issue Context
Compliance requires kebab-case for all newly added or renamed files/directories.
## Fix Focus Areas
- .issue-comment-2064.md[1-61]
src/plugin/event.ts (Outdated)

```ts
const SESSION_LIFECYCLE = ["session.idle", "session.created", "session.deleted", "session.status", "session.error", "session.updated"];
const MESSAGE_EVENTS = ["message.updated", "message.part.updated"];
const HOOK_SUBSCRIPTIONS: Record<string, string[] | "*"> = {
  // ALL events including deltas (transcript tracking, streaming output monitoring)
  claudeCodeHooks: "*",
  interactiveBashSession: "*",
  // Session lifecycle only
  sessionNotification: SESSION_LIFECYCLE,
  unstableAgentBabysitter: SESSION_LIFECYCLE,
  runtimeFallback: SESSION_LIFECYCLE,
  agentUsageReminder: SESSION_LIFECYCLE,
  categorySkillReminder: SESSION_LIFECYCLE,
  ralphLoop: SESSION_LIFECYCLE,
  stopContinuationGuard: SESSION_LIFECYCLE,
  backgroundNotificationHook: SESSION_LIFECYCLE,
  autoUpdateChecker: SESSION_LIFECYCLE,
  // Message events (no deltas)
  contextWindowMonitor: [...MESSAGE_EVENTS, "session.status"],
  anthropicContextWindowLimitRecovery: MESSAGE_EVENTS,
  compactionTodoPreserver: MESSAGE_EVENTS,
  writeExistingFileGuard: MESSAGE_EVENTS,
  todoContinuationEnforcer: MESSAGE_EVENTS,
  atlasHook: MESSAGE_EVENTS,
  // Chat-level events
  directoryAgentsInjector: ["session.created", "message.updated"],
  directoryReadmeInjector: ["session.created", "message.updated"],
  rulesInjector: ["session.created", "message.updated"],
  thinkMode: ["session.created", "message.updated"],
};
```
4. Broken hook subscriptions 🐞 Bug ✓ Correctness
HOOK_SUBSCRIPTIONS is used to skip hook invocations, but multiple hooks are subscribed to too-narrow event sets that omit event types their own handlers explicitly depend on. This will prevent core behaviors (todo continuation, auto-compact recovery, atlas continuation, runtime/background fallback, notifications) and some cleanups from ever running.
Agent Prompt
## Issue description
`src/plugin/event.ts` now filters hook invocations via `HOOK_SUBSCRIPTIONS`, but many hooks are subscribed to event sets that omit event types their own handlers explicitly process (e.g. `session.idle`, `session.error`, `session.compacted`, `session.stop`, `tool.execute.*`). This makes key hook logic unreachable and breaks functionality/cleanup.
## Issue Context
The dispatcher enforces the subscription list (`if (subs !== "*" && !subs.includes(eventType)) continue;`). Several hook implementations contain branches for events that are not in their configured subscription arrays.
## Fix Focus Areas
- src/plugin/event.ts[140-168]
- src/plugin/event.ts[199-208]
- src/hooks/todo-continuation-enforcer/handler.ts[32-68]
- src/hooks/anthropic-context-window-limit-recovery/recovery-hook.ts[39-140]
- src/hooks/atlas/event-handler.ts[26-43]
- src/hooks/atlas/event-handler.ts[178-204]
- src/hooks/runtime-fallback/event-handler.ts[190-195]
- src/hooks/session-notification.ts[136-185]
- src/features/background-agent/manager.ts[693-755]
## Suggested direction
1. Expand `SESSION_LIFECYCLE` to include at least `session.compacted` and `session.stop` (and any other session.* events used by hooks).
2. Fix individual subscriptions, e.g.:
- `todoContinuationEnforcer`: include `session.idle`, `session.error`, `session.deleted`, and tool events it watches.
- `anthropicContextWindowLimitRecovery`: include `session.error`, `session.idle`, `session.deleted`.
- `atlasHook`: include `session.idle`, `session.error`, `session.deleted`, `session.compacted`, and `tool.execute.*`.
- `runtimeFallback`: include `message.updated` and `session.stop`.
- `sessionNotification`: include `message.updated` and `tool.execute.*`.
- `backgroundNotificationHook`: likely `"*"` (it routes all events to `BackgroundManager`).
3. Alternatively (often simpler/safer): only special-case filtering for `message.part.delta` by maintaining a small allowlist of hooks for that single high-frequency event, and let all hooks receive other event types as before.
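Option 3 above can be sketched as a single allowlist applied only to the high-frequency delta event; the `DELTA_SUBSCRIBERS` membership is illustrative:

```typescript
// Only special-case the hot-path event; every other event type
// keeps reaching all hooks, as before this PR.
const DELTA_SUBSCRIBERS = new Set(["claudeCodeHooks", "interactiveBashSession"]);

function shouldDispatch(hookName: string, eventType: string): boolean {
  if (eventType !== "message.part.delta") return true; // no behavior change elsewhere
  return DELTA_SUBSCRIBERS.has(hookName);
}
```

This keeps the streaming-path win while eliminating the risk of too-narrow subscriptions starving hooks of events they handle.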
Code Review
This pull request introduces a wide range of improvements, with the primary goal of optimizing performance through a new event dispatching mechanism. While the core dispatcher code was not in the provided diffs, the other changes are substantial and enhance the project's robustness, feature set, and maintainability.
Key changes include:
- A critical bug fix in the configuration merging logic to correctly respect user overrides.
- Significant robustness improvements, such as adding an AVX2 fallback for native binaries and implementing watchdog timers and backoff mechanisms for long-running processes.
- New features, most notably support for Gemini models with tailored prompts and integration for git worktrees.
- Numerous refactorings that improve code structure, such as centralizing error classification and event handling logic.
My review identified one medium-severity issue regarding a schema change that weakens configuration validation. Overall, this is a high-quality contribution with many valuable enhancements.
```diff
  default_run_agent: z.string().optional(),
  disabled_mcps: z.array(AnyMcpNameSchema).optional(),
- disabled_agents: z.array(BuiltinAgentNameSchema).optional(),
+ disabled_agents: z.array(z.string()).optional(),
```
Changing disabled_agents from z.array(BuiltinAgentNameSchema) to z.array(z.string()) weakens configuration validation. This means typos in agent names will no longer be caught by the schema, potentially leading to agents not being disabled as expected. If the list of agents is now dynamic, it would be helpful to add a .describe() to the schema explaining this and pointing to where users can find the list of valid agent names. If the list is still static, restoring the stricter schema (BuiltinAgentNameSchema) would be preferable for user config safety.
…h dispatch overhead

Replace sequential all-hooks dispatch with subscription-based filtering. Each hook declares which event types it cares about via HOOK_SUBSCRIPTIONS. Critical hooks (claudeCodeHooks, stopContinuationGuard, writeExistingFileGuard) remain awaited; 18 other hooks fire-and-forget with error logging. During LLM streaming (~100 message.part.delta events/sec), 19 of 21 hooks are now skipped entirely, reducing async dispatch from ~2100/sec to ~200/sec.
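The awaited/fire-and-forget split this commit describes can be sketched as below; entry shapes and the `log` function are simplified stand-ins for the real `event.ts` implementations:

```typescript
type HookInvoker = (input: unknown) => void | Promise<void>;

// The three critical hooks that must block event processing.
const AWAITED_HOOKS = new Set([
  "claudeCodeHooks",
  "stopContinuationGuard",
  "writeExistingFileGuard",
]);

const log = (msg: string, data: unknown) => console.error(msg, data);

async function dispatchToHooks(
  entries: [string, HookInvoker, string[] | "*"][],
  eventType: string,
  input: unknown,
): Promise<void> {
  for (const [name, invoke, subs] of entries) {
    // Subscription filter: skip hooks that don't care about this event type.
    if (subs !== "*" && !subs.includes(eventType)) continue;
    if (AWAITED_HOOKS.has(name)) {
      await Promise.resolve(invoke(input)); // critical hooks block the event
    } else {
      // Fire-and-forget; the .then() wrapper also catches synchronous throws.
      Promise.resolve()
        .then(() => invoke(input))
        .catch((err) => log("[hook] error:", { hook: name, error: err }));
    }
  }
}
```

During streaming, most entries fail the subscription check and the loop does no async work for them, which is where the ~2,100/sec → ~200/sec reduction comes from.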
Force-pushed from 3d851eb to 39eabd8.
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/plugin/event.ts (1)
175-197: Avoid silent wildcard fallback for unmapped hooks. Using `HOOK_SUBSCRIPTIONS[name] ?? "*"` hides missed mappings and can silently regress dispatch filtering. Consider surfacing missing entries (e.g., a startup warning or assertion) so new hooks are explicitly classified.

♻️ Suggested hardening

```diff
- ] as [string, HookInvoker][]).map(([name, fn]) => [name, fn, HOOK_SUBSCRIPTIONS[name] ?? "*"] as [string, HookInvoker, string[] | "*"]);
+ ] as [string, HookInvoker][]).map(([name, fn]) => {
+   const subs = HOOK_SUBSCRIPTIONS[name];
+   if (!subs) {
+     log("[event] missing HOOK_SUBSCRIPTIONS entry; defaulting to '*'", { hook: name });
+   }
+   return [name, fn, subs ?? "*"] as [string, HookInvoker, string[] | "*"];
+ });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugin/event.ts` around lines 175 - 197, The current hookEntries builder silently falls back to "*" via HOOK_SUBSCRIPTIONS[name] ?? "*" which hides missing mappings; update the map that constructs hookEntries to explicitly detect when HOOK_SUBSCRIPTIONS[name] is undefined and surface it (e.g., call a startup assertion or emit a clear processLogger.warn/error mentioning the hook name) instead of defaulting to "*", and then use an explicit default like an empty array or fail-fast in development; change the expression HOOK_SUBSCRIPTIONS[name] ?? "*" to an inline check (e.g., const subs = HOOK_SUBSCRIPTIONS[name]; if (!subs) { processLogger.warn(`Missing HOOK_SUBSCRIPTIONS entry for ${name}`); /* or throw new Error(...) */ } ) and return subs or [] accordingly so new hooks cannot regress dispatch filtering silently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/plugin/event.ts`:
- Around line 205-207: The current fire-and-forget path calls invoke(input)
directly so a synchronous throw escapes the .catch; change the wrapping to start
the chain with Promise.resolve().then(() => invoke(input)).catch(err =>
log("[hook] error:", { hook: name, error: err })), so both sync and async errors
from invoke are caught; update the else block that references invoke, log, name,
and input accordingly (do not await the promise).
---
Nitpick comments:
In `@src/plugin/event.ts`:
- Around line 175-197: The current hookEntries builder silently falls back to
"*" via HOOK_SUBSCRIPTIONS[name] ?? "*" which hides missing mappings; update the
map that constructs hookEntries to explicitly detect when
HOOK_SUBSCRIPTIONS[name] is undefined and surface it (e.g., call a startup
assertion or emit a clear processLogger.warn/error mentioning the hook name)
instead of defaulting to "*", and then use an explicit default like an empty
array or fail-fast in development; change the expression
HOOK_SUBSCRIPTIONS[name] ?? "*" to an inline check (e.g., const subs =
HOOK_SUBSCRIPTIONS[name]; if (!subs) { processLogger.warn(`Missing
HOOK_SUBSCRIPTIONS entry for ${name}`); /* or throw new Error(...) */ } ) and
return subs or [] accordingly so new hooks cannot regress dispatch filtering
silently.
…cription map

- Add session.compacted to SESSION_LIFECYCLE (it's a lifecycle event)
- Add session.deleted to hooks that do cleanup: contextWindowMonitor, compactionTodoPreserver, writeExistingFileGuard, todoContinuationEnforcer, directoryReadmeInjector, thinkMode
- Add session.compacted to hooks that reset state on compaction: compactionTodoPreserver, atlasHook, directoryAgentsInjector, directoryReadmeInjector, rulesInjector
- Fix sync throw escaping fire-and-forget catch by wrapping invoke in Promise.resolve().then() instead of Promise.resolve(invoke())
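The sync-throw fix in the last bullet hinges on where the hook is invoked relative to the promise chain; a small self-contained demonstration (illustrative names, not the project's code):

```typescript
const caught: string[] = [];

// A hook that throws synchronously before ever returning a promise.
function badHook(): Promise<void> {
  throw new Error("sync boom");
}

// Unsafe wrapping: badHook() is evaluated *before* Promise.resolve runs,
// so the throw propagates to the caller and .catch never attaches.
let escaped = false;
try {
  Promise.resolve(badHook()).catch(() => caught.push("unsafe"));
} catch {
  escaped = true;
}

// Safe wrapping: badHook() is called inside the .then callback, so the
// throw becomes a rejection that the .catch handler observes.
void Promise.resolve()
  .then(() => badHook())
  .catch(() => caught.push("safe"));
```

This is why `Promise.resolve().then(() => invoke(input)).catch(...)` is the correct fire-and-forget shape when a hook might throw synchronously.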
🧹 Nitpick comments (1)
src/plugin/event.ts (1)
199-208: Sync-throw handling is now correct; consider adding a `void` prefix. The fix using `Promise.resolve().then(() => invoke(input))` properly catches synchronous throws since `invoke` is called inside the `.then()` callback. However, the fire-and-forget promise at line 206 lacks an explicit `void` prefix, which some linters flag as an unhandled promise.

✨ Optional: Add void prefix for explicit intent

```diff
  } else {
-   Promise.resolve().then(() => invoke(input)).catch((err) => log("[hook] error:", { hook: name, error: err }));
+   void Promise.resolve().then(() => invoke(input)).catch((err) => log("[hook] error:", { hook: name, error: err }));
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/plugin/event.ts` around lines 199 - 208, In dispatchToHooks, for the fire-and-forget branch (when a hook name is not in AWAITED_HOOKS) prefix the Promise.resolve().then(() => invoke(input)).catch(...) call with void to mark the returned promise as intentionally unawaited; locate the loop over hookEntries and the non-awaited branch that currently calls invoke(input) inside Promise.resolve().then and change it to use a void prefix while preserving the .catch handler so synchronous throws are still caught.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/plugin/event.ts`:
- Around line 199-208: In dispatchToHooks, for the fire-and-forget branch (when
a hook name is not in AWAITED_HOOKS) prefix the Promise.resolve().then(() =>
invoke(input)).catch(...) call with void to mark the returned promise as
intentionally unawaited; locate the loop over hookEntries and the non-awaited
branch that currently calls invoke(input) inside Promise.resolve().then and
change it to use a void prefix while preserving the .catch handler so
synchronous throws are still caught.
Summary
- `HOOK_SUBSCRIPTIONS` mapping: each of 21 hooks declares which event types it cares about
- During LLM streaming (`message.part.delta` events at ~100/sec), 19 of 21 hooks are now skipped entirely
- Critical hooks (`claudeCodeHooks`, `stopContinuationGuard`, `writeExistingFileGuard`) are awaited; 18 others fire-and-forget with `.catch(log)` error handling

Problem
The event dispatcher in `event.ts` dispatches every event (including high-frequency `message.part.delta` tokens) to all 21 hooks sequentially with `await`. At ~100 delta events/sec, this means ~2,100 async hook invocations per second. Most hooks don't care about delta events at all.

Combined with:

- `reconcile()` O(n) deep-diffing per token (separate PR)

This creates enough GC pressure to trigger the bmalloc hang, freezing the TUI.
Changes

`src/plugin/event.ts`

New constants:

- `SESSION_LIFECYCLE`: event types for session-level hooks
- `MESSAGE_EVENTS`: event types for message-level hooks (no deltas)
- `HOOK_SUBSCRIPTIONS`: maps each hook name → subscribed event types or `"*"` (all)
- `AWAITED_HOOKS`: Set of 3 hooks that must be awaited
dispatchToHooks):await Promise.resolve(invoke(input))Promise.resolve(invoke(input)).catch(err => log(...))Hook subscription mapping:
claudeCodeHooks*(all events)interactiveBashSession*(all events)sessionNotificationunstableAgentBabysitterruntimeFallbackcontextWindowMonitorwriteExistingFileGuardtodoContinuationEnforcerdirectoryAgentsInjectorPart of a 3-layer fix
Verification

- `bun x tsc --noEmit` — clean, no errors

Summary by cubic
Adds subscription-based event filtering so only relevant hooks run per event, and completes lifecycle/cleanup subscriptions. This cuts hot-path dispatch during LLM token streaming (message.part.delta) from ~2,100 to ~200 async calls/sec and reduces GC pressure and UI stalls.
Refactors
Bug Fixes
Written for commit cae1645. Summary will update on new commits.
Summary by CodeRabbit