feat: CompanyOS / Switchboard backend implementation #23105
Conversation
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode; when this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Warning: You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again!
Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 9 minutes and 13 seconds.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered using the … We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (67)
Walkthrough

A comprehensive Go-To-Market (GTM) module is introduced with database schema, service layer classes, REST API routes, and authentication middleware. GitHub Actions workflows are updated to use Corepack for pnpm installation instead of the previous approach.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client
    participant Express as Express Route
    participant AutoSvc as AutomationService
    participant PipelineSvc as PipelineService
    participant TaskSvc as TaskService
    participant KpiSvc as KpiService
    participant DashboardSvc as DashboardService
    participant DB as Database

    Client->>Express: POST /actions/daily-command-brief
    activate Express
    Express->>AutoSvc: generateDailyCommandBrief(tenantId, userId)
    activate AutoSvc
    AutoSvc->>DB: claimAutomationRun (idempotent)
    activate DB
    DB-->>AutoSvc: automation run record or null
    deactivate DB
    alt Run already claimed
        AutoSvc->>DB: SELECT existing run
        DB-->>AutoSvc: return cached result
    else First claim
        par Fetch data concurrently
            AutoSvc->>PipelineSvc: atRisk(tenantId)
            PipelineSvc->>DB: query at-risk opportunities
            DB-->>PipelineSvc: opportunities
            PipelineSvc-->>AutoSvc: at-risk data
        and
            AutoSvc->>TaskSvc: list(tenantId)
            TaskSvc->>DB: query tasks
            DB-->>TaskSvc: tasks
            TaskSvc-->>AutoSvc: tasks
        and
            AutoSvc->>KpiSvc: offTrack(tenantId)
            KpiSvc->>DB: query off-track KPIs
            DB-->>KpiSvc: KPIs
            KpiSvc-->>AutoSvc: KPI data
        end
        AutoSvc->>DB: UPDATE automation_run SET status='completed', run_output=JSON
        AutoSvc->>DashboardSvc: refreshSnapshot()
        DashboardSvc->>DB: REFRESH MATERIALIZED VIEW
        DB-->>DashboardSvc: done
    end
    AutoSvc-->>Express: result object
    deactivate AutoSvc
    Express-->>Client: 200 JSON response
    deactivate Express

    Client->>Express: POST /actions/momentum-recovery
    activate Express
    Express->>AutoSvc: runMomentumRecovery(tenantId, userId)
    activate AutoSvc
    AutoSvc->>TaskSvc: recoveryCandidates(tenantId)
    TaskSvc->>DB: query 15-min low-energy tasks
    DB-->>TaskSvc: candidate tasks
    TaskSvc-->>AutoSvc: candidates
    AutoSvc->>DB: claimAutomationRun (hourly idempotency)
    DB-->>AutoSvc: automation run or null
    alt Run already claimed
        AutoSvc-->>Express: { reused: true, selected }
    else First claim
        AutoSvc->>DB: INSERT INTO gtm_recovery_log
        DB-->>AutoSvc: logged
        AutoSvc-->>Express: { selectedTask, focusMode: 'Recovery' }
    end
    deactivate AutoSvc
    deactivate Express
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Actionable comments posted: 15
🧹 Nitpick comments (8)
.github/actions/setup-turbo/action.yml (1)
5-10: Add Node.js prerequisite check before using Corepack (unused action).

This action currently isn't used in any workflows, but `corepack enable` on line 8 requires Node.js to be pre-installed. If this action is called in the future, it will fail without upstream Node.js setup. Consider adding a verification step:
Suggested hardening
```diff
+      - name: Verify Node.js is available
+        shell: bash
+        run: node --version
+
       - name: Enable corepack and install pnpm
         shell: bash
         run: |
           corepack enable
           corepack install --global pnpm@9.15.4
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.github/actions/setup-turbo/action.yml around lines 5 - 10, the action runs "corepack enable", "corepack install --global pnpm@9.15.4" and "pnpm i --frozen-lockfile" but lacks a Node.js prerequisite check; add a preflight that ensures Node is available (either insert an actions/setup-node step to install a supported Node version or run a simple verification like "node --version" / "which node" and fail with a clear message) before invoking "corepack enable" so the action won't error when Node is not present.

server/src/gtm/services/KpiService.ts (1)
11-36: Run independent queries in parallel to reduce latency.

The three aggregate queries (`q1`, `q2`, `q3`) are independent and can be executed concurrently using `Promise.all`, reducing total execution time.

♻️ Proposed parallel execution
```diff
 async deriveAll(tenantId: string) {
-  const q1 = await this.one<{ value: number }>(
+  const [q1, q2, q3] = await Promise.all([
+    this.one<{ value: number }>(
     `SELECT COUNT(*)::int AS value FROM maestro.gtm_meetings WHERE tenant_id = $1 AND date_time >= date_trunc('week', now())`,
     [tenantId],
-  ).catch(() => ({ value: 0 }));
-
-  const q2 = await this.one<{ value: number }>(
+    ).catch(() => ({ value: 0 })),
+    this.one<{ value: number }>(
     `SELECT COUNT(*)::int AS value FROM maestro.gtm_meetings WHERE tenant_id = $1 AND follow_up_due IS NOT NULL AND next_step_confirmed = true AND meeting_outcome IN ('Great','Good') AND updated_at >= now() - interval '7 days'`,
     [tenantId],
-  ).catch(() => ({ value: 0 }));
-
-  const q3 = await this.one<{ value: number }>(
+    ).catch(() => ({ value: 0 })),
+    this.one<{ value: number }>(
     `SELECT COALESCE(SUM(weighted_value),0)::numeric AS value FROM maestro.gtm_pipeline WHERE tenant_id = $1 AND created_at >= date_trunc('month', now())`,
     [tenantId],
-  ).catch(() => ({ value: 0 }));
+    ).catch(() => ({ value: 0 })),
+  ]);
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@server/src/gtm/services/KpiService.ts` around lines 11 - 36, the three independent DB calls in deriveAll (the this.one invocations assigned to q1, q2, q3) should be executed concurrently to reduce latency: invoke the three this.one(...) promises in parallel using Promise.all (or Promise.allSettled) and then assign q1/q2/q3 from the results, preserving the existing fallback to { value: 0 } for failed queries; update deriveAll to build the three SQL promise expressions (same queries) and await them together before continuing.

server/src/gtm/sql/20260405_0001_gtm_core.up.sql (2)
72-74: Generated column assumes probability is 0-100 scale.

The formula `estimated_value * probability_to_close / 100.0` expects probability as a percentage (e.g., 50 for 50%). If probability is stored as a decimal (e.g., 0.5), the result will be incorrect. Document the expected scale or add a CHECK constraint.

📝 Optional: Add CHECK constraint to enforce scale
```sql
probability_to_close NUMERIC CHECK (probability_to_close >= 0 AND probability_to_close <= 100),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/gtm/sql/20260405_0001_gtm_core.up.sql` around lines 72 - 74, The generated column weighted_value uses the expression COALESCE(estimated_value,0) * COALESCE(probability_to_close,0) / 100.0 which assumes probability_to_close is a 0–100 percentage; either document that scale or enforce it and/or adjust the formula: add a CHECK constraint on probability_to_close (e.g., CHECK (probability_to_close >= 0 AND probability_to_close <= 100)) to guarantee percentage values, or change the calculation to divide by 1.0 (or multiply by probability_to_close directly) if probability_to_close is stored as a decimal 0.0–1.0; update the schema around the probability_to_close and weighted_value definitions accordingly.
175-187: `gtm_founder_state` may benefit from an index on `entry_date`.

The `FounderStateService.today()` method queries `WHERE entry_date = CURRENT_DATE`. An index on `(tenant_id, entry_date)` would optimize this lookup, especially as the table grows.

📊 Suggested index
```sql
CREATE INDEX idx_gtm_founder_state_tenant_date
  ON maestro.gtm_founder_state (tenant_id, entry_date);
```

This may already be covered by the UNIQUE constraint on line 186.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/src/gtm/sql/20260405_0001_gtm_core.up.sql` around lines 175 - 187, the table maestro.gtm_founder_state is frequently queried by FounderStateService.today() using WHERE entry_date = CURRENT_DATE; add an index on (tenant_id, entry_date) to speed that lookup by creating idx_gtm_founder_state_tenant_date on maestro.gtm_founder_state (tenant_id, entry_date), or explicitly confirm in the migration that the existing UNIQUE (tenant_id, entry_date) constraint will provide the same btree index before skipping the separate CREATE INDEX.

server/src/gtm/services/FounderStateService.ts (1)
4-10: Consider using `one` with error handling for single-row queries.

Using `many` with `LIMIT 1` and then extracting `rows[0]` works, but the `one` method is semantically clearer for single-row lookups. Also, typing as `any` loses type safety.

♻️ Proposed refactor using `one` with catch
```diff
-  async today(tenantId: string) {
-    const rows = await this.many<any>(
+  async today(tenantId: string): Promise<FounderState | null> {
+    return this.one<FounderState>(
       `SELECT * FROM maestro.gtm_founder_state WHERE tenant_id = $1 AND entry_date = CURRENT_DATE LIMIT 1`,
       [tenantId],
-    );
-    return rows[0] ?? null;
+    ).catch(() => null);
   }
```

Where `FounderState` is a proper interface matching the table schema.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@server/src/gtm/services/FounderStateService.ts` around lines 4 - 10, the today method uses many<any> with LIMIT 1 which loses type safety and intent; change it to use this.one<FounderState>(...) for a single-row fetch in the today function and add error handling to return null when no row is found (catch the DB "no data" / QueryResultError or equivalent), ensuring you define/use a FounderState interface matching the table schema instead of any; keep the same SQL and parameter, but replace rows[0] logic with the typed one call plus a catch that returns null.

server/src/gtm/services/ArtifactService.ts (1)
11-44: Use a typed interface instead of `any` for the input parameter.

The `input: any` type bypasses compile-time checks. Even with Zod validation at the route layer, a typed parameter here would catch mismatches between the schema and service.

♻️ Proposed typed interface
```typescript
interface CreateArtifactInput {
  name: string;
  artifactId: string;
  type: string;
  audience: string[];
  funnelStage: string;
  primaryUse: string;
  status: string;
  format: string;
  url?: string;
  owner?: string;
  freshnessSlaDays: number;
  performanceScore: number;
  usageNotes?: string;
  winSignals?: string;
  lossSignals?: string;
  anchorNarrativeIncluded: boolean;
  categoryNarrative?: string;
  canonicalVersion: boolean;
  kpiIds: string[];
}

async create(tenantId: string, input: CreateArtifactInput) { ... }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@server/src/gtm/services/ArtifactService.ts` around lines 11 - 44, replace the untyped input parameter in the create method with a proper TypeScript interface: define a CreateArtifactInput interface matching the expected fields (name, artifactId, type, audience, funnelStage, primaryUse, status, format, optional url/owner/usageNotes/winSignals/lossSignals/categoryNarrative, freshnessSlaDays, performanceScore, anchorNarrativeIncluded, canonicalVersion, kpiIds) and then change the create(tenantId: string, input: any) signature to create(tenantId: string, input: CreateArtifactInput) so the service benefits from compile-time checks; keep the same field order and null-coalescing behavior used inside the create method when mapping fields to the SQL parameter array.

server/src/gtm/services/BaseService.ts (1)
6-10: Type assertion without runtime validation.

The `as T` cast trusts that the database row matches the expected type. This is a common pattern, but callers must ensure their queries return the expected shape. Consider documenting this contract or using Zod parsing at the call sites for critical paths.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@server/src/gtm/services/BaseService.ts` around lines 6 - 10, the one<T> method in BaseService performs a blind type assertion (returning result.rows[0] as T) which can hide mismatches between DB rows and expected shapes; update callers of BaseService.one<T> (or the method itself) to perform runtime validation: either add explicit Zod/schema parsing at call sites before using the returned value or change one<T> to accept an optional parser/validator (e.g., a Zod schema or validate function) and throw a descriptive error on parse failure; also add a short doc comment on BaseService.one describing the contract that callers are responsible for validating the returned shape if no validator is provided.

server/src/middleware/requireUser.ts (1)
14-17: Consider a custom HTTP error class for cleaner status code handling.

The `@ts-expect-error` escape hatch works but obscures the intent. A custom error class with a `status` property would be type-safe and self-documenting.

♻️ Example custom error class
```typescript
// server/src/errors/HttpError.ts
export class HttpError extends Error {
  constructor(public status: number, message: string) {
    super(message);
    this.name = 'HttpError';
  }
}

// In requireUser.ts
import { HttpError } from '../errors/HttpError.js';

export function requireUser(req: Request, _res: Response, next: NextFunction) {
  if (!req.user?.id || !req.tenantId) {
    throw new HttpError(401, 'Unauthorized');
  }
  next();
}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@server/src/middleware/requireUser.ts` around lines 14 - 17, replace the error hack in requireUser by introducing a typed HttpError class and using it instead of mutating Error: add a class HttpError extends Error with a public status: number and proper constructor (e.g., class HttpError { constructor(public status: number, message: string) {...} }), export it, import it into requireUser, remove the `@ts-expect-error`, and throw new HttpError(401, 'Unauthorized') inside the requireUser function so status codes are type-safe and self-documenting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/actions/setup-pnpm/action.yml:
- Around line 17-27: Move the "Setup Node.js" step (uses: actions/setup-node@v4
with node-version: ${{ inputs.node-version }}) to run before the "Enable
corepack and install pnpm" step so corepack runs against the configured Node.js;
update action.yml to place the step that runs corepack enable and corepack
install --global pnpm@9.15.4 after the Setup Node.js step and keep the shell:
bash/run commands and node-version input unchanged.
In @.github/actions/setup-toolchain/action.yml:
- Around line 23-33: Move the "Setup Node.js" step (uses: actions/setup-node@v4
with node-version: ${{ inputs.node-version }}) above the "Enable corepack and
install pnpm" step so corepack runs with the requested Node version, and update
the corepack install command to use the pnpm input instead of a hardcoded
version by replacing pnpm@9.15.4 with pnpm@${{ inputs.pnpm-version || '9.15.4'
}} (keep the step name "Enable corepack and install pnpm" and the corepack
commands corepack enable / corepack install --global pnpm@...).
In @.github/actions/setup/action.yml:
- Around line 20-30: Move the "Enable corepack and install pnpm" step so it runs
after the "Setup Node.js" step (the step using actions/setup-node@v4 with
node-version: ${{ inputs.node-version }}); this ensures corepack enable and
corepack install --global pnpm@9.15.4 execute with the configured Node.js
version. Keep the step name "Enable corepack and install pnpm", the shell: bash
and the run commands (corepack enable; corepack install --global pnpm@9.15.4)
but place that entire step below the "Setup Node.js" step in the action.yml
workflow.
In `@server/src/gtm/routes/gtmRoutes.ts`:
- Around line 51-55: The route handler registered with router.get('/tasks',
asyncHandler(...)) currently treats any string in req.query.energy as valid and
calls deps.taskService.forEnergy, causing unknown filters like ?energy=foo to
quietly return an empty list; change the handler to validate req.query.energy
against the allowed energy values (enumeration or known set) before calling
deps.taskService.forEnergy, and if the value is not in that allowed set respond
with a 400 Bad Request (with a clear error message) instead of delegating to
forEnergy—keep using req.tenantId!, deps.taskService.forEnergy and
deps.taskService.list but gate the call with the whitelist check.
- Around line 46-49: The route handler for router.post('/pipeline/:id/advance')
is just casting req.body to extract nextStage and lacks validation; update the
route boundary (the asyncHandler callback) to validate that req.body.nextStage
exists and is a non-empty string (or use the same validation middleware/pattern
used by other POST routes) and return a 400/validation error if missing or
invalid, before calling deps.pipelineService.advanceStage with req.tenantId,
req.params.id, nextStage, req.user!.id.
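A minimal sketch of the body validation that fix calls for, written as a standalone helper (the helper name is an assumption for illustration):

```typescript
// Returns a usable stage name, or null when the request body fails validation,
// in which case the route should respond 400 before touching PipelineService.
function extractNextStage(body: unknown): string | null {
  if (typeof body !== 'object' || body === null) return null;
  const value = (body as Record<string, unknown>).nextStage;
  if (typeof value !== 'string' || value.trim().length === 0) return null;
  return value;
}
```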
In `@server/src/gtm/services/ArtifactService.ts`:
- Around line 4-9: The list(tenantId: string) method issues an unbounded SELECT
which can return huge result sets; modify the ArtifactService.list signature to
accept pagination parameters (e.g., pageSize and pageToken/offset or limit and
offset), add validation and a sensible default/maximum for pageSize, change the
SQL in the call to this.many to include LIMIT $2 and OFFSET $3 (or use a
cursor/updated_at > $2 pattern if using cursor pagination), pass those values in
the parameter array, and update any callers to supply/propagate the new
pagination args so results are bounded and memory-safe when querying
maestro.gtm_artifacts.
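The pagination clamping described above can be sketched as a small helper that turns untrusted input into safe `LIMIT`/`OFFSET` values. The default and maximum page sizes here are assumptions, not the project's actual limits:

```typescript
const DEFAULT_PAGE_SIZE = 50; // assumed default
const MAX_PAGE_SIZE = 200;    // assumed hard cap

// Clamp untrusted pagination input into bounded LIMIT/OFFSET values.
function toLimitOffset(pageSize?: number, page?: number): { limit: number; offset: number } {
  const size = Math.min(Math.max(Math.trunc(pageSize ?? DEFAULT_PAGE_SIZE) || 1, 1), MAX_PAGE_SIZE);
  const pageIndex = Math.max(Math.trunc(page ?? 1) || 1, 1);
  return { limit: size, offset: (pageIndex - 1) * size };
}
```

The service would then append `LIMIT $2 OFFSET $3` to the query and pass `limit` and `offset` in the parameter array.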
In `@server/src/gtm/services/AutomationService.ts`:
- Around line 36-67: In generateDailyCommandBrief, a downstream exception leaves
the automation run stuck as 'running' and causes the function to return the raw
automation row on subsequent calls; wrap the main work after
claimAutomationRun(...) in a try/catch: in the try branch run the Promise.all,
compute result and perform the existing UPDATE to set status='completed',
run_output and finished_at; in the catch branch UPDATE the same
maestro.gtm_automation_runs row to set status='failed', finished_at=now() and
run_output to a JSON error object (include error.message/stack), then rethrow or
return an appropriate error; also change the early-return path (the branch when
claim is falsy) to SELECT the existing run by idempotency_key and only
return/parsing run_output when that row's status = 'completed' (otherwise
proceed to attempt a new run or return a consistent error), so
claimAutomationRun, generateDailyCommandBrief, and the UPDATE statements are the
referenced loci to modify.
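The claim-then-complete-or-fail shape that fix describes can be sketched with an in-memory stand-in for `gtm_automation_runs`. Names are illustrative, the real code is async against Postgres, and the point is only the state transitions:

```typescript
type RunStatus = 'running' | 'completed' | 'failed';
interface RunRow { status: RunStatus; runOutput: string | null }

// In-memory stand-in for gtm_automation_runs, keyed by idempotency_key.
const runs = new Map<string, RunRow>();

function claimRun(key: string): boolean {
  if (runs.has(key)) return false; // already claimed by an earlier call
  runs.set(key, { status: 'running', runOutput: null });
  return true;
}

function runOnce(key: string, work: () => string): string {
  if (!claimRun(key)) {
    const existing = runs.get(key)!;
    if (existing.status === 'completed') return existing.runOutput!;
    // Never hand back a raw 'running'/'failed' row as if it were a result.
    throw new Error(`run ${key} is ${existing.status}`);
  }
  try {
    const output = work();
    runs.set(key, { status: 'completed', runOutput: output });
    return output;
  } catch (err) {
    // Mark failed so the row is never stuck in 'running'.
    runs.set(key, { status: 'failed', runOutput: String(err) });
    throw err;
  }
}
```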
- Around line 70-87: In runMomentumRecovery, call claimAutomationRun before
selecting a candidate and then persist the chosen result into the
gtm_automation_runs row (including run_output, status/completed timestamp or
failed) so subsequent conflicting claims return the stored output; specifically,
move the computation of selected (currently candidates[0] ?? null) to after the
claim succeeds, write the selectedTask/focusMode (and any reason/log) into the
existing gtm_automation_runs entry for the key, and ensure claimAutomationRun
returns/reloads the stored run_output when it indicates a reused run so the
method returns the original selected task instead of recalculating.
In `@server/src/gtm/services/DashboardService.ts`:
- Around line 4-9: getSnapshot currently calls this.one which throws when no row
exists, causing errors for new tenants; modify getSnapshot to handle missing
snapshots by using a non-throwing query helper (e.g., this.oneOrNone or
equivalent) or catch the specific "Not found" error and return null; update the
method signature/return to allow null and ensure callers of getSnapshot (search
for getSnapshot usages) handle a null result or trigger snapshot creation if
required.
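The non-throwing lookup shape that fix asks for can be sketched with a stand-in row type (the real service queries a materialized view):

```typescript
interface Snapshot { tenantId: string; weightedPipeline: number }

// oneOrNone-style helper: a missing row is a normal outcome, not an error.
function firstOrNull<T>(rows: T[]): T | null {
  return rows.length > 0 ? rows[0] : null;
}

function getSnapshot(rows: Snapshot[]): Snapshot | null {
  return firstOrNull(rows); // new tenants have no snapshot yet; caller handles null
}
```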
In `@server/src/gtm/services/KpiService.ts`:
- Line 17: The current .catch(() => ({ value: 0 })) calls in KpiService silently
swallow all errors; update those catch handlers inside the KpiService methods to
either (1) catch only the expected "no rows" / empty-result condition and return
{ value: 0 }, or (2) log the real error via the service logger (e.g.,
this.logger.error(...) or similar) and rethrow or return a safe default; change
the three occurrences (the one shown plus the ones referenced at 28 and 35) so
they do not suppress connection/query errors without logging, and ensure the log
includes the caught error and context (method name and query identifier) to aid
debugging.
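The "catch only the expected case" pattern above can be sketched synchronously; `NoDataError` stands in for whatever error the DB layer raises on an empty result:

```typescript
class NoDataError extends Error {}

type Log = (context: string, err: unknown) => void;

// Return 0 only for the expected empty-result case; surface everything else.
function countOrZero(query: () => number, log: Log, context: string): number {
  try {
    return query();
  } catch (err) {
    if (err instanceof NoDataError) return 0; // expected: no rows
    log(context, err);                        // unexpected: connection/SQL error
    throw err;
  }
}
```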
In `@server/src/gtm/services/PipelineService.ts`:
- Around line 68-84: The advanceStage function currently reads the current stage
then updates it in separate statements causing race conditions; make the
operation atomic by wrapping it in a transaction and either (a) SELECT ... FOR
UPDATE the row (use the same query that reads into `current` inside a
transaction) then validate `ALLOWED_STAGE_TRANSITIONS[current.stage]` and
perform the UPDATE, or (b) perform a conditional UPDATE that includes the
previous stage in the WHERE clause (e.g. WHERE tenant_id = $1 AND id = $2 AND
stage = $4) and check affected rows to detect a concurrent change; update the
code paths around `advanceStage`, `current`, `updated`, and the calls to
`this.one` to use the chosen transactional/conditional approach and throw the
invalid-transition error when the condition fails.
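Option (b), the conditional UPDATE, is essentially a compare-and-swap; it can be sketched in memory. The transition table here is an illustrative assumption, not the project's actual `ALLOWED_STAGE_TRANSITIONS`:

```typescript
// Illustrative transition table, not the real ALLOWED_STAGE_TRANSITIONS.
const ALLOWED: Record<string, string[]> = {
  Qualified: ['Proposal'],
  Proposal: ['Negotiation', 'Closed Lost'],
};

interface OpportunityRow { id: string; stage: string }

// Mirrors `UPDATE ... WHERE id = $2 AND stage = $4`: returns false when zero
// rows would be affected because a concurrent writer already moved the stage.
function advanceStage(row: OpportunityRow, expectedStage: string, nextStage: string): boolean {
  if (!(ALLOWED[expectedStage] ?? []).includes(nextStage)) {
    throw new Error(`invalid transition ${expectedStage} -> ${nextStage}`);
  }
  if (row.stage !== expectedStage) return false; // concurrent change detected
  row.stage = nextStage;
  return true;
}
```

In the real service, the `false` branch corresponds to checking the affected-row count of the UPDATE and retrying or erroring.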
In `@server/src/gtm/sql/20260405_0001_gtm_core.up.sql`:
- Around line 8-29: Add B-tree indexes for tenant_id on the listed GTM tables in
the follow-on migration so multi-tenant queries are not forced to full scans:
add CREATE INDEX statements for maestro.gtm_kpis (tenant_id),
maestro.gtm_script_outcomes (tenant_id), maestro.gtm_founder_state (tenant_id),
maestro.gtm_artifact_usage (tenant_id), and maestro.gtm_recovery_log (tenant_id)
in 20260405_0002_gtm_views_and_indexes.up.sql; where queries commonly filter by
another column (e.g., kpi_id or script_id), prefer composite indexes like
(tenant_id, kpi_id) or (tenant_id, script_id) accordingly, and ensure matching
DROP INDEX statements are added to the corresponding .down migration.
In `@server/src/gtm/sql/20260405_0002_gtm_views_and_indexes.up.sql`:
- Line 28: The summed weighted_pipeline expression can return NULL for tenants
with no pipeline rows; wrap the existing expression sum(weighted_value) FILTER
(WHERE section = 'weighted_pipeline') in COALESCE(..., 0) so the alias
weighted_pipeline always yields 0 instead of NULL, preserving the snapshot shape
and numeric dashboard metric.
In `@server/src/gtm/validators/gtmSchemas.ts`:
- Around line 60-62: The schema currently allows arbitrary strings for duration,
leverage, and status (in gtmSchemas.ts) which leads to silent mismatches with
downstream consumers; update the zod schema for the fields duration, leverage,
and status to validate against the shared constant vocabularies (or explicit
z.literal/z.union of allowed values) used by TaskService.forEnergy and
TaskService.recoveryCandidates and the snapshot view, so only exact values like
"Ready", "Backlog", "In Progress", "Blocked", "15 min", etc. pass validation;
reference the shared constants/enums (or export them if missing) and use them in
the zod definitions to ensure normalized/case-sensitive inputs at the schema
boundary.
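The exact-match, case-sensitive check the zod enums would enforce can be sketched without the library. The values `'Ready'`, `'Backlog'`, `'In Progress'`, `'Blocked'`, and `'15 min'` come from the review above; the remaining durations are illustrative:

```typescript
// Shared vocabularies the schema and services would both import.
const TASK_STATUSES = ['Ready', 'Backlog', 'In Progress', 'Blocked'] as const;
const DURATIONS = ['15 min', '30 min', '60 min'] as const; // '30 min'/'60 min' assumed

// Exact, case-sensitive membership test with a narrowing type predicate.
function isOneOf<T extends readonly string[]>(allowed: T, value: string): value is T[number] {
  return (allowed as readonly string[]).includes(value);
}
```

With zod, the same constants would feed `z.enum(...)` so the schema and the services can never drift apart.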
---
Nitpick comments:
In @.github/actions/setup-turbo/action.yml:
- Around line 5-10: The action runs "corepack enable", "corepack install
--global pnpm@9.15.4" and "pnpm i --frozen-lockfile" but lacks a Node.js
prerequisite check; add a preflight that ensures Node is available (either
insert an actions/setup-node step to install a supported Node version or run a
simple verification like "node --version" / "which node" and fail with a clear
message) before invoking "corepack enable" so the action won't error when Node
is not present.
In `@server/src/gtm/services/ArtifactService.ts`:
- Around line 11-44: Replace the untyped input parameter in the create method
with a proper TypeScript interface: define a CreateArtifactInput interface
matching the expected fields (name, artifactId, type, audience, funnelStage,
primaryUse, status, format, optional
url/owner/usageNotes/winSignals/lossSignals/categoryNarrative, freshnessSlaDays,
performanceScore, anchorNarrativeIncluded, canonicalVersion, kpiIds) and then
change the create(tenantId: string, input: any) signature to create(tenantId:
string, input: CreateArtifactInput) so the service benefits from compile-time
checks; keep the same field order and null-coalescing behavior used inside the
create method when mapping fields to the SQL parameter array.
In `@server/src/gtm/services/BaseService.ts`:
- Around line 6-10: The one<T> method in BaseService performs a blind type
assertion (returning result.rows[0] as T) which can hide mismatches between DB
rows and expected shapes; update callers of BaseService.one<T> (or the method
itself) to perform runtime validation: either add explicit Zod/schema parsing at
call sites before using the returned value or change one<T> to accept an
optional parser/validator (e.g., a Zod schema or validate function) and throw a
descriptive error on parse failure; also add a short doc comment on
BaseService.one describing the contract that callers are responsible for
validating the returned shape if no validator is provided.
In `@server/src/gtm/services/FounderStateService.ts`:
- Around line 4-10: The today method uses many<any> with LIMIT 1 which loses
type safety and intent; change it to use this.one<FounderState>(...) for a
single-row fetch in the today function and add error handling to return null
when no row is found (catch the DB "no data" / QueryResultError or equivalent),
ensuring you define/use a FounderState interface matching the table schema
instead of any; keep the same SQL and parameter, but replace rows[0] logic with
the typed one call plus a catch that returns null.
In `@server/src/gtm/services/KpiService.ts`:
- Around line 11-36: The three independent DB calls in deriveAll (the this.one
invocations assigned to q1, q2, q3) should be executed concurrently to reduce
latency: invoke the three this.one(...) promises in parallel using Promise.all
(or Promise.allSettled) and then assign q1/q2/q3 from the results, preserving
the existing fallback to { value: 0 } for failed queries; update deriveAll to
build the three SQL promise expressions (same queries) and await them together
before continuing.
In `@server/src/gtm/sql/20260405_0001_gtm_core.up.sql`:
- Around line 72-74: The generated column weighted_value uses the expression
COALESCE(estimated_value,0) * COALESCE(probability_to_close,0) / 100.0 which
assumes probability_to_close is a 0–100 percentage; either document that scale
or enforce it and/or adjust the formula: add a CHECK constraint on
probability_to_close (e.g., CHECK (probability_to_close >= 0 AND
probability_to_close <= 100)) to guarantee percentage values, or change the
calculation to divide by 1.0 (or multiply by probability_to_close directly) if
probability_to_close is stored as a decimal 0.0–1.0; update the schema around
the probability_to_close and weighted_value definitions accordingly.
- Around line 175-187: The table maestro.gtm_founder_state is frequently queried
by FounderStateService.today() using WHERE entry_date = CURRENT_DATE; add an
index on (tenant_id, entry_date) to speed that lookup by creating
idx_gtm_founder_state_tenant_date on maestro.gtm_founder_state (tenant_id,
entry_date), or explicitly confirm in the migration that the existing UNIQUE
(tenant_id, entry_date) constraint will provide the same btree index before
skipping the separate CREATE INDEX.
In `@server/src/middleware/requireUser.ts`:
- Around line 14-17: Replace the error hack in requireUser by introducing a
typed HttpError class and using it instead of mutating Error: add a class
HttpError extends Error with a public status: number and proper constructor
(e.g., class HttpError { constructor(public status: number, message: string)
{...} }), export it, import it into requireUser, remove the `@ts-expect-error`,
and throw new HttpError(401, 'Unauthorized') inside the requireUser function so
status codes are type-safe and self-documenting.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 06640ebd-7d3d-4093-9f7f-b4ed30bf7c44
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (20)
.github/actions/setup-pnpm/action.yml
.github/actions/setup-toolchain/action.yml
.github/actions/setup-turbo/action.yml
.github/actions/setup/action.yml
server/src/gtm/routes/gtmRoutes.ts
server/src/gtm/services/ArtifactService.ts
server/src/gtm/services/AutomationService.ts
server/src/gtm/services/BaseService.ts
server/src/gtm/services/DashboardService.ts
server/src/gtm/services/FounderStateService.ts
server/src/gtm/services/KpiService.ts
server/src/gtm/services/PipelineService.ts
server/src/gtm/services/TaskService.ts
server/src/gtm/sql/20260405_0001_gtm_core.up.sql
server/src/gtm/sql/20260405_0002_gtm_views_and_indexes.up.sql
server/src/gtm/sql/20260405_0003_gtm_automation.up.sql
server/src/gtm/sql/20260405_0004_gtm_seed.up.sql
server/src/gtm/types.ts
server/src/gtm/validators/gtmSchemas.ts
server/src/middleware/requireUser.ts
```yaml
- name: Enable corepack and install pnpm
  shell: bash
  run: |
    corepack enable
    corepack install --global pnpm@9.15.4

- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: ${{ inputs.node-version }}
    registry-url: "https://registry.npmjs.org"
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify whether workflows pass `pnpm-version` to setup-toolchain (to assess backward-compat impact).
rg -n -C4 --glob '*.yml' --glob '*.yaml' 'uses:\s*\./\.github/actions/setup-toolchain|pnpm-version:'
```

Repository: BrianCLong/summit
Length of output: 195
🏁 Script executed:
# First, locate and read the action.yml file
cat -n .github/actions/setup-toolchain/action.ymlRepository: BrianCLong/summit
Length of output: 1854
🏁 Script executed:
# Search for usages of setup-toolchain action in workflows (with corrected regex)
rg -n 'setup-toolchain' --glob '*.yml' --glob '*.yaml' -A 5 -B 2Repository: BrianCLong/summit
Length of output: 539
🏁 Script executed:
# Check if pnpm-version input is referenced anywhere in the action.yml or workflows
rg -n 'pnpm-version' --glob '*.yml' --glob '*.yaml' -B 2 -A 2Repository: BrianCLong/summit
Length of output: 43
Move setup-node before corepack installation and use inputs.pnpm-version.
The pnpm-version input is defined but never used; line 27 hardcodes pnpm@9.15.4. Running corepack before setup-node also means the corepack commands execute with whatever Node version is pre-installed on the runner, not the requested version, making the build behavior runner-dependent. Move the setup-node step before the corepack step and reference the input with a fallback default.
Suggested fix
```diff
-      - name: Enable corepack and install pnpm
-        shell: bash
-        run: |
-          corepack enable
-          corepack install --global pnpm@9.15.4
-
       - name: Setup Node.js
         uses: actions/setup-node@v4
         with:
           node-version: ${{ inputs.node-version }}
           registry-url: "https://registry.npmjs.org"
+
+      - name: Enable corepack and install pnpm
+        shell: bash
+        run: |
+          corepack enable
+          corepack install --global pnpm@${{ inputs.pnpm-version || '9.15.4' }}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: ${{ inputs.node-version }}
    registry-url: "https://registry.npmjs.org"

- name: Enable corepack and install pnpm
  shell: bash
  run: |
    corepack enable
    corepack install --global pnpm@${{ inputs.pnpm-version || '9.15.4' }}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/actions/setup-toolchain/action.yml around lines 23 - 33, Move the
"Setup Node.js" step (uses: actions/setup-node@v4 with node-version: ${{
inputs.node-version }}) above the "Enable corepack and install pnpm" step so
corepack runs with the requested Node version, and update the corepack install
command to use the pnpm input instead of a hardcoded version by replacing
pnpm@9.15.4 with pnpm@${{ inputs.pnpm-version || '9.15.4' }} (keep the step name
"Enable corepack and install pnpm" and the corepack commands corepack enable /
corepack install --global pnpm@...).
server/src/gtm/routes/gtmRoutes.ts
Outdated
```typescript
router.post('/pipeline/:id/advance', asyncHandler(async (req, res) => {
  const { nextStage } = req.body as { nextStage: string };
  res.json(await deps.pipelineService.advanceStage(req.tenantId!, req.params.id, nextStage, req.user!.id));
}));
```
Validate the advance-stage payload at the route boundary.
Line 47 just casts req.body; it does not validate it. A missing or misspelled nextStage falls into advanceStage as an ordinary transition failure instead of being rejected up front like the other POST routes.
🧪 Proposed fix
```diff
+import { z } from 'zod';
 import { Router } from 'express';
 import { asyncHandler } from '../../middleware/async-handler.js';
 import { requireUser } from '../../middleware/requireUser.js';
 import { artifactSchema, opportunitySchema, taskSchema } from '../validators/gtmSchemas.js';
@@
+const advanceStageSchema = z.object({
+  nextStage: opportunitySchema.shape.stage,
+});
+
 export function buildGtmRoutes(deps: {
@@
   router.post('/pipeline/:id/advance', asyncHandler(async (req, res) => {
-    const { nextStage } = req.body as { nextStage: string };
+    const { nextStage } = advanceStageSchema.parse(req.body);
     res.json(await deps.pipelineService.advanceStage(req.tenantId!, req.params.id, nextStage, req.user!.id));
   }));
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
router.post('/pipeline/:id/advance', asyncHandler(async (req, res) => {
  const { nextStage } = advanceStageSchema.parse(req.body);
  res.json(await deps.pipelineService.advanceStage(req.tenantId!, req.params.id, nextStage, req.user!.id));
}));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/gtm/routes/gtmRoutes.ts` around lines 46 - 49, The route handler
for router.post('/pipeline/:id/advance') is just casting req.body to extract
nextStage and lacks validation; update the route boundary (the asyncHandler
callback) to validate that req.body.nextStage exists and is a non-empty string
(or use the same validation middleware/pattern used by other POST routes) and
return a 400/validation error if missing or invalid, before calling
deps.pipelineService.advanceStage with req.tenantId, req.params.id, nextStage,
req.user!.id.
server/src/gtm/routes/gtmRoutes.ts
Outdated
```typescript
router.get('/tasks', asyncHandler(async (req, res) => {
  const energy = typeof req.query.energy === 'string' ? req.query.energy : undefined;
  res.json(energy
    ? await deps.taskService.forEnergy(req.tenantId!, energy)
    : await deps.taskService.list(req.tenantId!));
}));
```
Reject unknown energy filters instead of returning a fake empty state.
Line 52 accepts any string, so ?energy=foo reaches forEnergy and comes back as an empty list. That is indistinguishable from “no matching tasks,” which makes client bugs harder to spot.
🧪 Proposed fix
```diff
+import { z } from 'zod';
 import { Router } from 'express';
@@
+const taskQuerySchema = z.object({
+  energy: taskSchema.shape.energyTier.optional(),
+});
+
 export function buildGtmRoutes(deps: {
@@
   router.get('/tasks', asyncHandler(async (req, res) => {
-    const energy = typeof req.query.energy === 'string' ? req.query.energy : undefined;
+    const { energy } = taskQuerySchema.parse(req.query);
     res.json(energy
       ? await deps.taskService.forEnergy(req.tenantId!, energy)
       : await deps.taskService.list(req.tenantId!));
   }));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/gtm/routes/gtmRoutes.ts` around lines 51 - 55, The route handler
registered with router.get('/tasks', asyncHandler(...)) currently treats any
string in req.query.energy as valid and calls deps.taskService.forEnergy,
causing unknown filters like ?energy=foo to quietly return an empty list; change
the handler to validate req.query.energy against the allowed energy values
(enumeration or known set) before calling deps.taskService.forEnergy, and if the
value is not in that allowed set respond with a 400 Bad Request (with a clear
error message) instead of delegating to forEnergy—keep using req.tenantId!,
deps.taskService.forEnergy and deps.taskService.list but gate the call with the
whitelist check.
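A minimal sketch of that whitelist gate, kept pure so it is easy to test; the tier names are assumptions for illustration, since the real values live in the task schema:

```typescript
// Hypothetical allowed energy tiers; the actual vocabulary comes from
// taskSchema in this PR, not from this list.
const ENERGY_TIERS = ['low', 'medium', 'high'] as const;
type EnergyTier = (typeof ENERGY_TIERS)[number];

// Returns the validated tier, undefined when the filter is absent, or an
// error result the route can turn into a 400 response.
function parseEnergyFilter(
  raw: unknown,
): { ok: true; energy?: EnergyTier } | { ok: false; message: string } {
  if (raw === undefined) return { ok: true };
  if (typeof raw === 'string' && (ENERGY_TIERS as readonly string[]).includes(raw)) {
    return { ok: true, energy: raw as EnergyTier };
  }
  return { ok: false, message: `Unknown energy filter: ${String(raw)}` };
}
```

With this shape, `?energy=foo` produces an explicit 400 instead of a silently empty list.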
```typescript
this.updateKpi(tenantId, 'meetings-booked-week', q1.value),
this.updateKpi(tenantId, 'follow-up-latency-under-24h', q2.value),
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Description: Check the seed data and schema to understand intended KPI semantics
# Find KPI definitions in seed/schema
rg -n 'follow-up-latency' --glob '*.sql'
rg -n 'follow_up' --glob '*.sql' -C2
```

Repository: BrianCLong/summit
Length of output: 714

🏁 Script executed:

```bash
cat -n server/src/gtm/services/KpiService.ts | head -60
```

Repository: BrianCLong/summit
Length of output: 2390
KPI identifier follow-up-latency-under-24h doesn't match query logic.
The query on lines 19-28 counts meetings with follow_up_due IS NOT NULL, next_step_confirmed = true, and positive outcomes, but doesn't actually verify whether follow-ups occurred within 24 hours of the meeting. The KPI name implies a latency measurement that isn't implemented. The query lacks a time-based comparison (e.g., follow_up_due - date_time <= interval '24 hours') or a field tracking when the follow-up actually occurred.
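To make the "under 24h" semantics concrete, a latency check could look like the following pure helper. The field names (`dateTime`, `followUpCompletedAt`) are assumptions about the meeting record for illustration; this PR does not define a completion timestamp.

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// True only when a follow-up actually happened within 24 hours of the
// meeting, which is what the KPI name promises.
function followUpUnder24h(meeting: {
  dateTime: Date;
  followUpCompletedAt: Date | null;
}): boolean {
  if (!meeting.followUpCompletedAt) return false;
  return (
    meeting.followUpCompletedAt.getTime() - meeting.dateTime.getTime() <= DAY_MS
  );
}
```

The SQL equivalent would gate the count on the same time difference rather than on `follow_up_due IS NOT NULL`.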
```typescript
async advanceStage(tenantId: string, opportunityId: string, nextStage: string, changedBy: string) {
  const current = await this.one<any>(
    `SELECT * FROM maestro.gtm_pipeline WHERE tenant_id = $1 AND id = $2`,
    [tenantId, opportunityId],
  );

  if (!ALLOWED_STAGE_TRANSITIONS[current.stage]?.includes(nextStage)) {
    throw new Error(`Invalid stage transition: ${current.stage} -> ${nextStage}`);
  }

  const updated = await this.one<any>(
    `UPDATE maestro.gtm_pipeline
       SET stage = $3, updated_at = now()
     WHERE tenant_id = $1 AND id = $2
     RETURNING *`,
    [tenantId, opportunityId, nextStage],
  );
```
Make stage advancement atomic.
Line 69 reads the current stage and Line 78-Line 84 update it in a separate statement. Two concurrent requests can both validate against the same stale stage and then overwrite each other, so a transition that is invalid from the real current state can still land. Gate the UPDATE on the previously read stage or lock the row inside a transaction.
🔒 Proposed fix
```diff
 async advanceStage(tenantId: string, opportunityId: string, nextStage: string, changedBy: string) {
   const current = await this.one<any>(
     `SELECT * FROM maestro.gtm_pipeline WHERE tenant_id = $1 AND id = $2`,
     [tenantId, opportunityId],
   );
   if (!ALLOWED_STAGE_TRANSITIONS[current.stage]?.includes(nextStage)) {
     throw new Error(`Invalid stage transition: ${current.stage} -> ${nextStage}`);
   }
-  const updated = await this.one<any>(
-    `UPDATE maestro.gtm_pipeline
-       SET stage = $3, updated_at = now()
-     WHERE tenant_id = $1 AND id = $2
-     RETURNING *`,
-    [tenantId, opportunityId, nextStage],
-  );
-
-  return { updated, changedBy };
+  const result = await this.db.query(
+    `UPDATE maestro.gtm_pipeline
+       SET stage = $4, updated_at = now()
+     WHERE tenant_id = $1
+       AND id = $2
+       AND stage = $3
+     RETURNING *`,
+    [tenantId, opportunityId, current.stage, nextStage],
+  );
+  if (!result.rows[0]) {
+    throw new Error('Pipeline stage changed concurrently; retry the transition.');
+  }
+
+  return { updated: result.rows[0], changedBy };
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/gtm/services/PipelineService.ts` around lines 68 - 84, The
advanceStage function currently reads the current stage then updates it in
separate statements causing race conditions; make the operation atomic by
wrapping it in a transaction and either (a) SELECT ... FOR UPDATE the row (use
the same query that reads into `current` inside a transaction) then validate
`ALLOWED_STAGE_TRANSITIONS[current.stage]` and perform the UPDATE, or (b)
perform a conditional UPDATE that includes the previous stage in the WHERE
clause (e.g. WHERE tenant_id = $1 AND id = $2 AND stage = $4) and check affected
rows to detect a concurrent change; update the code paths around `advanceStage`,
`current`, `updated`, and the calls to `this.one` to use the chosen
transactional/conditional approach and throw the invalid-transition error when
the condition fails.
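The compare-and-swap idea behind option (b) can be sketched without a database: the caller passes the stage it validated against, and the write only applies if the row still holds that stage. The transition map and stage names below are illustrative assumptions, and the in-memory object stands in for the `UPDATE ... AND stage = $expected` gate.

```typescript
// Hypothetical transition map; the real one lives in PipelineService.
const ALLOWED_STAGE_TRANSITIONS: Record<string, string[]> = {
  Prospect: ['Qualified'],
  Qualified: ['Proposal'],
  Proposal: ['Won', 'Lost'],
};

// In-memory stand-in for the conditional UPDATE: apply the write only when
// the row's stage still matches what the caller read.
function advanceStage(
  row: { stage: string },
  expectedStage: string,
  nextStage: string,
): { ok: boolean; reason?: string } {
  if (!ALLOWED_STAGE_TRANSITIONS[expectedStage]?.includes(nextStage)) {
    return { ok: false, reason: `Invalid transition: ${expectedStage} -> ${nextStage}` };
  }
  if (row.stage !== expectedStage) {
    // Another writer advanced the row between our read and this write.
    return { ok: false, reason: 'Stage changed concurrently; retry.' };
  }
  row.stage = nextStage;
  return { ok: true };
}
```

In SQL the second check is the extra `AND stage = $3` predicate plus a zero-rows-returned test, which closes the read-validate-write race.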
```sql
CREATE TABLE IF NOT EXISTS maestro.gtm_kpis (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  tenant_id UUID NOT NULL,
  kpi_name TEXT NOT NULL,
  kpi_id TEXT NOT NULL,
  category TEXT NOT NULL,
  description TEXT,
  unit TEXT NOT NULL,
  cadence TEXT NOT NULL,
  target_value NUMERIC,
  current_value NUMERIC,
  previous_value NUMERIC,
  delta NUMERIC,
  status TEXT NOT NULL,
  owner TEXT,
  last_updated TIMESTAMPTZ,
  source_method TEXT,
  notes TEXT,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  UNIQUE (tenant_id, kpi_id)
);
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Description: Check if tenant_id indexes are created in subsequent migrations
rg -n 'CREATE.*INDEX.*tenant_id' --glob '*.sql'
```

Repository: BrianCLong/summit
Length of output: 50373

🏁 Script executed:

```bash
cat server/src/gtm/sql/20260405_0001_gtm_core.up.sql
```

Repository: BrianCLong/summit
Length of output: 6425

🏁 Script executed:

```bash
cat server/src/gtm/sql/20260405_0002_gtm_views_and_indexes.up.sql
```

Repository: BrianCLong/summit
Length of output: 3529
Add tenant_id indexes to gtm_kpis and other tables in the follow-on migration.
The tables gtm_kpis, gtm_script_outcomes, gtm_founder_state, gtm_artifact_usage, and gtm_recovery_log all have tenant_id columns but lack indexes in 20260405_0002_gtm_views_and_indexes.up.sql. Multi-tenant queries will scan these tables fully without indexes. Add indexes on at least (tenant_id) for each, or composite indexes like (tenant_id, key_column) where filtering by additional columns is common.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/gtm/sql/20260405_0001_gtm_core.up.sql` around lines 8 - 29, Add
B-tree indexes for tenant_id on the listed GTM tables in the follow-on migration
so multi-tenant queries are not forced to full scans: add CREATE INDEX
statements for maestro.gtm_kpis (tenant_id), maestro.gtm_script_outcomes
(tenant_id), maestro.gtm_founder_state (tenant_id), maestro.gtm_artifact_usage
(tenant_id), and maestro.gtm_recovery_log (tenant_id) in
20260405_0002_gtm_views_and_indexes.up.sql; where queries commonly filter by
another column (e.g., kpi_id or script_id), prefer composite indexes like
(tenant_id, kpi_id) or (tenant_id, script_id) accordingly, and ensure matching
DROP INDEX statements are added to the corresponding .down migration.
```sql
count(*) FILTER (WHERE section = 'task_blocked') AS tasks_blocked,
count(*) FILTER (WHERE section = 'meeting_next_7d') AS meetings_next_7d,
count(*) FILTER (WHERE section = 'opp_at_risk') AS opps_at_risk,
sum(weighted_value) FILTER (WHERE section = 'weighted_pipeline') AS weighted_pipeline,
```
Coalesce weighted_pipeline to keep the snapshot shape stable.
Line 28 returns NULL when a tenant has no pipeline rows, even if the rest of the snapshot exists. Returning 0 here keeps the dashboard metric consistently numeric.
🔢 Proposed fix
```diff
- sum(weighted_value) FILTER (WHERE section = 'weighted_pipeline') AS weighted_pipeline,
+ COALESCE(sum(weighted_value) FILTER (WHERE section = 'weighted_pipeline'), 0) AS weighted_pipeline,
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sql
COALESCE(sum(weighted_value) FILTER (WHERE section = 'weighted_pipeline'), 0) AS weighted_pipeline,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/gtm/sql/20260405_0002_gtm_views_and_indexes.up.sql` at line 28,
The summed weighted_pipeline expression can return NULL for tenants with no
pipeline rows; wrap the existing expression sum(weighted_value) FILTER (WHERE
section = 'weighted_pipeline') in COALESCE(..., 0) so the alias
weighted_pipeline always yields 0 instead of NULL, preserving the snapshot shape
and numeric dashboard metric.
```typescript
duration: z.string().min(1),
leverage: z.string().min(1),
status: z.string().min(1),
```
Constrain task fields that downstream queries already treat as enums.
Line 60-Line 62 accept arbitrary strings, but the rest of this PR only recognizes a fixed vocabulary here. TaskService.forEnergy, TaskService.recoveryCandidates, and the snapshot view all key off exact values like Ready, Backlog, In Progress, Blocked, and 15 min. A typo or casing drift will validate, insert, and then silently disappear from queues, recovery selection, or dashboard counts. Normalize these at the schema boundary or validate them against shared constants.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/gtm/validators/gtmSchemas.ts` around lines 60 - 62, The schema
currently allows arbitrary strings for duration, leverage, and status (in
gtmSchemas.ts) which leads to silent mismatches with downstream consumers;
update the zod schema for the fields duration, leverage, and status to validate
against the shared constant vocabularies (or explicit z.literal/z.union of
allowed values) used by TaskService.forEnergy and TaskService.recoveryCandidates
and the snapshot view, so only exact values like "Ready", "Backlog", "In
Progress", "Blocked", "15 min", etc. pass validation; reference the shared
constants/enums (or export them if missing) and use them in the zod definitions
to ensure normalized/case-sensitive inputs at the schema boundary.
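A minimal sketch of sharing one vocabulary between the validator and the services, using plain constants with a type guard (in zod the same constant would feed `z.enum(...)`). The exact status and duration lists below are assumptions drawn from the review comment, not the PR's real constants.

```typescript
// Single source of truth for the vocabulary the queries key off.
const TASK_STATUSES = ['Ready', 'Backlog', 'In Progress', 'Blocked'] as const;
const TASK_DURATIONS = ['15 min', '30 min', '60 min'] as const;

type TaskStatus = (typeof TASK_STATUSES)[number];
type TaskDuration = (typeof TASK_DURATIONS)[number];

// Guards the schema boundary: only exact, case-sensitive values pass, so a
// typo can never validate, insert, and then vanish from queues or dashboards.
function isTaskStatus(value: string): value is TaskStatus {
  return (TASK_STATUSES as readonly string[]).includes(value);
}

function isTaskDuration(value: string): value is TaskDuration {
  return (TASK_DURATIONS as readonly string[]).includes(value);
}
```

Building both the zod schema and the service-layer filters from the same constants makes drift between validation and queries structurally impossible.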
Force-pushed d091722 to 8beacd7
Force-pushed 8beacd7 to 2958979
This implements the full CompanyOS execution spine behind `/api/gtm` with: `BaseService`…

PR created automatically by Jules for task 4472544655146173423 started by @BrianCLong