ci: switch providers-anthropic-to-codex-bridge from OPENAI_API_KEY to CODEX_OAUTH_TOKEN#626
lsm
left a comment
🤖 Review by claude-opus-4-6 (Anthropic)
Model: claude-opus-4-6 | Client: NeoKai | Provider: Anthropic
Recommendation: APPROVE
The implementation correctly achieves the goal of switching CI from a paid OpenAI API key to an OAuth token. The changes are well-structured, consistent, and thoroughly tested.
What was done
- CI workflow (`.github/workflows/main.yml`): `OPENAI_API_KEY` replaced with `CODEX_OAUTH_TOKEN` for the `providers-anthropic-to-codex-bridge` test shard. Clean one-line change.
- Provider (`packages/daemon/src/lib/providers/anthropic-to-codex-bridge-provider.ts`): `CODEX_OAUTH_TOKEN` added as the highest-priority credential source in both `getBridgeAuth()` and `buildSdkConfig()`. When the token is a valid JWT with a `chatgpt_account_id` claim, it creates `chatgpt`-type auth; otherwise it falls back to `api_key` auth. Backward compatibility with `OPENAI_API_KEY` and `CODEX_API_KEY` is preserved.
- Unit tests: new tests cover priority ordering, JWT vs. non-JWT fallback, `getAuthStatus()`, `buildSdkConfig()`, and `getModels()` with the new env var.
- Online test: error message and doc comments updated to mention `CODEX_OAUTH_TOKEN`. Hard-fail behavior preserved (no skip guards added).
Issues
P2: Duplicated CODEX_OAUTH_TOKEN-to-AppServerAuth logic
The JWT-parsing logic to convert CODEX_OAUTH_TOKEN into an AppServerAuth object is duplicated verbatim between getBridgeAuth() (lines 271-283) and buildSdkConfig() (lines 474-484). This creates a maintenance risk -- if the logic needs to change (e.g., extracting additional JWT claims), it must be updated in two places. A private helper like resolveEnvAuth(): AppServerAuth | undefined would eliminate this duplication.
However, the duplication exists because buildSdkConfig() is synchronous while getBridgeAuth() is async, and the comment at line 472 explains this design constraint. The existing pattern for OPENAI_API_KEY/CODEX_API_KEY already had the same duplication before this PR. So while worth noting, this is not a regression introduced by this PR.
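The suggested helper could look like the following sketch. The `AppServerAuth` shape and the helper name `resolveEnvAuth` follow the review's description and are assumptions, not the actual provider source:

```typescript
// Sketch only: AppServerAuth shape and claim handling are inferred from
// the review text, not copied from the provider source.
type AppServerAuth =
  | { type: 'api_key'; apiKey: string }
  | { type: 'chatgpt'; accessToken: string; chatgptAccountId: string };

function parseJwtPayload(token: string): Record<string, unknown> | undefined {
  const parts = token.split('.');
  if (parts.length !== 3) return undefined;
  try {
    // Node's 'base64' decoder also accepts the URL-safe JWT alphabet.
    return JSON.parse(Buffer.from(parts[1], 'base64').toString('utf8'));
  } catch {
    return undefined;
  }
}

// One synchronous helper that both getBridgeAuth() and buildSdkConfig()
// could call, removing the duplicated block.
function resolveEnvAuth(token: string): AppServerAuth {
  const payload = parseJwtPayload(token);
  const claims = payload?.['https://api.openai.com/auth'] as
    | { chatgpt_account_id?: string }
    | undefined;
  if (claims?.chatgpt_account_id) {
    return {
      type: 'chatgpt',
      accessToken: token,
      chatgptAccountId: claims.chatgpt_account_id,
    };
  }
  // Not a ChatGPT JWT: treat the raw token as an API key.
  return { type: 'api_key', apiKey: token };
}
```

Because the helper is synchronous, it sidesteps the sync/async split between the two call sites.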
P3: Stale comment in online test
packages/daemon/tests/online/providers/anthropic-to-codex-bridge-provider.test.ts line 216: the inline comment still says "(env vars OPENAI_API_KEY/CODEX_API_KEY, auth.json OAuth)" without mentioning CODEX_OAUTH_TOKEN, even though the error message on line 221 was updated. Minor inconsistency.
Verdict
The implementation is correct, complete, and well-tested. No bugs, no security issues, no missing edge cases. The duplicated logic is a pre-existing pattern, not a regression. Approving with minor nits noted.
lsm
left a comment
🤖 Review by claude-sonnet-4-6 (Anthropic)
Model: claude-sonnet-4-6 | Client: NeoKai | Provider: Anthropic
Recommendation: REQUEST_CHANGES (2 P1, 2 P2, 2 P3 issues)
Summary
The implementation correctly achieves its goal — CODEX_OAUTH_TOKEN is wired up as the highest-priority credential source, the CI workflow switches from OPENAI_API_KEY to CODEX_OAUTH_TOKEN, and the new unit tests cover the happy path. However, there are two substantive concerns: a duplicated code block that will diverge over time, and a silent mis-categorization of non-ChatGPT JWTs.
P1: No caching of the CODEX_OAUTH_TOKEN auth path in getBridgeAuth()
File: packages/daemon/src/lib/providers/anthropic-to-codex-bridge-provider.ts lines 271-284
When CODEX_OAUTH_TOKEN is set, getBridgeAuth() returns early without ever setting cachedBridgeAuth. By contrast, file-based credentials set both cachedBridgeAuth and cachedApiKey before returning.
The existing OPENAI_API_KEY path had the same omission (pre-existing), but this PR adds a second env var that exhibits it. The practical effect is that repeated calls to getBridgeAuth() (e.g., from isAvailable() followed by getAuthStatus()) re-parse the JWT every time rather than hitting the cache. This is a performance nit on its own, but the deeper concern is inconsistency: buildSdkConfig() checks this.cachedBridgeAuth at line 493 and only falls back to the env var re-check because the same block explicitly does it. If a future refactor removes the buildSdkConfig duplication without fixing caching, the env var path will silently stop working.
P1: extractAccountId() returns sub for non-OpenAI JWTs, causing chatgpt auth to be used for unintended tokens
File: packages/daemon/src/lib/providers/anthropic-to-codex-bridge-provider.ts line 896
```typescript
return auth?.chatgpt_account_id ?? (payload.sub as string | undefined);
```
Any JWT that lacks the https://api.openai.com/auth claim but has a sub field (which all conformant JWTs do) will have its sub returned as the chatgptAccountId. This means if someone mistakenly puts a non-ChatGPT token — e.g., a GitHub OAuth token or any other JWT — into CODEX_OAUTH_TOKEN, it will be classified as type: chatgpt auth and sent to the Codex app-server as an OpenAI OAuth access token rather than falling back to api_key. The failure mode will be a confusing 401 or auth error from the Codex bridge rather than a clear credential-type mismatch.
This is pre-existing behavior for the toBridgeAuth() path (file-based credentials), but the PR now makes it reachable through the env var path too. At a minimum, the sub fallback should be removed from the CODEX_OAUTH_TOKEN-specific path in getBridgeAuth(), or a test covering the sub-only JWT case should be added with explicit assertion about which auth type is used.
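To make the failure mode concrete, here is a hedged sketch contrasting the flagged fallback with the stricter variant the review proposes. The function names are illustrative, not the provider's:

```typescript
type JwtPayload = Record<string, unknown>;

// Mirrors the flagged line: falls back to `sub` for any conformant JWT,
// so a GitHub token or other generic JWT is treated as ChatGPT auth.
function extractAccountIdLoose(payload: JwtPayload): string | undefined {
  const auth = payload['https://api.openai.com/auth'] as
    | { chatgpt_account_id?: string }
    | undefined;
  return auth?.chatgpt_account_id ?? (payload.sub as string | undefined);
}

// Stricter variant: only the OpenAI-specific claim counts, so a sub-only
// JWT falls through to api_key auth instead of being sent as chatgpt auth.
function extractAccountIdStrict(payload: JwtPayload): string | undefined {
  const auth = payload['https://api.openai.com/auth'] as
    | { chatgpt_account_id?: string }
    | undefined;
  return auth?.chatgpt_account_id;
}
```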
P2: JWT parsing / AppServerAuth construction is duplicated between getBridgeAuth() and buildSdkConfig()
Files:
- `packages/daemon/src/lib/providers/anthropic-to-codex-bridge-provider.ts` lines 271-284 (`getBridgeAuth`)
- `packages/daemon/src/lib/providers/anthropic-to-codex-bridge-provider.ts` lines 474-484 (`buildSdkConfig`)
The two blocks are identical — read CODEX_OAUTH_TOKEN, call extractAccountId, conditionally return chatgpt or api_key auth. This duplication exists because buildSdkConfig() is synchronous and cannot await getBridgeAuth(). The comment at line 471 acknowledges this.
While the current code is correct, this is a maintenance hazard. If the CODEX_OAUTH_TOKEN auth construction logic changes (e.g., a new claim is added to extract), both sites must be updated in sync. A private synchronous helper like buildAuthFromEnvToken(token: string): AppServerAuth that both sites call would eliminate the duplication.
P2: buildSdkConfig() unit test verifies URL shape but not auth type
File: packages/daemon/tests/unit/providers/anthropic-to-codex-bridge-provider.test.ts lines 339-348
The test passes a valid JWT with chatgpt_account_id but only asserts that the bridge server starts and returns a URL. It does not verify that the bridge server was started with type: chatgpt auth (vs. type: api_key). The whole point of the PR is to use the cheaper OAuth path, so a test that does not differentiate between the two auth types does not protect against a regression where the token is silently treated as an API key.
If createBridgeServer can be spied on, the test should assert the auth type passed to it.
P3: Non-JWT token in CODEX_OAUTH_TOKEN is silently accepted as api_key without a warning log
File: packages/daemon/src/lib/providers/anthropic-to-codex-bridge-provider.ts lines 281-283
The fallback return { type: 'api_key', apiKey: token } is intentional and tested. However, no logger.warn is emitted when this fallback triggers. If a CI operator accidentally puts a plain bearer token in CODEX_OAUTH_TOKEN expecting OAuth behavior, it will silently be sent as an API key to the Codex bridge. A warn log here would make the mis-configuration visible in CI logs.
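A minimal sketch of the requested warning, assuming an injectable warn callback (later iterations of this PR reportedly adopted a similar `warnFn` pattern; the names and message text here are illustrative):

```typescript
type EnvAuth =
  | { type: 'api_key'; apiKey: string }
  | { type: 'chatgpt'; accessToken: string };

function authFromOAuthToken(token: string, warnFn: (msg: string) => void): EnvAuth {
  const parts = token.split('.');
  if (parts.length === 3) {
    try {
      const payload = JSON.parse(Buffer.from(parts[1], 'base64').toString('utf8'));
      const claims = payload?.['https://api.openai.com/auth'];
      if (claims?.chatgpt_account_id) {
        return { type: 'chatgpt', accessToken: token };
      }
    } catch {
      // Malformed JWT: fall through to the api_key fallback below.
    }
  }
  // Make the mis-configuration visible in CI logs instead of failing silently.
  warnFn('CODEX_OAUTH_TOKEN is not a ChatGPT JWT; treating it as an API key');
  return { type: 'api_key', apiKey: token };
}
```

The injected callback keeps the function pure and lets a unit test assert on the emitted warnings directly.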
P3: Unit test description for Priority 0 is narrower than what the test exercises
File: packages/daemon/tests/unit/providers/anthropic-to-codex-bridge-provider.test.ts line 201
The test name says "takes highest priority (over OPENAI_API_KEY)" but the test setup also writes NeoKai file-based credentials (neokai-token). The test is sound — it correctly returns fakeToken — but the description should say "takes highest priority (over OPENAI_API_KEY and file-based auth)" to match what is actually being exercised.
What is correct
- The CI workflow change is a clean single-line swap with the correct conditional expression matching the existing pattern for other module-specific secrets.
- The online test error message is updated to reflect the new preferred credential.
- `getAuthStatus()` test correctly verifies that env-var-based `CODEX_OAUTH_TOKEN` does not affect the UI auth status (env vars are daemon/test only).
- `getModels()` test correctly verifies that the model list is populated when `CODEX_OAUTH_TOKEN` is present (uses `isAvailable()`, not `getAuthStatus()`).
- `makeFakeJwt()` helper is a clean addition shared across multiple test cases.
lsm
left a comment
🤖 Review by claude-sonnet-4-6 (Anthropic)
Model: claude-sonnet-4-6 | Client: NeoKai | Provider: Anthropic
Recommendation: REQUEST_CHANGES — one concrete bug (double warning emission) and a missing test for it.
Summary
The overall direction is solid. The helper extraction is clean, the sub-only fallback removal is correct, and the test coverage for buildAuthFromEnvOAuthToken() is thorough. However, there is one clear runtime bug introduced by the refactor: buildSdkConfig() re-invokes buildAuthFromEnvOAuthToken() independently instead of using the freshly populated cachedBridgeAuth, causing the warning to fire twice on every new workspace when the token is not a valid ChatGPT JWT.
P1 — Double warning emission in buildSdkConfig()
File: packages/daemon/src/lib/providers/anthropic-to-codex-bridge-provider.ts, lines 524–526
buildSdkConfig() is synchronous and cannot call the async getBridgeAuth(), so it independently re-invokes buildAuthFromEnvOAuthToken():
```typescript
if (this.env.CODEX_OAUTH_TOKEN) {
  envAuth = buildAuthFromEnvOAuthToken(this.env.CODEX_OAUTH_TOKEN, (msg) => logger.warn(msg));
}
```

The normal call sequence is: `isAvailable()` → `getBridgeAuth()` (parses token, warns, caches into `cachedBridgeAuth`) → then `buildSdkConfig()` for each new workspace (re-parses, warns again). For any non-ChatGPT JWT — the very token type that triggers the warning — the operator sees `CODEX_OAUTH_TOKEN does not contain a ChatGPT account ID; treating as API key` once per workspace created. For a CI job that exercises multiple workspace paths, this can produce many redundant warnings, making the diagnostic harder to spot.
The fix is simple: in buildSdkConfig(), prefer the already-populated cachedBridgeAuth when CODEX_OAUTH_TOKEN is set:
```typescript
let envAuth: AppServerAuth | undefined;
if (this.env.CODEX_OAUTH_TOKEN) {
  // Prefer the already-parsed result from getBridgeAuth() to avoid re-parsing
  // (and re-emitting warnings) on every workspace.
  envAuth = this.cachedBridgeAuth ?? buildAuthFromEnvOAuthToken(this.env.CODEX_OAUTH_TOKEN, (msg) => logger.warn(msg));
} else if (this.env.OPENAI_API_KEY) { ...
```

Note that `cachedBridgeAuth` is `AppServerAuth | null | undefined`. A `null` value means "resolved but unavailable", so the `?? buildAuth...` fallback correctly handles the case where `getBridgeAuth()` hasn't been called yet.
P1 — Missing test for the double-warning bug path
There is no test asserting that when getBridgeAuth()/getApiKey() has already been called (warming the cache), a subsequent call to buildSdkConfig() does NOT re-emit the warning for the non-ChatGPT-JWT path. The existing test buildSdkConfig() uses CODEX_OAUTH_TOKEN env var as OAuth auth only exercises the happy-path (ChatGPT token, no warning). A test like the following would have caught this:
```typescript
it('buildSdkConfig() does not double-emit warning after getBridgeAuth warms the cache', async () => {
  const nonChatgptToken = makeFakeJwt({ sub: 'some-user' });
  const warnings: string[] = [];
  // override logger.warn temporarily
  ...
  const p = makeProvider({ CODEX_OAUTH_TOKEN: nonChatgptToken });
  await p.getApiKey(); // triggers getBridgeAuth, caches, emits warning #1
  p.buildSdkConfig('gpt-5.3-codex', { workspacePath: '/tmp/double-warn-ws' });
  expect(warnings.length).toBe(1); // not 2
  p.stopAllBridgeServers();
});
```

P2 — Caching asymmetry: OPENAI_API_KEY / CODEX_API_KEY still bypass the cache
Lines 336–341 in getBridgeAuth() return immediately without setting cachedBridgeAuth:
```typescript
if (this.env.OPENAI_API_KEY) {
  return { type: 'api_key', apiKey: this.env.OPENAI_API_KEY };
}
```

The PR comment at line 330 explains that caching is added for `CODEX_OAUTH_TOKEN` specifically "so repeated calls don't re-parse." This asymmetry is intentional (env-var API keys are trivially cheap to re-read), but the same asymmetry means `buildSdkConfig()` at line 535 (`auth = envAuth ?? this.cachedBridgeAuth ?? fileAuth`) will always resolve via `envAuth` for both key types, making the `this.cachedBridgeAuth` branch unreachable for any env-var credential path. This is not a bug but worth a brief comment to avoid future confusion.
P3 — Nit: (auth as { accessToken: string }).accessToken cast in getBridgeAuth()
Line 333:
```typescript
this.cachedApiKey =
  auth.type === 'api_key' ? auth.apiKey : (auth as { accessToken: string }).accessToken;
```

TypeScript already narrows `auth` to `{ type: 'chatgpt'; accessToken: string; ... }` after the `auth.type === 'api_key'` check, so the cast is unnecessary. Consider `(auth as Extract<AppServerAuth, { type: 'chatgpt' }>).accessToken` or simply restructure to use a type guard.
What is correct
- CI YAML change: simple, correct, hard-fail behaviour is preserved.
- `buildAuthFromEnvOAuthToken()` logic: `chatgpt_account_id`-only detection is the right guard.
- `parseJwtPayload()` implementation: proper 3-part check, catches malformed tokens.
- `getBridgeAuth()` caching for `CODEX_OAUTH_TOKEN`: correct and the comment explains why.
- `buildAuthFromEnvOAuthToken()` unit tests: comprehensive and well-structured.
- Online test comment updates: accurate.
lsm
left a comment
🤖 Review by claude-opus-4-6 (Anthropic)
Model: claude-opus-4-6 | Client: NeoKai | Provider: Anthropic
Recommendation: REQUEST_CHANGES (self-PR, posting as COMMENT)
Overall this is a clean, well-structured PR. The shared buildAuthFromEnvOAuthToken() helper eliminates duplication, the sub claim fallback removal is correct and well-documented, caching in getBridgeAuth() is sensible, and test coverage is thorough with good edge cases. The CI workflow change is straightforward.
P1: Missing CODEX_OAUTH_TOKEN in unit test setup
File: packages/daemon/tests/unit/setup.ts (line ~48)
The unit test setup clears all provider API keys to prevent real API calls:
```typescript
process.env.OPENAI_API_KEY = '';
process.env.CODEX_API_KEY = '';
```
But CODEX_OAUTH_TOKEN is not cleared here. While the bridge provider's own unit tests are safe (they use isolated env objects via makeProvider()), any future test or code path that reads process.env.CODEX_OAUTH_TOKEN directly could accidentally pick up a real token from a developer's environment and make real API calls -- exactly what this setup file is designed to prevent.
Add:
```typescript
process.env.CODEX_OAUTH_TOKEN = '';
```

alongside the other env var clears.
P3: Duplicate JWT parsing logic
File: packages/daemon/src/lib/providers/anthropic-to-codex-bridge-provider.ts
The new module-level parseJwtPayload() (line 128) and the existing instance method parseTokenPayload() (line 921) are functionally identical. Not introduced by this PR (the instance method is pre-existing), but worth noting for a follow-up cleanup -- the instance method callers (extractAccountId, extractPlanType) could be refactored to use the shared module-level function.
Everything else looks good
- `buildAuthFromEnvOAuthToken()` correctly only produces `chatgpt`-type auth when the OpenAI-specific `chatgpt_account_id` claim is present, with a clear warning on fallback
- The priority order (`CODEX_OAUTH_TOKEN` > `OPENAI_API_KEY` > `CODEX_API_KEY` > file-based) is consistent between `getBridgeAuth()` and `buildSdkConfig()`
- Caching in `getBridgeAuth()` correctly handles both `api_key` and `chatgpt` auth types via the type guard
- Tests cover: chatgpt auth detection, sub-only JWT rejection, non-JWT fallback, warning emission, priority ordering, and model listing
- CI workflow correctly swaps `OPENAI_API_KEY` for `CODEX_OAUTH_TOKEN` with matching secret name
- Comments and doc blocks are accurate and up-to-date
lsm
left a comment
🤖 Review by claude-sonnet-4-6 (Anthropic)
Model: claude-sonnet-4-6 | Client: NeoKai | Provider: Anthropic
Recommendation: REQUEST_CHANGES — one P1 bug remains in the cache-deduplication logic.
P1 — Cache does not actually suppress re-parsing or re-warning in getBridgeAuth()
File: packages/daemon/src/lib/providers/anthropic-to-codex-bridge-provider.ts, lines 326–336
The CODEX_OAUTH_TOKEN branch in getBridgeAuth() runs before the cachedBridgeAuth !== undefined guard at line 345. This means buildAuthFromEnvOAuthToken() is called (and the warning is emitted) on every single invocation of getBridgeAuth(), regardless of whether the cache was already populated by a prior call.
The comment at line 330 ("Cache so repeated calls don't re-parse") is therefore inaccurate: the value written to this.cachedBridgeAuth at line 331 is never read back in the CODEX_OAUTH_TOKEN code path — the guard that would use it (line 345) is only reachable when CODEX_OAUTH_TOKEN is falsy.
Concrete path that re-emits the warning for a non-ChatGPT JWT:
1. `isAvailable()` → `getBridgeAuth()` → parses JWT, emits warning, sets `cachedBridgeAuth`
2. `getApiKey()` → `getBridgeAuth()` → again parses JWT, again emits warning (cache ignored)
3. `getModels()` → `isAvailable()` → `getBridgeAuth()` → same again
The test added in this iteration (buildSdkConfig() does not re-emit warning after getBridgeAuth() warms the cache) only covers the getApiKey() → buildSdkConfig() sequence, and it passes because buildSdkConfig() correctly checks cachedBridgeAuth. However the test does not cover calling getBridgeAuth() twice through different methods, so it does not catch this bug.
The fix is straightforward: add a cache-hit shortcut inside the CODEX_OAUTH_TOKEN branch before calling buildAuthFromEnvOAuthToken:
```typescript
if (this.env.CODEX_OAUTH_TOKEN) {
  // Return cached result to avoid re-parsing the JWT and re-emitting warning.
  if (this.cachedBridgeAuth !== undefined) {
    return this.cachedBridgeAuth ?? undefined;
  }
  const auth = buildAuthFromEnvOAuthToken(this.env.CODEX_OAUTH_TOKEN, (msg) =>
    logger.warn(msg)
  );
  this.cachedBridgeAuth = auth;
  this.cachedApiKey = auth.type === 'api_key'
    ? auth.apiKey
    : (auth as Extract<AppServerAuth, { type: 'chatgpt' }>).accessToken;
  return auth;
}
```

And the corresponding test should be strengthened to call `getApiKey()` (or `isAvailable()`) twice and assert only one warning was emitted across both calls.
P2 — cachedBridgeAuth ?? buildAuthFromEnvOAuthToken(...) in buildSdkConfig() treats null as a cache miss
File: packages/daemon/src/lib/providers/anthropic-to-codex-bridge-provider.ts, line 531
cachedBridgeAuth is typed AppServerAuth | null | undefined where null means "resolved but no auth available". The ?? null-coalescing operator treats both null and undefined as falling through, so if cachedBridgeAuth === null (not currently reachable while CODEX_OAUTH_TOKEN is set, but possible via reset at line 759), buildSdkConfig() would re-parse and re-warn. This is a latent correctness issue. Consider this.cachedBridgeAuth !== undefined ? this.cachedBridgeAuth ?? undefined : buildAuthFromEnvOAuthToken(...) or restructuring to use undefined as the sole sentinel.
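The `undefined`/`null` distinction can be isolated in a small sketch. The class and method names are illustrative, not the provider's:

```typescript
// Three-state sentinel: undefined = not yet resolved, null = resolved but
// no auth available, object = resolved and available.
type Auth = { type: 'api_key'; apiKey: string };

class CachedAuthResolver {
  private cached: Auth | null | undefined = undefined;
  lookups = 0;

  constructor(private readonly lookup: () => Auth | null) {}

  resolve(): Auth | undefined {
    // `!== undefined` treats a cached null as a definitive answer, whereas
    // `this.cached ?? this.lookup()` would wrongly re-run the lookup on null.
    if (this.cached !== undefined) return this.cached ?? undefined;
    this.lookups += 1;
    this.cached = this.lookup();
    return this.cached ?? undefined;
  }
}
```

With `??` in place of the `!== undefined` guard, a resolver whose lookup once returned `null` would re-parse on every call, which is exactly the latent re-warn path described above.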
What's good
- CI change is minimal and correct: `OPENAI_API_KEY` → `CODEX_OAUTH_TOKEN` for the `providers-anthropic-to-codex-bridge` matrix module only.
- `buildAuthFromEnvOAuthToken()` is well-designed: the `chatgpt_account_id`-only check prevents mis-classifying non-OpenAI JWTs (e.g., GitHub tokens), the fallback to `api_key` is the right safe default, and the function is properly exported for unit testing.
- `Extract<AppServerAuth, { type: 'chatgpt' }>` replaces the raw cast from the previous iteration — correct.
- `setup.ts` now clears `CODEX_OAUTH_TOKEN`, closing the env-isolation gap.
- `buildAuthFromEnvOAuthToken()` unit tests are thorough: they cover the chatgpt path, sub-only JWT, plain non-JWT string, warning emission, and warning suppression.
- `parseJwtPayload()` defensively handles invalid base64 and non-3-part tokens with a try/catch.
- Online test hard-fail message is updated to mention `CODEX_OAUTH_TOKEN`.
lsm
left a comment
🤖 Review by claude-opus-4-6 (Anthropic)
Model: claude-opus-4-6 | Client: NeoKai | Provider: Anthropic
Recommendation: APPROVE
The iteration 3 fixes address all previously raised issues correctly. The implementation is clean, well-tested, and achieves the stated goal.
What was reviewed
- CI workflow change (`.github/workflows/main.yml`): simple, correct swap from `OPENAI_API_KEY` to `CODEX_OAUTH_TOKEN` scoped to the `providers-anthropic-to-codex-bridge` module only.
- `buildAuthFromEnvOAuthToken()` helper (`packages/daemon/src/lib/providers/anthropic-to-codex-bridge-provider.ts:154-176`): well-structured extraction. Correctly requires the OpenAI-specific `chatgpt_account_id` claim rather than falling back to generic `sub`, preventing mis-classification of non-OpenAI JWTs. The injectable `warnFn` parameter is a good pattern for testability.
- `getBridgeAuth()` `CODEX_OAUTH_TOKEN` branch (lines 326-337): properly placed at priority 0 (before `OPENAI_API_KEY`). Caches both `cachedBridgeAuth` and `cachedApiKey` to avoid repeated parsing. The `Extract<AppServerAuth, { type: 'chatgpt' }>` cast on line 335 is the correct type-narrowing pattern replacing the previous raw cast.
- `buildSdkConfig()` cache-first approach (lines 527-532): the `this.cachedBridgeAuth ?? buildAuthFromEnvOAuthToken(...)` pattern correctly avoids double JWT parsing and double warning emission. The fallback to direct parsing handles the edge case where `buildSdkConfig()` is called before any async discovery method.
- Unit test setup (`packages/daemon/tests/unit/setup.ts:49`): `CODEX_OAUTH_TOKEN` is properly cleared alongside all other credential env vars.
- Test coverage: comprehensive tests for the new functionality:
  - `buildAuthFromEnvOAuthToken()` direct tests: chatgpt auth, sub-only JWT, plain string, warning emission, no-warning on success
  - Priority 0 test verifying `CODEX_OAUTH_TOKEN` wins over `OPENAI_API_KEY` and file auth
  - Non-JWT and sub-only JWT fallback paths
  - `getAuthStatus()` returns `isAuthenticated=false` for env-var-only auth (correct, since env vars are daemon-only)
  - Double-warning prevention test using a `Logger.prototype.warn` spy
  - `buildSdkConfig()` integration with `CODEX_OAUTH_TOKEN`
Minor observations (P3)
- Line 159: the line `const openaiAuth = payload?.['https://api.openai.com/auth'] as Record<string, string> | undefined;` exceeds 100 chars (the project's configured line width), though the formatter likely accepts this since it is a single expression.
- `parseJwtPayload` is not exported — this is the right call since it is an implementation detail, but if more JWT parsing is needed elsewhere in the future, it could be promoted.
No P0, P1, or P2 issues found. The implementation is correct, complete, and well-covered by tests.
lsm
left a comment
🤖 Review by claude-opus-4-6 (Anthropic)
Model: claude-opus-4-6 | Client: NeoKai | Provider: Anthropic
Recommendation: APPROVE
This is the fourth iteration of the PR, and the implementation is now clean and correct. The three issues from the previous review have all been properly addressed:
- Early cache-hit return in `getBridgeAuth()` (lines 329-331): the `CODEX_OAUTH_TOKEN` branch now checks `this.cachedBridgeAuth !== undefined` before calling `buildAuthFromEnvOAuthToken()`, correctly short-circuiting on cached results. The `?? undefined` coercion properly converts `null` (resolved-but-unavailable) back to `undefined` for the caller.
- `!== undefined` check in `buildSdkConfig()` (line 537): using `!== undefined` instead of `??` is correct here. With `??`, a `null` cached value (meaning "resolved but unavailable") would fall through because `null ?? ...` evaluates the RHS. The explicit `!== undefined` treats both `null` and non-null `AppServerAuth` as cache hits, preventing re-parsing.
- Strengthened warning-dedup test (lines 373-399 in the test file): the test now covers `isAvailable()`, `getApiKey()`, `getModels()`, and `buildSdkConfig()` in sequence, asserting exactly one warning across all paths. This is a thorough exercise of the caching invariant.
Implementation Quality
- `buildAuthFromEnvOAuthToken()` helper: well-designed — pure function with injectable warning callback, properly exported for unit testing, correct JWT claim inspection logic (only trusts `chatgpt_account_id`, not generic `sub`).
- Caching semantics: the three-state cache (`undefined` = unresolved, `null` = resolved-unavailable, `AppServerAuth` = resolved-available) is used consistently across `getBridgeAuth()` and `buildSdkConfig()`.
- Test coverage: comprehensive — covers ChatGPT JWT, sub-only JWT, non-JWT string, warning emission/suppression, priority ordering, cache deduplication, `getModels()` availability, `getAuthStatus()` exclusion, and `buildSdkConfig()` integration.
- CI change: straightforward env var rename in `.github/workflows/main.yml`.
- Unit test setup: `CODEX_OAUTH_TOKEN` correctly cleared in `packages/daemon/tests/unit/setup.ts`.
Minor Observations (P3)
- `buildSdkConfig()` line 548: `const auth = envAuth ?? this.cachedBridgeAuth ?? fileAuth ?? undefined;` — the trailing `?? undefined` is redundant since `undefined` is already the default when all preceding terms are nullish. Not a bug, just cosmetic.
No blocking issues found. The implementation is correct, well-tested, and ready to merge.
lsm
left a comment
🤖 Review by claude-sonnet-4-6 (Anthropic)
Model: claude-sonnet-4-6 | Client: NeoKai | Provider: Anthropic
Recommendation: APPROVE — Round 4 fixes are correct. No blocking issues remain.
Summary
The fourth-iteration fixes correctly address the two caching concerns raised in round 3. The getBridgeAuth() early-return shortcut prevents repeated JWT parsing and warning re-emission, and the !== undefined guard in buildSdkConfig() correctly handles the null cache sentinel. The strengthened double-warning test covers all four callers in sequence.
What was verified
Cache-hit shortcut in getBridgeAuth() (lines 329–331 of the provider)
The early return at if (this.cachedBridgeAuth !== undefined) { return this.cachedBridgeAuth ?? undefined; } fires before buildAuthFromEnvOAuthToken() on repeated calls. Because buildAuthFromEnvOAuthToken() always returns a non-null AppServerAuth (either chatgpt or api_key), cachedBridgeAuth is set to a concrete auth object on the first call and all subsequent callers get the cached result with no re-parse and no re-warn. Correct.
!== undefined check in buildSdkConfig() (lines 537–539)
The comment and the code are technically sound. In practice, cachedBridgeAuth cannot be null while CODEX_OAUTH_TOKEN is truthy — the null sentinel is only written at line 370, which is unreachable when the token branch is taken. However using !== undefined rather than ??/truthy is the right defensive pattern given how the three-state sentinel is documented throughout the class (undefined = unresolved, null = resolved-unavailable, object = resolved). No issue here.
Double-warning test
The test at line 370 creates a fresh provider with a sub-only JWT and calls isAvailable(), getApiKey(), getModels(), and buildSdkConfig() in sequence. It filters warning calls that contain 'CODEX_OAUTH_TOKEN' and asserts exactly one fires. This correctly exercises the full codepath including the synchronous buildSdkConfig() fallback path. The spy is scoped to Logger.prototype.warn which is what logger.warn() in the module resolves to. The finally { warnSpy.mockRestore() } cleanup prevents leakage. Well done.
Extract<AppServerAuth, { type: 'chatgpt' }> cast (line 339)
The typed cast narrows correctly, although a simple (auth as { accessToken: string }).accessToken would be marginally safer if the AppServerAuth discriminated union ever grows a third type with an accessToken field. Not a blocking concern given current types.
Remaining observations (non-blocking)
P3 — Stale comment in buildSdkConfig()
The comment at line 533 says "resolved, unavailable" for the null case, which is accurate in the class-wide sense but may confuse future readers because cachedBridgeAuth is provably non-null when CODEX_OAUTH_TOKEN is set (the branch that sets null is mutually exclusive with the token being present). A brief clarifying parenthetical such as (null is set only via the file-based path; treated here as a safety net) would prevent confusion, but it is purely cosmetic.
P3 — Test buildSdkConfig() uses CODEX_OAUTH_TOKEN env var as OAuth auth (line 351)
This test calls buildAuthFromEnvOAuthToken(fakeToken) directly to assert auth.type === 'chatgpt' and then separately creates a provider. The first assertion is testing the helper in isolation (already covered in the dedicated buildAuthFromEnvOAuthToken() suite), while the provider assertion only checks that the bridge server URL is well-formed — it does not verify that the bridge server was actually started with chatgpt-type auth. This is a coverage gap rather than a bug: the test confirms availability but not correct auth propagation. Acceptable for a unit test that intentionally avoids inspecting internal bridge-server state, but worth noting for a future integration test.
P3 — Online test comment (line 8 in the online test file)
The updated requirements line lists CODEX_OAUTH_TOKEN (OAuth token, preferred), OPENAI_API_KEY, or CODEX_API_KEY — this is accurate and matches the updated CI workflow. No issue.
CI change (.github/workflows/main.yml)
The one-line swap from OPENAI_API_KEY to CODEX_OAUTH_TOKEN is correct. The hard-fail policy (non-empty secret required, no skip guard) is preserved. The secret name CODEX_OAUTH_TOKEN must be provisioned in the repo's CI secrets before merging, per the CLAUDE.md hard-fail rule — but that is an operational step, not a code defect.
Overall: The implementation is clean, the caching logic is correct, the tests are meaningful, and all round-3 feedback has been properly addressed. Ready to merge once CODEX_OAUTH_TOKEN is provisioned in CI secrets.
… CODEX_OAUTH_TOKEN

Replace the OpenAI API key (paid, per-token billing) with an OAuth token in CI for the codex bridge online tests. Add CODEX_OAUTH_TOKEN env var support to AnthropicToCodexBridgeProvider as the highest-priority credential source, using chatgpt-type (OAuth) auth when the token is a valid JWT and falling back to api_key auth otherwise.
- Extract buildAuthFromEnvOAuthToken() as exported module-level helper eliminating duplicated JWT-parsing logic between getBridgeAuth() and buildSdkConfig()
- Only produce chatgpt-type auth when OpenAI-specific chatgpt_account_id claim is present; do NOT fall back to sub to avoid mis-classifying non-ChatGPT JWTs (e.g. GitHub tokens) as chatgpt auth
- Add logger.warn when CODEX_OAUTH_TOKEN falls back to api_key auth so CI operators can spot mis-configuration
- Cache CODEX_OAUTH_TOKEN auth result in getBridgeAuth() for consistency with file-based credential paths
- Add dedicated buildAuthFromEnvOAuthToken() tests covering chatgpt/api_key selection, sub-only JWT, and warning emission
- Fix test description: "over OPENAI_API_KEY and file-based auth"
- Fix stale inline comment in online test to mention CODEX_OAUTH_TOKEN
- Clear CODEX_OAUTH_TOKEN in unit test setup to prevent real API calls
from a developer's environment leaking into unit tests
- Prefer cachedBridgeAuth in buildSdkConfig() when CODEX_OAUTH_TOKEN is
set, avoiding duplicate JWT parsing and double warning emission per
workspace
- Add test asserting warning fires exactly once after cache is warmed
- Use Extract<AppServerAuth, {type:'chatgpt'}> cast instead of raw cast
for the cachedApiKey assignment in getBridgeAuth()
- Add cache-hit shortcut inside the CODEX_OAUTH_TOKEN branch of getBridgeAuth() so repeated callers (isAvailable, getApiKey, getModels) return the cached result without re-parsing the JWT or re-emitting the api_key fallback warning
- Replace ?? with !== undefined check in buildSdkConfig() so null (resolved-but-unavailable) is treated as a definitive cache hit and does not fall through to re-parse the token
- Strengthen warning-dedup test to cover isAvailable/getApiKey/getModels → buildSdkConfig() call sequence, ensuring exactly one warning fires across all getBridgeAuth() callers
cd64a6f to 5ac83db
Add a comment block in main.yml near the CODEX_OAUTH_TOKEN secret usage explaining how to extract the token from ~/.codex/auth.json after `codex login`, and noting that OAuth tokens are short-lived (suggest periodic rotation or using CODEX_API_KEY as a non-expiring alternative). Also add a Codex bridge section to .env.example documenting all four credential sources (CODEX_OAUTH_TOKEN, OPENAI_API_KEY, CODEX_API_KEY, file-based), with step-by-step instructions for obtaining the OAuth token.
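A possible shape for the requested `.env.example` section, assuming the priority order stated in the reviews above; the exact wording and the `auth.json` field layout are up to the PR author:

```shell
# --- Codex bridge credentials (highest priority first) ---

# Preferred: OAuth token. After running `codex login`, extract the
# access token from ~/.codex/auth.json.
# Note: OAuth tokens are short-lived; rotate periodically or use
# CODEX_API_KEY below as a non-expiring alternative.
CODEX_OAUTH_TOKEN=

# Alternative: OpenAI API key (paid, per-token billing).
OPENAI_API_KEY=

# Alternative: non-expiring Codex API key.
CODEX_API_KEY=

# If none of the above are set, the provider falls back to file-based
# credentials in ~/.codex/auth.json.
```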