Conversation
🦋 Changeset detected. Latest commit: f9dc301. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Note: Reviews paused. This branch appears to be under active development, so CodeRabbit has automatically paused this review to avoid overwhelming you with comments from an influx of new commits. This behavior is configurable in the review settings.
Walkthrough

Adds GCP KMS integration and runtime envs, and replaces the default keeper with a named keeper.
Sequence Diagram

```mermaid
sequenceDiagram
  participant Client
  participant PersonaHook as Persona Hook
  participant RiskSvc as Risk Assessment
  participant Allower as Firewall/Allower
  participant GCP as GCP KMS
  participant Chain as Chain / keeper
  Client->>PersonaHook: POST inquiry
  PersonaHook->>RiskSvc: perform risk assessment
  RiskSvc-->>PersonaHook: pass/fail
  alt pass
    PersonaHook->>PersonaHook: derive account
    PersonaHook->>Allower: getAllower()
    Allower->>GCP: init credentials / request KMS-backed account
    GCP-->>Allower: LocalAccount / credentials
    Allower-->>PersonaHook: allower client ready
    PersonaHook->>Allower: allow(account)
    Allower-->>PersonaHook: allow result
    PersonaHook->>Chain: keeper.poke(account, { ignore: [...] })
    Chain-->>PersonaHook: tx hash / result
    PersonaHook-->>Client: 200 success
  else fail
    PersonaHook-->>Client: rejection
  end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed.
Summary of Changes

Hello @aguxez, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the server's account management and security by integrating Google Cloud KMS for a new HSM-backed signing account.
Codecov Report❌ Patch coverage is
Additional details and impacted files

```
@@ Coverage Diff @@
##             main     #839   +/-   ##
=======================================
  Coverage   71.69%   71.70%
=======================================
  Files         228      229     +1
  Lines        8277     8311    +34
  Branches     2661     2668     +7
=======================================
+ Hits         5934     5959    +25
- Misses       2113     2116     +3
- Partials      230      236     +6
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 700b17194b
```ts
kms("allower")
  .then((client) =>
    client.exaSend(
      { name: "firewall.allow", op: "exa.firewall", attributes: { account: account.output } },
      { address, functionName: "allow", args: [account.output, true], abi: firewallAbi },
```
Make firewall allow retryable on transient KMS/RPC failures
This kms("allower") call is fire-and-forget and its failures are only captured, while the handler still continues to createUser and returns HTTP 200. I checked the server code paths for firewall.allow, and this is the only place it is triggered, so a transient KMS/RPC failure here leaves a KYC-approved account permanently unallowed (later flows will keep encountering NotAllowed) with no automatic recovery path. Please make this side effect durable/retryable (or fail the webhook when allow cannot be scheduled).
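One way to address this, sketched below under assumptions (the helper name, attempt count, and base delay are illustrative, not from the PR): wrap the allow side effect in a small backoff-retry helper so transient KMS/RPC failures are retried before the error is finally captured or the webhook is failed.

```typescript
// Illustrative retry wrapper (not from the PR): retries a failing async task
// with exponential backoff, rethrowing the last error only after all attempts.
async function withRetries<T>(
  task: () => Promise<T>,
  attempts = 5,
  baseMs = 200,
): Promise<T> {
  let lastError: unknown = new Error("no attempts made");
  for (let i = 0; i < attempts; i++) {
    try {
      return await task();
    } catch (error) {
      lastError = error;
      // back off before the next attempt, but not after the final one
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

The firewall call could then become `withRetries(() => kms("allower").then(...)).catch(captureException)`, or the webhook handler could `await` it and return a non-2xx status so Persona's own retry mechanism re-triggers the allow.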
```ts
if (firewallAddress) {
  const account = safeParse(Address, credential.account);
  if (account.success) {
    const address = firewallAddress;
    kms("allower")
      .then((client) =>
        client.exaSend(
          { name: "firewall.allow", op: "exa.firewall", attributes: { account: account.output } },
          { address, functionName: "allow", args: [account.output, true], abi: firewallAbi },
          { ignore: [`AlreadyAllowed(${account.output})`] },
        ),
      )
      .catch((error: unknown) => captureException(error, { level: "error" }));
  }
}
```
🚩 Fire-and-forget firewall allow becomes non-retriable after pandaId is set
The firewall allow call at server/hooks/persona.ts:309-317 is fire-and-forget (errors caught and sent to Sentry). If this call fails but the subsequent createUser at line 322 and DB update at line 332 both succeed, the pandaId guard at line 248 will cause any Persona webhook retry to return "already created" without re-attempting the firewall allow. This means a failed firewall allow is permanently lost and requires manual intervention. The team explicitly tests this behavior ("captures allower init failure without blocking panda creation" and "captures exaSend failure without blocking panda creation"), so this appears intentional. However, if the firewall is a hard requirement for the account to function, this could leave users in a broken state after KYC approval.
```ts
client.exaSend(
  { name: "firewall.allow", op: "exa.firewall", attributes: { account: account.output } },
  { address, functionName: "allow", args: [account.output, true], abi: firewallAbi },
  { ignore: [`AlreadyAllowed(${account.output})`] },
```
Bug: The error handling logic compares a checksummed address (account.output) with a potentially non-checksummed address from a revert reason, which may cause the comparison to fail.
Severity: MEDIUM
Suggested Fix
Ensure both addresses in the comparison are in the same format. Either apply checksumAddress to the address extracted from the error's cause.data.args before comparing it, or convert account.output to a non-checksummed format for the check.
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not valid.
Location: server/hooks/persona.ts#L314
Potential issue: In `persona.ts`, the error handling logic checks if a revert reason string includes a user's account address. The address used for this check, `account.output`, is checksummed. However, the revert reason is constructed by directly converting error arguments to strings using `cause.data?.args?.map(String)`, which likely does not apply checksumming. This mismatch between a checksummed address and a non-checksummed address in the error string will cause the check to fail, preventing certain errors from being correctly ignored.
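A minimal sketch of the case-insensitive variant of the suggested fix (the helper name is illustrative, not from the PR): since hex addresses only differ in letter case between checksummed and raw forms, lowercasing both sides before the `includes` check sidesteps the mismatch entirely.

```typescript
// Illustrative helper (not from the PR): match revert reasons against the
// ignore list case-insensitively, since checksummed addresses mix upper- and
// lower-case hex while stringified revert args are typically all-lowercase.
function matchesIgnoredReason(reason: string, ignored: string[]): boolean {
  const normalized = reason.toLowerCase();
  return ignored.some((pattern) => normalized.includes(pattern.toLowerCase()));
}
```

The alternative from the suggested fix — checksumming both sides (e.g. with viem's `getAddress`) before building the strings — also works, but normalizing case at comparison time avoids having to track which format each string was built with.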
co-authored-by: Miguel Diaz <github.com.hf06j@slmail.me>
```ts
export async function kms(key: string) {
  const account = await gcpHsmToAccount({
    hsmKeyVersion: `projects/${projectId}/locations/us-west2/keyRings/${keyRing}/cryptoKeys/${key}/cryptoKeyVersions/${version}`,
    kmsClient: new KeyManagementServiceClient({ keyFilename: await setupCredentials() }),
  });
  account.nonceManager = nonceManager;
  return extender(
    createWalletClient({
      chain,
      transport: http(`${chain.rpcUrls.alchemy.http[0]}/${alchemyAPIKey}`, {
        batch: true,
        async onFetchRequest(request) {
          captureRequests(parse(Requests, await request.json()));
        },
      }),
      account,
    }),
  );
}
```
🟡 kms() creates a new KMS client and fetches public key on every invocation — no caching
Every call to kms() at server/utils/gcp.ts:49-67 creates a brand new KeyManagementServiceClient (establishing a gRPC connection) and calls gcpHsmToAccount (which makes a KMS API call to retrieve the public key). Since kms("allower") is called on every KYC approval in server/hooks/persona.ts:309, each approval triggers:
- a new gRPC connection to GCP KMS
- an API call to fetch the same immutable public key
- a new `WalletClient` with a new HTTP transport

The public key for a given KMS key version never changes, so the resolved account and client could be cached (e.g., in a Map<string, Promise<...>>). Under load, the current implementation could exhaust GCP KMS API quotas and add unnecessary latency to the firewall allow operation.
Prompt for agents
The kms() function in server/utils/gcp.ts creates a new KeyManagementServiceClient, calls gcpHsmToAccount (an API call to fetch the public key), and creates a new WalletClient on every invocation. Since the public key for a given KMS key version is immutable, the result can be cached.
A simple approach: maintain a module-level Map<string, Promise<ReturnType<typeof extender>>> keyed by the key name (e.g. 'allower'). On first call for a given key, store the promise. On subsequent calls, return the cached promise. Add error handling to clear the cache entry if the promise rejects (similar to the pending pattern already used in setupCredentials()).
This avoids repeated gRPC connection setup and KMS API calls for the same key.
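A sketch of that cache under assumptions (the `cached` helper and the `buildKmsClient` stand-in are illustrative names, not from the PR): one pending or resolved promise per KMS key name, with the entry cleared on rejection so a later call can retry, mirroring the pending pattern described for setupCredentials().

```typescript
// Illustrative promise cache (not from the PR): memoizes one in-flight or
// resolved value per key, and evicts on rejection so failures are retryable.
const clients = new Map<string, Promise<unknown>>();

function cached<T>(key: string, init: () => Promise<T>): Promise<T> {
  const hit = clients.get(key);
  if (hit) return hit as Promise<T>;
  const pending = init().catch((error: unknown) => {
    clients.delete(key); // a failed init must not poison future calls
    throw error;
  });
  clients.set(key, pending);
  return pending;
}

// Hypothetical usage, where buildKmsClient stands in for the existing kms() body:
// export const kms = (key: string) => cached(key, () => buildKmsClient(key));
```

Storing the promise (rather than the resolved value) also deduplicates concurrent first calls: two simultaneous KYC approvals share one gRPC connection and one public-key fetch.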
Greptile Summary

This PR integrates Google Cloud KMS (via `@google-cloud/kms` and `@valora/viem-account-hsm-gcp`) to sign firewall `allow` transactions with an HSM-backed key, and adds a new `keeper.poke` utility that proactively pokes account assets (ETH, WETH, ERC-20) after KYC approval and on activity webhooks.

Key changes:
- `server/utils/gcp.ts`: new module handles credential bootstrapping (triple base64 decode → `/tmp/gcp-service-account.json`), lazy initialization with promise caching, and retryable KMS error classification.
- `server/utils/accounts.ts`: adds `getAccount()` (GCP KMS → viem `LocalAccount`), `allower()` (wallet client that wraps `allow` on the Firewall contract), and `poke()` (scans balances and pokes every non-zero asset via `exaSend`).
- `server/hooks/persona.ts`: after KYC approval calls `allow` then fire-and-forgets `poke`; `allowerPromise` caches the expensive KMS init across requests.
- `server/hooks/activity.ts`: calls `poke` after account deployment, with `NotAllowed` in the ignore list.

Verified findings:
- The `delay` function in `poke`'s `withRetry` uses a bitwise left-shift, which overflows at `count >= 31` (currently safe with `retryCount: 10`, but a footgun if the parameter is raised).
- The `DECODING_ITERATIONS = 3` constant lacks documentation explaining the triple base64 encoding strategy.
- The `keeper` client's `onFetchRequest` hook lacks error handling, unlike the `allower` version, making it susceptible to unhandled exceptions on malformed RPC payloads.

Confidence Score: 3/5. Concerns: (1) the bitwise-shift delay overflows if `retryCount` is raised, (2) the triple base64 decoding strategy lacks documentation, creating fragility, and (3) the missing try/catch in `keeper`'s `onFetchRequest` creates an inconsistency with `allower` that could surface unhandled exceptions. None of these are critical under normal conditions, but they represent code-quality debt worth fixing before production ramp-up. Files to review: `server/utils/accounts.ts` (bitwise-shift delay, error handling inconsistency) and `server/utils/gcp.ts` (undocumented constant).

Last reviewed commit: 0275499
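The shift-overflow finding can be illustrated with a small sketch (the `BASE_MS`/`MAX_MS` constants and helper name are assumptions, not values from the PR): `1 << count` is a 32-bit integer operation in JavaScript and wraps to a negative number once `count >= 31`, while `2 ** count` stays in double-precision and a cap bounds the wait regardless of the retry count.

```typescript
// Sketch only; BASE_MS and MAX_MS are assumed values, not from the PR.
const BASE_MS = 100;
const MAX_MS = 60_000;

// Safe exponential backoff: 2 ** count avoids the 32-bit wrap of 1 << count,
// and Math.min keeps the delay bounded even for large retry counts.
function backoffDelay(count: number): number {
  return Math.min(BASE_MS * 2 ** count, MAX_MS);
}
```

With the bitwise version, `1 << 31` evaluates to -2147483648, which a `setTimeout`-based delay silently clamps to 0, turning backoff into a hot retry loop.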
Context used:
- dashboard: AGENTS.md (source)
- dashboard: CLAUDE.md (source)

This is part 1 of 2 in a stack made with GitButler: