feat(cli-merge-pipeline): write pending file in the ai step (task 1-B) #70
Conversation
…k 1-B) Replace the post_steps "ai" branch, until now a SKIP placeholder, with pending-file writing per ADR-029. The file is a deterministic artifact feeding hooks-stop-feedback-dispatch (task 1-C) and the post-merge-feedback skill (task 1-E).
- New PipelineContext (pr_number / owner_repo)
- Extend run_steps to take Option<&PipelineContext> (pre_steps=None)
- pending_file module: PendingFile schema, read_existing, write_atomic (tmp → rename), is_valid_owner_repo (newline-injection defense)
- run_ai_step: implements every branch of ADR-029 §conflict policy / §corruption tolerance (None→SKIP, corrupt→delete+overwrite, consumed→overwrite, active→WARN+PASS)
- Document the environment assumptions for atomic rename and the fallback policy in the docstring
- Write failures are WARN + PASS (the merge itself has already completed, so the pipeline is not stopped)
- 22 new unit tests (pending_file 10 + run_ai_step 12)
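The tmp → rename pattern named above can be sketched as follows. This is a minimal illustration under the assumption of same-directory rename atomicity; the pid+counter tmp suffix shown here is an assumption of the sketch, not necessarily the module's exact naming:

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;
use std::sync::atomic::{AtomicU64, Ordering};

/// Monotonic counter so concurrent writers in one process get distinct tmp names.
static TMP_COUNTER: AtomicU64 = AtomicU64::new(0);

/// Write `json` to `path` atomically: write a uniquely named tmp file in the
/// same directory, fsync it, then rename over the target. Readers either see
/// the old file or the complete new one, never a partial write.
fn write_atomic(path: &Path, json: &str) -> std::io::Result<()> {
    let base = path.file_name().unwrap_or_default().to_string_lossy().into_owned();
    let n = TMP_COUNTER.fetch_add(1, Ordering::Relaxed);
    let tmp = path.with_file_name(format!("{base}.tmp.{}.{n}", std::process::id()));
    let mut f = File::create(&tmp)?;
    f.write_all(json.as_bytes())?;
    f.sync_all()?; // flush to disk before the rename makes the file visible
    drop(f);
    if let Err(e) = fs::rename(&tmp, path) {
        let _ = fs::remove_file(&tmp); // best-effort cleanup on failure
        return Err(e);
    }
    Ok(())
}
```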
📝 Walkthrough
Introduces ADR-029-compliant pending-file handling for post-merge. Adds PipelineContext and owner/repo validation; depending on the existing pending state, the file is either created exclusively or overwritten via tmp → rename. Also adds fork-safe remote branch deletion and a large set of unit tests.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Pipeline as Pipeline Executor
participant GH as gh CLI
participant FS as File System
Pipeline->>GH: gh repo view --json nameWithOwner
GH-->>Pipeline: owner/repo
Pipeline->>FS: read_existing(pending file path)
FS-->>Pipeline: None / Corrupt / Consumed / Active
alt None
Pipeline->>Pipeline: validate PipelineContext & owner/repo
Pipeline->>FS: write_new_exclusive (create_new / O_EXCL)
FS-->>Pipeline: Success / AlreadyExists
else Consumed or Corrupt
Pipeline->>FS: write_overwrite via tmp→rename
FS-->>Pipeline: Success
else Active
Pipeline->>Pipeline: skip AI step (log SKIP/WARN)
end
Pipeline->>GH: gh pr view --json headRefName,isCrossRepository
GH-->>Pipeline: headRefName + isCrossRepository
alt isCrossRepository == false
Pipeline->>GH: gh api DELETE /repos/.../git/refs/heads/{percent-encoded}
GH-->>Pipeline: delete result
else
Pipeline->>Pipeline: skip remote branch deletion
end
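The branch dispatch the diagram describes can be reduced to a single match. A hypothetical sketch: ExistingPending and WriteMode mirror names that appear in this thread, but the signature and exact variants are assumptions:

```rust
/// Possible states of the pending file on disk (names follow the diagram).
enum ExistingPending {
    None,            // no file yet: create exclusively
    Corrupt(String), // unparseable or schema-mismatched, with a reason
    Consumed,        // a previous run already dispatched it
    Active,          // another run's live pending file
}

/// How the ai step should write for a given existing state.
#[derive(Debug, PartialEq)]
enum WriteMode {
    NewExclusive, // create_new(true) / O_EXCL
    Overwrite,    // tmp -> rename replacement
}

/// `None` means skip the write (log SKIP/WARN), matching the Active branch.
fn determine_write_mode(existing: &ExistingPending) -> Option<WriteMode> {
    match existing {
        ExistingPending::None => Some(WriteMode::NewExclusive),
        ExistingPending::Corrupt(_) | ExistingPending::Consumed => Some(WriteMode::Overwrite),
        ExistingPending::Active => None,
    }
}
```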
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/cli-merge-pipeline/src/main.rs`:
- Around line 485-493: The log currently interpolates the raw owner_repo into
log_step when pending_file::is_valid_owner_repo(owner_repo) fails, leaving it
vulnerable to log injection; update the log message in the log_step call to
escape control/newline characters (for example by using debug formatting like
"{:?}" or an escaping helper) so the owner_repo value is escaped before logging;
change the format string used in the log_step invocation that reports the
invalid owner_repo to include an escaped representation of owner_repo
(referencing pending_file::is_valid_owner_repo and the log_step call that
currently formats owner_repo).
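The escaping this comment suggests can be shown in isolation. A hypothetical helper, not the PR's actual log_step: Debug formatting (`{:?}`) escapes control characters, so a newline in attacker-influenced input cannot forge a second log line:

```rust
/// Format a warning about an invalid owner/repo value using Debug ({:?})
/// instead of Display ({}), so embedded control characters are escaped.
/// (Hypothetical message text; the real log_step call is not shown.)
fn escaped_log_message(owner_repo: &str) -> String {
    format!("[WARN] invalid owner_repo: {:?}", owner_repo)
}
```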
- Around line 649-662: The code fetches only headRefName via run_gh_logged and
blindly deletes the same ref path from the current repo (ref_path built from
encoded_branch), which can accidentally delete a same-named branch in the base
repo for forked PRs; change the gh query to request both headRefName and
isCrossRepository (e.g. via run_gh_logged with "--json",
"headRefName,isCrossRepository"), skip the DELETE when isCrossRepository is
true, and replace the manual branch.replace('/', "%2F") with a proper
percent-encoding helper that encodes entire path segments (use it when building
ref_path) so special characters beyond '/' are safely encoded before passing to
Command::new("gh").
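The percent-encoding helper this comment asks for could look like the sketch below: RFC 3986 unreserved bytes pass through, everything else (including '/') becomes %XX. This is an assumed shape, not necessarily the exact helper the PR adds:

```rust
/// Percent-encode `s` for use as a single URL path segment (RFC 3986):
/// unreserved bytes (ALPHA / DIGIT / '-' / '.' / '_' / '~') pass through;
/// all other bytes, including '/', are encoded as %XX.
fn percent_encode_path_segment(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for b in s.bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'.' | b'_' | b'~' => {
                out.push(b as char);
            }
            _ => out.push_str(&format!("%{:02X}", b)),
        }
    }
    out
}
```

A branch name like `feature/fix#1` then yields one safe segment for the DELETE request on `/repos/{owner}/{repo}/git/refs/heads/{ref}`.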
In `@src/cli-merge-pipeline/src/pending_file.rs`:
- Around line 100-120: The atomic write uses a fixed tmp filename (tmp_path from
path.with_extension("json.tmp")) which can be shared by concurrent writers;
update write_atomic to create a unique temporary file per writer (e.g. include
PID, thread id, and a random/uuid nonce or use a directory-safe temp file API)
in the same directory as path, write the JSON to that unique tmp file, then
rename that unique tmp file to path and still remove the tmp on error; adjust
references to tmp_path and the std::fs::write / std::fs::rename /
std::fs::remove_file logic in write_atomic to operate on the per-writer tmp
path.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2daa3bbd-7fa9-4f60-be62-2e6d83380f2a
📒 Files selected for processing (2)
src/cli-merge-pipeline/src/main.rs
src/cli-merge-pipeline/src/pending_file.rs
Addressed the 3 CodeRabbit findings:
- Minor (main.rs:493): escape the owner_repo log with {:?} (log injection defense)
- Major (main.rs:662): added an isCrossRepository check to prevent accidental deletion of the upstream base branch for fork PRs; a percent_encode_path_segment helper encodes all special characters in the branch name per RFC 3986
- Major (pending_file.rs:120): made write_atomic's tmp path unique with pid + a monotonically increasing counter, preventing races between concurrent writers
Tests added: percent_encode_path_segment (4) + should_skip_branch_delete (2). cargo test: 39 pass / clippy / fmt / release build: all clean.
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/cli-merge-pipeline/src/main.rs`:
- Around line 447-456: The log currently interpolates the unescaped reason from
ExistingPending::Corrupt directly into log_info/log_step which risks log
injection; update the logging calls (inside the match arm for
ExistingPending::Corrupt) to escape or debug-format the reason (e.g., use {:?}
or an explicit escape routine) when building the message passed to log_info and
log_step for the pending_path removal and warning message so control
characters/newlines are not rendered raw.
- Around line 560-575: The check for existing pending and the subsequent write
must be performed inside a single critical section: acquire an exclusive pending
lock before calling check_pending_preconditions/read_existing and hold it until
write_pending_and_log/write_atomic completes; if lock acquisition fails, bail
early. Modify call sites so try_acquire_pending_lock() is invoked prior to
check_pending_preconditions (and before constructing PendingFile) and its guard
is kept alive until after write_pending_and_log returns. Implement
try_acquire_pending_lock() to create the lock file with create_new(true) (or
equivalent) and remove it in Drop so the read_existing→write_atomic sequence is
serialized and TOCTOU is prevented.
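Exclusive creation via create_new, as this comment proposes, can be sketched as below (assumed signature; the PR's write_new_exclusive may differ in detail):

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

/// Create `path` only if it does not exist yet. If a concurrent writer won
/// the race, the OS reports ErrorKind::AlreadyExists atomically, which closes
/// the read -> write TOCTOU window without a separate lock file.
fn write_new_exclusive(path: &Path, json: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new()
        .write(true)
        .create_new(true) // O_EXCL semantics: fail instead of truncating
        .open(path)?;
    f.write_all(json.as_bytes())?;
    f.sync_all()
}
```

The caller can then surface AlreadyExists as the WARN-and-skip path instead of silently losing a write.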
- Around line 1191-1201: The test ai_step_leaves_no_tmp_residue_on_success
currently only checks for path.with_extension("json.tmp") but write_atomic()
creates temp files like "<file>.tmp.<pid>.<counter>", so update the test to look
in the parent directory of path (use the same base filename/prefix derived from
path.file_name()) and assert that no entries start with "<base>.tmp." (i.e.,
scan parent_dir.read_dir() and fail if any file_name starts_with format ".tmp."
pattern); keep references to run_ai_step() and PipelineContext as the setup
steps and ensure the assertion checks for any "<file>.tmp.*" residues rather
than only "*.json.tmp".
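The prefix scan this comment describes can be sketched as a small helper (hypothetical name, not code from the PR):

```rust
use std::path::Path;

/// True if `dir` contains any "<base>.tmp.*" leftovers for the given base
/// file name, matching tmp files of the "<file>.tmp.<pid>.<counter>" form.
fn has_tmp_residue(dir: &Path, base: &str) -> std::io::Result<bool> {
    let prefix = format!("{base}.tmp.");
    for entry in std::fs::read_dir(dir)? {
        if entry?.file_name().to_string_lossy().starts_with(&prefix) {
            return Ok(true);
        }
    }
    Ok(false)
}
```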
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3f07a6d4-26e1-462a-b2c4-9161380bc7de
📒 Files selected for processing (2)
src/cli-merge-pipeline/src/main.rs
src/cli-merge-pipeline/src/pending_file.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- src/cli-merge-pipeline/src/pending_file.rs
Addressed the 3 findings from CodeRabbit's second review:
- Major (main.rs:575): eliminated the invisible loss caused by the read→write TOCTOU race on the pending file. Dropped write_atomic for the creation path in favor of direct exclusive creation via `OpenOptions::create_new(true)` + fsync. When AlreadyExists is detected, a WARN log makes the dropped write visible (satisfying ADR-029 §conflict policy's "skip + WARN" guarantee at the implementation level). The overwrite path remains tmp → rename.
- Minor (main.rs:456): escaped Corrupt(reason) against log injection, `{}` → `{:?}`. Includes the determine_write_mode refactor.
- Minor (main.rs:1201): updated the tmp-residue detection test to a base-name prefix scan so residue of the `{name}.tmp.{pid}.{counter}` form is also detected.
Incidental changes:
- Added a WriteError enum (AlreadyExists / Io / Serialize)
- Added a WriteMode enum (NewExclusive / Overwrite)
- Added PendingFile.producer: Option<String> (schema v1 compatible) in the `cli-merge-pipeline@pid-{pid}@{iso8601}` format, for tracing dropped writes
- Revised ADR-029 §conflict policy / §corruption tolerance / §pending-file JSON schema v1 (documents the create_new adoption and the producer field)
Tests added: write_new_exclusive_creates_file,
write_new_exclusive_returns_already_exists_when_target_present,
write_new_exclusive_atomic_under_concurrent_writers,
write_overwrite_replaces_existing_file, producer_string,
pending_file_roundtrip_with_producer,
pending_file_without_producer_field_deserializes,
ai_step_sets_producer_field,
ai_step_warns_when_concurrent_writer_wins_new_exclusive.
cargo test: 46 pass / clippy / fmt / release build: all clean.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docs/adr/adr-029-post-merge-feedback-auto-trigger.md (1)
160-170: ⚠️ Potential issue | 🟡 Minor
Please spell out the unknown-status case in the Stop hook's corruption branch as well. Line 138 treats an unknown status as corruption, but this branch table has no corresponding row. To prevent a spec gap in the 1-C implementation, align the hook-side deletion conditions too. Suggested fix:
 | JSON parse 失敗 | ファイル削除 + silent exit |
 | `schema_version` 不一致 | ファイル削除 + silent exit |
+| `status` が `"pending"` / `"dispatched"` / `"consumed"` 以外 | ファイル削除 + silent exit |
 | stale (`created_at + 24h < now`) | ファイル削除 + silent exit |
 | `status == "pending"` | additionalContext 出力 + `status="dispatched"` に atomic 更新 |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/adr/adr-029-post-merge-feedback-auto-trigger.md` around lines 160 - 170, The table of behaviors is missing the "unknown status" case for the stop-hook branch; add a row for an unknown/unsupported `status` value (e.g., any value not in {"pending","dispatched","consumed"}) and document that it should be treated as a corrupted file: perform file deletion and a silent exit. Ensure the new entry mirrors the stop-hook deletion logic used for other corruption cases (see existing rows for "JSON parse 失敗", "schema_version 不一致", and "size 0") and reference the `status` field and the stop-hook handling so implementers know to delete the file when `status` is unknown.
🧹 Nitpick comments (1)
src/cli-merge-pipeline/src/main.rs (1)
1276-1297: Please verify the AlreadyExists path under real concurrency. This test only has the second call read the existing Active file and skip, so the create_new AlreadyExists WARN branch at lines 515-519 is never exercised. Since this is the heart of the contention guarantee, either add a test that drives multiple threads into run_ai_step simultaneously (e.g. with a Barrier), or rename the test to match the Active-skip behavior it actually verifies.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli-merge-pipeline/src/main.rs` around lines 1276 - 1297, The test currently calls run_ai_step sequentially so it only exercises the "read existing Active and skip" path and never hits the create_new -> AlreadyExists warn branch; either update the test to actually provoke an AlreadyExists by spawning multiple threads that synchronize with a Barrier and concurrently call run_ai_step (using ai_step(None) and the same path) so one thread wins creation and others hit the AlreadyExists branch, or rename the test to clearly assert the Active-skip behavior; target the run_ai_step, ai_step, and the create_new/AlreadyExists branch when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/cli-merge-pipeline/src/pending_file.rs`:
- Around line 40-42: The comment for the static TMP_COUNTER is incorrect: update
its description to state it is used to generate unique tmp file names for
write_overwrite (not write_new_exclusive), because write_new_exclusive does not
use tmp paths; locate the static TMP_COUNTER declaration and replace the current
comment to clearly say it prevents tmp path collisions when multiple writers
call write_overwrite concurrently and mention AtomicU64::new(0) as the
initialization point.
---
Outside diff comments:
In `@docs/adr/adr-029-post-merge-feedback-auto-trigger.md`:
- Around line 160-170: The table of behaviors is missing the "unknown status"
case for the stop-hook branch; add a row for an unknown/unsupported `status`
value (e.g., any value not in {"pending","dispatched","consumed"}) and document
that it should be treated as a corrupted file: perform file deletion and a
silent exit. Ensure the new entry mirrors the stop-hook deletion logic used for
other corruption cases (see existing rows for "JSON parse 失敗", "schema_version
不一致", and "size 0") and reference the `status` field and the stop-hook handling
so implementers know to delete the file when `status` is unknown.
---
Nitpick comments:
In `@src/cli-merge-pipeline/src/main.rs`:
- Around line 1276-1297: The test currently calls run_ai_step sequentially so it
only exercises the "read existing Active and skip" path and never hits the
create_new -> AlreadyExists warn branch; either update the test to actually
provoke an AlreadyExists by spawning multiple threads that synchronize with a
Barrier and concurrently call run_ai_step (using ai_step(None) and the same
path) so one thread wins creation and others hit the AlreadyExists branch, or
rename the test to clearly assert the Active-skip behavior; target the
run_ai_step, ai_step, and the create_new/AlreadyExists branch when making the
change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ae7b0d65-92e8-4a6a-be44-95c4e89af34f
📒 Files selected for processing (3)
docs/adr/adr-029-post-merge-feedback-auto-trigger.md
src/cli-merge-pipeline/src/main.rs
src/cli-merge-pipeline/src/pending_file.rs
Addressed the 1 Minor finding from CodeRabbit's third review:
- Minor (pending_file.rs:41): TMP_COUNTER's doc comment referenced `write_new_exclusive`, but that function does not use a tmp path, so the comment said the opposite of the implementation. Corrected it to `write_overwrite` and noted that `write_new_exclusive` does not consult this counter. (The comment reference was accidentally rewritten by the bulk sed replacement when write_atomic was split into two paths in round 2. No impact on the implementation.)
@coderabbitai review
✅ Actions performed: Review triggered.
🧹 Nitpick comments (2)
src/cli-merge-pipeline/src/pending_file.rs (2)
238-251: The duplicate-slash check on repo is redundant (optional).
split_once('/') splits at the first '/', so repo can still contain further '/' characters, but the subsequent repo.chars().all(is_repo_ident_char) rejects '/' anyway (is_repo_ident_char only allows ASCII alphanumerics plus _ . -), so the !repo.contains('/') check on L244 can never change the outcome. Simplification only; behavior is unchanged. ♻️ Suggested simplification:
 pub(crate) fn is_valid_owner_repo(s: &str) -> bool {
     let Some((owner, repo)) = s.split_once('/') else {
         return false;
     };
     !owner.is_empty()
         && !repo.is_empty()
-        && !repo.contains('/')
         && owner.chars().all(is_repo_ident_char)
         && repo.chars().all(is_repo_ident_char)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli-merge-pipeline/src/pending_file.rs` around lines 238 - 251, The check for `!repo.contains('/')` in is_valid_owner_repo is redundant because split_once('/') already splits at the first slash and the subsequent `repo.chars().all(is_repo_ident_char)` rejects '/' anyway; remove the `!repo.contains('/')` clause to simplify the logic while preserving behavior, keeping the rest of is_valid_owner_repo and is_repo_ident_char unchanged.
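The simplified validator this nitpick arrives at can be written out in full. A sketch consistent with the code quoted in the review, with is_repo_ident_char reconstructed from its description (an assumption, since the function body is not shown):

```rust
/// ASCII alphanumerics plus '_', '.', '-', per the review's description.
fn is_repo_ident_char(c: char) -> bool {
    c.is_ascii_alphanumeric() || matches!(c, '_' | '.' | '-')
}

/// Validate "{owner}/{repo}": split on the first '/', then require both
/// halves to be non-empty and made only of identifier characters. The
/// identifier check already rejects any extra '/' in `repo`, as well as
/// newlines and other control characters.
fn is_valid_owner_repo(s: &str) -> bool {
    let Some((owner, repo)) = s.split_once('/') else {
        return false;
    };
    !owner.is_empty()
        && !repo.is_empty()
        && owner.chars().all(is_repo_ident_char)
        && repo.chars().all(is_repo_ident_char)
}
```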
303-312: The negative branch is unreachable for u64 input; also note both uses of DAYS_PER_ERA_M1 (optional).
- On L304, day_count = (epoch / SECS_PER_DAY) as i64 is always non-negative, and z on L307 already has CIVIL_EPOCH_OFFSET added, so it is always positive; the z < 0 side on L308 therefore never executes in this function. Keeping it for fidelity to Hinnant's original algorithm is defensible, but with a u64 interface it is kinder to the reader to either say so in a comment or simplify.
- On L310, DAYS_PER_ERA_M1 is also used as the divisor in the yoe computation (Hinnant's constant 146096). The doc on L283 only mentions the "era-floor sign correction", so documenting both uses will spare later readers confusion.
No behavioral impact; documentation/readability only.
♻️ Suggested comment addition:
-/// DAYS_PER_ERA - 1; used for the era-floor sign correction.
+/// DAYS_PER_ERA - 1 (= 146096). Used both for the era-floor sign correction
+/// and as the denominator in Hinnant's yoe computation.
 const DAYS_PER_ERA_M1: i64 = 146_096;
Or, make the u64 precondition explicit and fold the branch:
-    let z = day_count + CIVIL_EPOCH_OFFSET;
-    let era = (if z >= 0 { z } else { z - DAYS_PER_ERA_M1 }) / DAYS_PER_ERA;
+    // epoch is u64, so z is always non-negative (day_count + CIVIL_EPOCH_OFFSET > 0).
+    // The negative branch follows Hinnant's original but is unreachable here.
+    let z = day_count + CIVIL_EPOCH_OFFSET;
+    let era = z / DAYS_PER_ERA;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli-merge-pipeline/src/pending_file.rs` around lines 303 - 312, The negative-branch in epoch_secs_to_iso8601 is unreachable given the u64 epoch input (day_count and z are non-negative), so update the function: either remove/simplify the z/era computation to drop the z<0 branch or keep it but add a clear comment above the computation stating the u64 precondition and that the z<0 path is retained only for fidelity to Hinnant's algorithm; also expand the existing doc for DAYS_PER_ERA_M1 to note it serves both as the "era-floor sign correction" and as the divisor constant (146096) used in the yoe calculation so future readers understand both usages.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/cli-merge-pipeline/src/pending_file.rs`:
- Around line 238-251: The check for `!repo.contains('/')` in
is_valid_owner_repo is redundant because split_once('/') already splits at the
first slash and the subsequent `repo.chars().all(is_repo_ident_char)` rejects
'/' anyway; remove the `!repo.contains('/')` clause to simplify the logic while
preserving behavior, keeping the rest of is_valid_owner_repo and
is_repo_ident_char unchanged.
- Around line 303-312: The negative-branch in epoch_secs_to_iso8601 is
unreachable given the u64 epoch input (day_count and z are non-negative), so
update the function: either remove/simplify the z/era computation to drop the
z<0 branch or keep it but add a clear comment above the computation stating the
u64 precondition and that the z<0 path is retained only for fidelity to
Hinnant's algorithm; also expand the existing doc for DAYS_PER_ERA_M1 to note it
serves both as the "era-floor sign correction" and as the divisor constant
(146096) used in the yoe calculation so future readers understand both usages.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4ebba8b6-844a-4e86-b33e-930336670baf
📒 Files selected for processing (1)
src/cli-merge-pipeline/src/pending_file.rs
…72) - .claude/hooks-config.toml: remove the placeholder post_merge_learnings stub and enable the post_steps entry with type="ai" / prompt="post-merge-feedback". Dogfooding starts in this project.
- templates/hooks-config-{typescript,python}.toml: add the same block, commented out, for derived projects. Opt-in until the post-merge-feedback skill is deployed.
- docs/todo.md: delete 1-D along with its section header / preamble. Promote 1-E (skill-side Phase 0) and 1-F (formal adoption of ADR-014) to standalone sections with ADR-029 references.
Enables the ADR-029 trial run. Config-only change, no exe rebuild needed (cli-merge-pipeline.exe / hooks-stop-feedback-dispatch.exe were deployed in PR #70 / #71). Field validation will trigger on the next pnpm merge-pr.
Refs: ADR-029
Summary
- Added the PendingFile struct; the post-merge type = "ai" step writes it atomically (tmp → rename)
- New PipelineContext { pr_number, owner_repo }; run_steps extended to Option<&PipelineContext> (pre_steps pass None for backward compatibility)
- is_valid_owner_repo validates the {owner}/{repo} format (rejects newlines / control characters to defend the pending file and additionalContext against injection)
- run_pipeline's delete-branch changed to Command::new("gh").args(...), eliminating the cmd /c indirection (pre-push-review security-F-001)
Context
Of the three components of the ADR-029 post-merge-feedback auto-trigger flow (Claude auto-invokes /post-merge-feedback after pnpm merge-pr), namely CLI write → Stop hook → skill, this PR implements the CLI side (task 1-B).
- cli-merge-pipeline's post_steps.type = "ai" branch had been a SKIP placeholder; following the ADR-029 decision it is replaced with pending-file writing
- Enabling post_steps in .claude/hooks-config.toml (task 1-D) and the skill-side Phase 0 (task 1-E) are separate PRs; this PR is cut as a unit that is complete on the CLI alone
- run_ai_step was 116 lines in the first draft; takt pre-push-review decomposed it into a 34-line body + 3 helpers (validate_ai_step_context / check_pending_preconditions / write_pending_and_log)
Validation
- cargo test -p cli-merge-pipeline: 30 pass (22 new: pending_file 10 + run_ai_step 12; 8 existing)
- cargo test --workspace: all members green (0 failed)
- cargo clippy -p cli-merge-pipeline --all-targets -- -D warnings: no warnings
- cargo fmt -p cli-merge-pipeline --check: no diff
- cargo build --release -p cli-merge-pipeline: exe built successfully
- pnpm push pre-push-review (takt, 3 iterations, 22m): verdict=APPROVE; auto-applied the run_ai_step decomposition / extraction of the epoch helper's magic numbers into constants; auto-applied the delete-branch command-injection fix
References
- Provider of the post_steps.type = "ai" slot
- PendingFile.prompt is written safely into the JSON file by serde_json, but task 1-C must sanitize newlines / control characters before embedding it in additionalContext
Summary by CodeRabbit
New Features
Improvements
Documentation
Tests