feat: migrate shell-based CI and devops tooling to xtask (#2945)

EffortlessSteven merged 15 commits into master.
Conversation
Summary by CodeRabbit
Walkthrough: Replaces many in-repo shell/Python scripts with thin wrappers that exec into new Rust `cargo xtask` commands.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant LegacyScript as "legacy script wrapper"
    participant CargoXtask as "cargo xtask (CLI)"
    participant Xtask as "xtask task module"
    participant External as "External (FS / git / gh / subprocess)"
    User->>LegacyScript: invoke script (args)
    LegacyScript->>CargoXtask: exec cargo xtask <cmd> (forward args)
    CargoXtask->>Xtask: dispatch to Rust task handler
    Xtask->>External: perform IO / git / gh / subprocess calls
    External-->>Xtask: return data / status
    Xtask-->>CargoXtask: return Result / exit code
    CargoXtask-->>LegacyScript: process exit propagated
    LegacyScript-->>User: final exit/output
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
040c289 to 6e77907
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6e779079e2
```rust
let exclude = manifest
    .get("exclude")
    .and_then(Value::as_array)
    .ok_or_else(|| eyre!("Workspace has no root-level `exclude` array"))?;
```
Read workspace exclude list from the correct table
`check_exclude_section` looks for `exclude` at the TOML root, but this repository defines it under `[workspace]` in `Cargo.toml`. As written, `manifest.get("exclude")` always resolves to `None` and the command errors with "Workspace has no root-level `exclude` array" even when the config is valid, so `cargo xtask validate-workspace-exclusions` is effectively broken on the current manifest layout.
```rust
    .unwrap_or("(unknown workflow)")
    .to_string();

let conclusion = run.get("conclusion").and_then(Value::as_str).unwrap_or("");
```
Exclude unfinished runs from failed-run accounting
The cost monitor now defaults a missing `conclusion` to an empty string and later treats every non-`"success"` conclusion as a failure. GitHub Actions REST run listings include in-progress runs with `conclusion: null`, so active runs are miscounted as failed (and skew the failure-rate output) whenever the sampling window includes running workflows; the previous shell implementation filtered out runs without a conclusion.
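The suggested fix can be sketched in isolation (a sketch only, not the crate's actual code; `failure_stats` is a hypothetical helper operating on `conclusion` values already extracted from the run listing):

```rust
fn failure_stats(conclusions: &[Option<&str>]) -> (usize, usize) {
    // In-progress runs report `conclusion: null`; drop them first so only
    // finished runs participate in the failure-rate accounting.
    let finished: Vec<&str> = conclusions.iter().filter_map(|c| *c).collect();
    let failed = finished.iter().filter(|&&c| c != "success").count();
    (finished.len(), failed)
}

fn main() {
    // Two finished runs (one failed) and one still in progress.
    let runs = [Some("success"), Some("failure"), None];
    assert_eq!(failure_stats(&runs), (2, 1));
}
```

With this shape, the in-progress run no longer inflates the failure count, matching the old shell filter's behavior.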
Actionable comments posted: 52
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@.ci/scripts/README.md`:
- Around line 62-67: Update the README usage example so the shim is invoked
explicitly from the repository root: replace the ambiguous
"./measure-ci-baseline.sh" with "./.ci/scripts/measure-ci-baseline.sh" (or
alternatively add a short note stating "run from .ci/scripts" before the
example). Ensure the example in the "Usage (wrapper shim)" section shows the
explicit path "./.ci/scripts/measure-ci-baseline.sh" so readers can run the
command from the repo root.
- Around line 22-38: The "Prerequisites" and "Installation of prerequisites"
sections currently list "Rust toolchain - for the workspace xtask binary" and
"`cargo`" but provide no install steps; update those sections to add a short
Rust install instruction and link to rustup (e.g., mention "Install Rust via
rustup: https://rustup.rs" and include platform-specific commands such as
running the rustup installer and setting the default toolchain), and note that
`cargo` is installed with rustup so first-time contributors can run `cargo
xtask` without extra setup — update the text near the "Rust toolchain - for the
workspace xtask binary" and the "Installation of prerequisites" block to reflect
this.
In `@CONTRIBUTING.md`:
- Around line 103-109: Update the Bash example so the placeholder and the
multi-line body string are properly quoted: change the pr assignment to quote
the placeholder (e.g., pr="<number>") to avoid shell redirection, and close the
open double-quote for the --body value passed to gh pr merge (ensure the
multi-line body string is terminated before the code fence ends); adjust the
example around the pr variable and the gh pr merge --body argument to be a
valid, copy-pastable Bash snippet.
In `@docs/CHANGELOG_WORKFLOW.md`:
- Around line 182-194: The bash examples use an unquoted version placeholder
that Bash treats as input redirection; update both occurrences of the command
string "cargo xtask release-turnkey <0.x.y>" in the docs so the placeholder is
quoted (e.g., wrap the <0.x.y> token in single or double quotes) in the
canonical command and the alternative manual release example to prevent shell
redirection errors.
In `@docs/project/XTASK_MIGRATION.md`:
- Around line 223-229: The migration totals table in XTASK_MIGRATION.md is out
of sync (the "Not Started" count shows 2 but only scripts/forensics/lib_gh.sh is
listed); update the hardcoded counts (e.g., the "Not Started", "Total scripts",
and "Total Lines" row values) to match the regenerated totals for the current
file list OR replace the numeric totals with non-numeric wording. Locate the
table entries shown (Category/Count/Total Lines) and either run the same script
you used to produce these metrics to recompute and replace the numbers, or
change the wording to avoid hardcoded metrics.
In `@justfile`:
- Around line 1619-1620: The justfile target version-check invokes a
non-existent xtask subcommand check-version-sync; either add and dispatch a
corresponding command handler named "check-version-sync" in xtask/src/main.rs
(ensure the CLI matches existing dispatch pattern and calls the
check/version-sync logic), or update the justfile to call the actual xtask
subcommand that is implemented (e.g., change cargo xtask check-version-sync to
the real subcommand name already defined in xtask/src/main.rs). Ensure the
symbol names match exactly so release-gate's dependency on version-check runs
the wired xtask command.
In `@scripts/build-timing-receipt.sh`:
- Around line 1-3: The script build-timing-receipt.sh doesn't change to the
repository root before invoking cargo xtask which breaks runs from outside the
workspace; update the script to compute REPO_ROOT the same way as the other
wrappers (set REPO_ROOT to the directory containing the repo root) and cd
"$REPO_ROOT" before running exec cargo xtask build-timing-receipt "$@"; modify
the top of the file around the existing exec line (the invocation of cargo xtask
build-timing-receipt "$@") to first define REPO_ROOT and perform cd into it so
the rest of the script runs from the repository root.
In `@scripts/ci-audit-workflows.py`:
- Around line 5-11: The script drops CLI args; update
scripts/ci-audit-workflows.py to import sys and forward sys.argv[1:] into the
subprocess call so callers can pass flags (e.g., use
subprocess.call(["cargo","xtask","ci-audit-workflows", *sys.argv[1:]],
cwd=repo_root) and preserve the existing exit behavior via raise SystemExit or
sys.exit); modify the top-level block around repo_root and the subprocess.call
invocation (referencing repo_root and the subprocess.call invocation) to include
the forwarded args.
In `@scripts/compare-build-timing.sh`:
- Around line 1-3: Restore repository-root normalization before executing the
xtask: set REPO_ROOT to the repository root (matching the pattern used by other
xtask wrappers) and cd "$REPO_ROOT" before calling exec cargo xtask
compare-build-timing "$@"; update compare-build-timing.sh to perform the
repo-root resolution and change directory so the wrapper works when invoked from
outside the workspace.
In `@scripts/devex-doctor.sh`:
- Line 8: The script's exec line drops all incoming CLI args; update the exec
invocation in scripts/devex-doctor.sh (the line invoking cargo xtask
devex-doctor) to forward the caller's arguments by appending "$@" so the shim
preserves the original CLI surface and passes any flags/options through to cargo
xtask devex-doctor.
In `@scripts/preflight.sh`:
- Line 4: The wrapper script scripts/preflight.sh is now cwd-dependent because
the exec line (exec cargo xtask ci-hygiene preflight "$@") no longer changes
into the repo root; restore location-independence by resolving the repository
root (for example via dirname "$0" or git rev-parse --show-toplevel) at the top
of scripts/preflight.sh and cd to that path before running exec cargo xtask
ci-hygiene preflight "$@"; update the script to compute and switch to the repo
root so callers can invoke the script by path from any CWD.
In `@scripts/quick-receipts.sh`:
- Line 4: The script scripts/quick-receipts.sh currently execs cargo xtask
directly and thus depends on the caller's CWD; update the script to first change
to the repository-root directory (e.g. compute the script dir via dirname
"${BASH_SOURCE[0]}" and walk up to the repo root) before calling exec cargo
xtask ci-hygiene quick-receipts "$@" so the shim behaves like the original
wrapper and is runnable via its script path regardless of ambient CWD.
In `@scripts/test_edge_cases.sh`:
- Line 4: The current wrapper script test_edge_cases.sh calls exec cargo xtask
ci-hygiene test-edge-cases "$@" which relies on the caller's cwd and breaks
path-based invocation; either remove the wrapper and update callers to invoke
cargo xtask ci-hygiene test-edge-cases directly (per XTASK_MIGRATION.md) or make
the script robust by anchoring Cargo to the repo root before exec (e.g., compute
repo root relative to the script location and cd there or pass --manifest-path
pointing to the repo Cargo.toml) so exec cargo xtask ci-hygiene test-edge-cases
"$@" always runs from the correct workspace.
In `@scripts/test_iterative_parser.sh`:
- Line 4: The script lost repo-root resolution so callers running it from
outside the repo now inherit the caller CWD; restore repo detection by computing
REPO_ROOT (e.g. via git rev-parse --show-toplevel or resolving the script
directory) and either cd "$REPO_ROOT" before the final exec or pass
--manifest-path "$REPO_ROOT/Cargo.toml" to the cargo xtask invocation; update
the line that runs cargo xtask (the exec cargo xtask ci-hygiene
test-iterative-parser "$@") to use the computed REPO_ROOT (export REPO_ROOT or
cd into it) so the xtask always runs from the repository root.
In `@scripts/test-capped.sh`:
- Line 4: The wrapper loses repo-root resolution and inherits the caller’s CWD;
restore location-independence in scripts/test-capped.sh by computing the
repository root from the script location (e.g., resolve $0 to the script
directory and its parent), set REPO_ROOT (or export it) and invoke cargo xtask
with the --manifest-path pointing to "$REPO_ROOT/Cargo.toml" (instead of calling
cargo from the caller CWD), preserving original args when delegating to exec
cargo xtask ci-hygiene test-capped "$@".
In `@scripts/test-lsp-cancellation.sh`:
- Line 4: The shim currently calls exec cargo xtask ... without preserving the
repository root, causing relative paths to resolve from the caller CWD; restore
the original repo-root hop by resolving REPO_ROOT (as the previous wrapper did)
and forward Cargo the manifest path (e.g. pass --manifest-path
"$REPO_ROOT/Cargo.toml") before the xtask arguments so test-lsp-cancellation
runs from the repository root; update the script around exec cargo xtask
ci-hygiene test-lsp-cancellation "$@" to compute REPO_ROOT and include the
--manifest-path flag when invoking cargo.
In `@scripts/update-current-status.py`:
- Around line 5-13: The script currently exits via raise
SystemExit(subprocess.call(...)) which loses signal semantics; change the
__main__ block in update-current-status.py to perform exec-style delegation:
resolve the cargo executable path and arguments (use repo_root as computed),
build the argv list ["cargo","xtask","update-status", *sys.argv[1:]], and
replace the Python process with os.execv (or os.execve) to run cargo so Ctrl-C
and job cancellation propagate; remove the subprocess.call/SystemExit pattern
and ensure the same os.execv approach is applied to the other scripts mentioned.
In `@scripts/validate-workspace-exclusions.sh`:
- Line 3: The script currently runs `cargo xtask validate-workspace-exclusions
"$@"` relative to the caller's CWD; make it location-independent by resolving
the repo root (e.g., compute REPO_ROOT using the script path or `git rev-parse
--show-toplevel`) at the top of scripts/validate-workspace-exclusions.sh and
invoke `cargo xtask` from that directory (either with `cd "$REPO_ROOT"` before
the call or by using an explicit `-C "$REPO_ROOT"`/equivalent), ensuring the
call to `cargo xtask validate-workspace-exclusions "$@"` always runs with
REPO_ROOT as the working directory.
In `@xtask/src/main.rs`:
- Around line 812-813: The enum variant DocClaims currently has the wrong help
text ("Check invariants in features.toml"); update the doc/help string attached
to the DocClaims variant in the CLI enum (the DocClaims item) to the correct
description for the doc-claims command (e.g., "Generate documentation for
doc-claims" or another accurate short description) so that `cargo xtask --help`
shows the proper text for DocClaims.
In `@xtask/src/tasks/benchmarks.rs`:
- Around line 47-61: format_benchmarks currently constructs the script path
using project_root() but builds the JSON source as
Path::new("benchmarks/results/latest.json"), which is resolved against the
current working directory and breaks when invoked from a subdirectory; change
construction of the source to be rooted at project_root() (e.g. let source =
root.join("benchmarks").join("results").join("latest.json")) and pass its
display/string into args so run_python_script receives repo-rooted paths; apply
the same root-anchoring fix to the other helper functions referenced around the
second block (the ones building paths for benchmark scripts/results).
- Around line 501-506: The current check using Command::new("git").args(["diff",
"--quiet"]) (result assigned to the dirty variable) only detects unstaged
working-tree changes; replace it with a robust check that considers staged and
untracked files (for example, run "git diff --cached --quiet" and also check
"git ls-files --others --exclude-standard" or use "git status --porcelain" and
treat any non-empty output as dirty). Update the logic that sets dirty to run
those commands (or a single git status --porcelain invocation) and map their
outputs to a boolean so staged, unstaged, and untracked changes are all detected
when computing git_dirty.
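The single-invocation variant of that dirty check can be sketched as follows (a sketch under the assumption that a `git status --porcelain` wrapper is acceptable; `is_dirty` and `git_dirty` are hypothetical names, not the crate's):

```rust
use std::process::Command;

// Any porcelain output (staged, unstaged, or untracked entries) means dirty.
fn is_dirty(porcelain_stdout: &str) -> bool {
    !porcelain_stdout.trim().is_empty()
}

fn git_dirty() -> std::io::Result<bool> {
    // One `git status --porcelain` call covers all three change classes,
    // unlike `git diff --quiet`, which only sees unstaged tracked changes.
    let out = Command::new("git").args(["status", "--porcelain"]).output()?;
    Ok(is_dirty(&String::from_utf8_lossy(&out.stdout)))
}

fn main() {
    assert!(is_dirty(" M src/lib.rs\n?? notes.txt\n"));
    assert!(!is_dirty("\n"));
    // git_dirty() needs a repository to run against; shown for shape only.
}
```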
- Around line 391-406: The current parsing of Criterion paths misassigns
group/benchmark; change the logic that builds group and bench_name from parts:
if parts.len() == 3 (simple layout like my_benchmark/new/estimates.json) require
parts[1] == "new" and set group = "" (or None) and bench_name = parts[0]; if
parts.len() >= 4 (grouped layout like group/benchmark/{base,new}/estimates.json)
require parts[2] == "new" and set group = parts[0] and bench_name = parts[1];
otherwise skip the entry. Update the code around entry.path()/parts handling
(the variables parts, group, bench_name and the call categorize_benchmark(group,
bench_name)) so only "new" samples are processed and simple and grouped layouts
map to distinct (group,bench_name) keys to avoid overwrites.
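The path-to-key mapping described above can be sketched as pure logic (a sketch; `benchmark_key` is a hypothetical name for the corrected parsing, taking the path already split on `/`):

```rust
// Map a Criterion estimates path to a (group, benchmark) key.
// Simple layout:  my_benchmark/new/estimates.json       -> ("", "my_benchmark")
// Grouped layout: group/bench/{base,new}/estimates.json -> ("group", "bench")
// Only "new" samples are kept; "base" and unexpected shapes are skipped.
fn benchmark_key(parts: &[&str]) -> Option<(String, String)> {
    match parts.len() {
        3 if parts[1] == "new" => Some((String::new(), parts[0].to_string())),
        n if n >= 4 && parts[2] == "new" => {
            Some((parts[0].to_string(), parts[1].to_string()))
        }
        _ => None,
    }
}

fn main() {
    assert_eq!(
        benchmark_key(&["my_benchmark", "new", "estimates.json"]),
        Some((String::new(), "my_benchmark".into()))
    );
    assert_eq!(
        benchmark_key(&["group", "bench", "new", "estimates.json"]),
        Some(("group".into(), "bench".into()))
    );
    // "base" samples are skipped instead of overwriting the "new" entry.
    assert_eq!(benchmark_key(&["group", "bench", "base", "estimates.json"]), None);
}
```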
In `@xtask/src/tasks/build_timing.rs`:
- Around line 446-464: Change run_command_silently to return Option<f64> (not
f64) and only return Some(duration) when the Command::status call succeeds and
status.success() is true; on Err or non-success exits log the existing
warning (using command_to_string) and return None. Update call sites that
currently expect f64 (the build-timing-receipt code path) to skip emitting
timing receipts when run_command_silently returns None so failed builds/tests
don’t produce timing entries.
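The `Option<f64>` shape suggested above might look like this (a sketch, assuming the function times a `std::process::Command`; the exact signature in the crate may differ):

```rust
use std::process::Command;
use std::time::Instant;

// Some(elapsed seconds) only when the command both spawns and succeeds;
// spawn failures and non-success exits yield None so that callers can
// skip emitting a timing receipt for failed builds/tests.
fn run_command_silently(cmd: &mut Command) -> Option<f64> {
    let start = Instant::now();
    match cmd.status() {
        Ok(status) if status.success() => Some(start.elapsed().as_secs_f64()),
        Ok(status) => {
            eprintln!("warning: {cmd:?} exited with {status}; skipping timing");
            None
        }
        Err(err) => {
            eprintln!("warning: failed to run {cmd:?}: {err}; skipping timing");
            None
        }
    }
}

fn main() {
    let mut ok = Command::new("true");
    assert!(run_command_silently(&mut ok).is_some());
    let mut bad = Command::new("false");
    assert!(run_command_silently(&mut bad).is_none());
}
```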
- Around line 103-117: The touch_file behavior is unreliable because it opens
and writes zero bytes which may not update mtime; change touch_file (used where
touch_file(&root.join(LSP_PROVIDERS_LIB)) and touch_file(&root.join(PARSER_LIB))
are called) to create parent dirs, open or create the file, and call
file.set_modified(std::time::SystemTime::now()) (instead of a zero-byte write)
returning any errors; ensure you use std::fs::OpenOptions to create/write and
propagate errors with the existing Result/error-context pattern.
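A minimal sketch of the suggested `touch_file` (assuming `File::set_modified`, stable since Rust 1.75, is available to the crate):

```rust
use std::fs::{self, OpenOptions};
use std::path::Path;
use std::time::SystemTime;

// Create the file (and its parent directories) if needed, then bump the
// mtime explicitly; a zero-byte write is not guaranteed to update it.
fn touch_file(path: &Path) -> std::io::Result<()> {
    if let Some(parent) = path.parent() {
        fs::create_dir_all(parent)?;
    }
    let file = OpenOptions::new().create(true).append(true).open(path)?;
    file.set_modified(SystemTime::now())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("xtask-touch-demo/marker.txt");
    touch_file(&path)?;
    assert!(path.exists());
    Ok(())
}
```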
In `@xtask/src/tasks/check_toolchain.rs`:
- Around line 37-39: The branch that currently returns Ok(()) when
required_parts.is_empty() should instead fail fast so CI catches a bad or
unparsable rust-toolchain.toml; in the required_parts.is_empty() branch (in the
check_toolchain logic) replace the silent success with a failure return (e.g.,
return an Err or use bail!/anyhow! with a clear message) so the check-toolchain
command exits non-zero when the pinned toolchain cannot be interpreted.
In `@xtask/src/tasks/check_version_sync.rs`:
- Around line 17-30: The branch that builds the Command for running the local
binary and the branch that runs `cargo` don't set a working directory, which can
lead to inconsistent behavior; update the Command construction (the variable
`cmd` created in both branches) to call `.current_dir(&root)` so both the local
binary path branch and the `Command::new("cargo")` / `cmd.args([...])` branch
run with the repository root (`root`) as the working directory.
In `@xtask/src/tasks/ci_audit_workflows.rs`:
- Around line 124-150: Convert the three tests (pr_trigger_matches_map_form,
pr_trigger_matches_list_form, pr_trigger_rejects_non_pr_trigger) to the
Result<()> test pattern: change their signatures to return Result<(), Box<dyn
std::error::Error>> (or just Result<()> if project alias exists), replace
serde_yaml_ng::from_str(...).expect("valid yaml") with
perl_tdd_support::must(...) (or must_some equivalent) to assert parsing without
panicking, keep the assert!(has_pr_trigger(&workflow)) checks as-is, add any
needed use/import for perl_tdd_support::must at top of the test module, and end
each test with Ok(()).
In `@xtask/src/tasks/ci_hygiene.rs`:
- Around line 12-48: The duplicate local_binary_path and CI_HYGIENE_PACKAGE
definitions should be moved to a shared utility so both run functions can reuse
them: create a new public helper (e.g., crate::utils::local_binary_path(root:
&PathBuf) -> PathBuf and pub const CI_HYGIENE_PACKAGE: &str) that implements the
same path construction and Windows extension logic, replace the local
definitions in xtask/src/tasks/ci_hygiene.rs and in forbid_fatal_constructs.rs
to use crate::utils::{local_binary_path, CI_HYGIENE_PACKAGE}, and remove the
duplicated functions/constants from those modules; update any use/imports and
visibility as needed so run (and the forbids module) compile against the
centralized helpers.
- Around line 42-48: Change the parameter type of local_binary_path from
&PathBuf to &Path to be less restrictive: update the function signature fn
local_binary_path(root: &Path) -> PathBuf and any internal uses (keep
root.join(...) working since Path has join); ensure callers passing &PathBuf
continue to compile (they can pass &pathbuf) and add or adjust a use
std::path::Path import if necessary.
In `@xtask/src/tasks/ci_metrics.rs`:
- Around line 652-659: The percentile function uses the nearest-rank algorithm
which can yield surprising results for small samples (e.g., P95 equal to median
for n=2–3); update the documentation for the percentile function (fn percentile)
by adding a doc comment explaining that it implements the nearest-rank method,
describing the index calculation ((len-1) * p/100, floored), and noting the
caveat for small sample sizes so callers understand the behavior and
limitations.
- Around line 12-14: The three hardcoded budget constants (COST_PER_MINUTE,
MONTHLY_BUDGET_TARGET, ANNUAL_BUDGET_TARGET) should be made configurable;
replace these consts with values loaded from configuration or CLI flags (e.g.,
via clap or a small config struct) while preserving the current values as
defaults, add parsing/validation where the task entry point (the function that
uses these constants) constructs the config, and propagate the config fields
instead of referencing the const names directly so per-repo overrides are
possible.
- Line 571: Replace the println! call that currently uses an explicit empty
string argument (the line containing println!("");) with the zero-argument macro
form println!() to print a blank line; update the occurrence in
xtask::tasks::ci_metrics (the println invocation on the reported line)
accordingly.
- Around line 630-638: The read_timestamp function prematurely returns None due
to the `?` on run.get(...).and_then(Value::as_str), which aborts the loop if the
first key is missing; change the loop to explicitly check the Option (e.g., use
if let Some(value) = run.get(*key).and_then(Value::as_str) or match) so you only
attempt to parse when a string exists and continue iterating over remaining keys
otherwise, and keep the existing DateTime::parse_from_rfc3339(...) handling and
final None return behavior.
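The key-fallback loop being described can be sketched with the run modeled as a plain map (a sketch; the real code walks a JSON `Value`, and the timestamp parsing is elided here):

```rust
use std::collections::BTreeMap;

// Try each candidate key in order; a missing or non-string value should move
// on to the next key rather than abort the whole lookup, as `?` would.
fn read_timestamp<'a>(
    run: &BTreeMap<&str, Option<&'a str>>,
    keys: &[&str],
) -> Option<&'a str> {
    for key in keys {
        if let Some(Some(value)) = run.get(key) {
            return Some(*value);
        }
    }
    None
}

fn main() {
    let mut run = BTreeMap::new();
    run.insert("run_started_at", None); // present but null in the JSON
    run.insert("created_at", Some("2024-05-01T12:00:00Z"));
    assert_eq!(
        read_timestamp(&run, &["run_started_at", "created_at"]),
        Some("2024-05-01T12:00:00Z")
    );
}
```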
In `@xtask/src/tasks/ci_policy.rs`:
- Around line 40-41: The Regex construction for disallow_re and allowed_re
currently uses Regex::new(...)? which intentionally fails fast on an invalid
constant pattern; add a brief comment above the two lines (referencing
disallow_re and allowed_re / Regex::new(FROM_RAW_PATTERN) and
Regex::new(ALLOWED_FROM_RAW_PATTERN)) stating that these patterns are
compile-time constants and an invalid pattern is a developer bug so early-return
with `?` (or panic) is deliberate; alternatively, if you prefer graceful
degradation, change to Regex::new(...).ok() and update callers to handle
Option<Regex>.
- Around line 30-35: The current check on output.status.code() ignores the case
where the process was terminated by a signal (code() == None); update the logic
in the block that handles git grep to treat None as an error: when
output.status.code().is_none() bail with a clear message that the process was
terminated by a signal and include output.status and the stderr (the existing
stderr variable) to aid debugging; keep the existing behavior for code == 1
(treat as non-error) and code > 1 (bail with exit code and stderr).
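The exit-code handling including the signal case can be sketched as (a sketch; `classify_grep_exit` is a hypothetical helper name):

```rust
use std::process::{Command, ExitStatus};

// git grep's exit codes: 0 = matches found, 1 = no matches (not an error),
// >1 = real failure. code() == None means the process was killed by a
// signal, which must be surfaced rather than silently treated as success.
fn classify_grep_exit(status: ExitStatus) -> Result<bool, String> {
    match status.code() {
        Some(0) => Ok(true),
        Some(1) => Ok(false),
        Some(n) => Err(format!("git grep failed with exit code {n}")),
        None => Err(format!("git grep terminated by a signal ({status})")),
    }
}

fn main() {
    let ok = Command::new("true").status().expect("spawn true");
    let no_match = Command::new("false").status().expect("spawn false");
    assert_eq!(classify_grep_exit(ok), Ok(true));
    assert_eq!(classify_grep_exit(no_match), Ok(false));
}
```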
In `@xtask/src/tasks/devex_doctor.rs`:
- Around line 11-13: The run() function currently uses env::current_dir() only
for display and then launches a bash subprocess to check "rust-toolchain.toml",
which is fragile and Unix-only; replace that behavior by resolving paths with
project_root() (from utils.rs) and calling the existing
check_toolchain::run(true) directly instead of spawning bash. Concretely: remove
the env::current_dir() usage for toolchain resolution, call project_root() to
build an absolute path to "rust-toolchain.toml" when needed for
messages/validation, and replace the bash subprocess block (the logic around the
current shell spawn) with a direct call to check_toolchain::run(true), ensuring
the toolchain check runs portably and deterministically.
In `@xtask/src/tasks/doc_claims.rs`:
- Around line 9-30: STALE_PATTERNS is hardcoded; move these tuples into an
external data file (JSON/TOML or plain-text like
docs/project/PUBLICATION_FACTS_LEDGER.md) and replace the const with a
runtime-loaded Vec by adding a loader function (e.g., fn load_stale_patterns()
-> Result<Vec<(String,String,String)>, Error>) and call it from the code that
currently references STALE_PATTERNS; implement parsing (use
serde_json/serde_toml for structured files or a simple line parser for plain
text) and provide a cached static like once_cell::sync::Lazy or lazy_static!
static ref STALE_PATTERNS: Vec<...> = load_stale_patterns().expect("...") so
callers (the existing code that uses STALE_PATTERNS) keep the same API while the
values are maintainable externally.
In `@xtask/src/tasks/forbid_fatal_constructs.rs`:
- Around line 14-20: Change the local_binary_path signature to accept &Path
instead of &PathBuf: update fn local_binary_path(root: &Path) -> PathBuf and use
root.join(...) as before; also update any callers passing &PathBuf to pass
root.as_path() or take a &Path directly so types align. Ensure
CI_HYGIENE_PACKAGE and the cfg!(windows) extension logic remain unchanged and
that imports include std::path::Path if not already present.
In `@xtask/src/tasks/forensics.rs`:
- Around line 45-56: The run_script function currently doesn't set the working
directory, so scripts that expect the repository root fail; modify run_script to
accept a root: &Path parameter and call .current_dir(root) on the Command (in
the function run_script) and then update all callers (the public functions that
invoke run_script) to pass &root as the first argument so the forensics scripts
run from the repository root.
In `@xtask/src/tasks/github.rs`:
- Around line 26-41: The run_script function currently adds arguments in a for
loop; replace that loop with a single .args(args) call on the Command to
simplify argument handling (i.e. keep Command::new("bash").arg(script) but
change the manual loop to .args(args)), preserving the existing status check,
error context on command.status(), and the bail! on non-success so behavior
remains identical.
In `@xtask/src/tasks/hardening.rs`:
- Around line 27-38: run_script currently spawns Command::new("bash") without
setting a working directory, so scripts that rely on repo-root-relative paths
can fail; modify the run_script signature to accept a root: &Path (or pass root
through from the callers that compute it) and call .current_dir(root) on the
Command before .status(), keeping the existing error handling (the function name
run_script and the Command::new("bash") invocation are the places to update).
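The rooted `run_script` shape that several of these comments ask for (here and in `forensics.rs`/`github.rs`) can be sketched as follows; this is a sketch, and the exact signature in the crate may differ. It also uses `.args(args)` in place of a manual per-argument loop:

```rust
use std::path::Path;
use std::process::Command;

// Anchor the child at the repository root so scripts that use repo-relative
// paths behave identically regardless of the caller's CWD.
fn run_script(root: &Path, script: &str, args: &[&str]) -> std::io::Result<bool> {
    let status = Command::new("bash")
        .current_dir(root)
        .arg(script)
        .args(args)
        .status()?;
    Ok(status.success())
}

fn main() {
    // Exercise the wrapper with an inline command instead of a real script:
    // with current_dir set to "/", $(pwd) inside the child must be "/".
    let ok = run_script(Path::new("/"), "-c", &["test \"$(pwd)\" = /"]).unwrap();
    assert!(ok);
}
```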
In `@xtask/src/tasks/hook_checks.rs`:
- Line 114: Change the Regex::new(...) call that initializes ts_re to return an
Option by using Regex::new(r#""ts":"[0-9]{4}-"#).ok() and store it in a variable
like ts_re: Option<Regex>; then update any subsequent timestamp assertion logic
(where ts_re is used) to first check if ts_re.is_some() or match on ts_re and
skip the timestamp assertion when ts_re is None so a failed regex compilation
does not abort the hook test command.
- Around line 265-290: The run_script function currently calls
child.wait_with_output() which can block indefinitely; change it to use a
deadline: call wait_timeout (from the wait-timeout crate) or a similar timeout
API on the spawned child with a configurable Duration, and if it returns None
(timed out) kill the child (child.kill()), wait for termination, then call
child.wait_with_output() to collect stdout/stderr; include the captured
stdout/stderr text in the timeout error context so the caller sees output from
the timed-out hooks (e.g., task-completed.sh, subagent-stop.sh, pre-tool-use.sh)
and surface a clear timeout error instead of hanging.
In `@xtask/src/tasks/inject_sha_assets.rs`:
- Around line 27-32: The release tasks are using inconsistent artifact suffixes
and version formats (e.g., MAC_ARM, LIN_ARM, LIN_X64, WIN_* constants in
inject_sha_assets.rs and update_homebrew.rs stripping the leading 'v'
differently), causing checksum lookups to fail; extract the artifact suffix
selection and version normalization into a shared helper (e.g., new function
normalize_version_and_suffix(version: &str, target: &str) or two helpers
normalize_version and artifact_suffix_for_target) and replace the ad-hoc logic
in inject_sha_assets.rs (places that build filenames around config.version at
lines referenced as 85 and 200-201) and update_homebrew.rs (the code that looks
up unknown-linux-gnu and strips the leading 'v' at its line ~140) to call the
shared helper so both tasks use the same suffix mapping and
stripped/non-stripped version consistently.
In `@xtask/src/tasks/populate_book.rs`:
- Around line 12-16: The call to Command::arg currently does
.arg(script.clone()), which needlessly clones the PathBuf; change it to pass a
reference (e.g. .arg(&script) or .arg(script.as_os_str())) so Command::arg
receives an impl AsRef<OsStr> without allocating; update the invocation around
Command::new("bash").current_dir(&root).arg(...) that produces status to remove
.clone().
In `@xtask/src/tasks/publish.rs`:
- Around line 210-227: Wrap the Command::new("gh")...status()? call with
anyhow::Context to provide a descriptive error when the command fails to
spawn/execute (e.g., replace the trailing .status()? with
.status().context(format!("failed to run `gh workflow run Publish to crates.io`
for version {} on ref {}", version, ref_name))?); also update the subsequent
bail! to include version and ref_name (e.g., change bail!("publish-release
workflow dispatch failed"); to bail!("publish-release workflow dispatch failed
for version {} on ref {}", version, ref_name)) so both execution errors and
non-success statuses carry the version/ref context.
- Around line 233-248: The call to Command::new(...).status() in
smoke_test_release lacks contextual error information; wrap the status() call
with anyhow::Context (e.g., .with_context or .context) to add a clear message
including the script and version (referencing smoke_test_release and the
Command::new(...) invocation) so failures show "running smoke-test-release.sh
with version <version>" (also add use anyhow::Context if not already imported).
In `@xtask/src/tasks/swarm_summary.rs`:
- Around line 47-48: Replace the explicit zero initialization with the default
initializer: change both uses of .or_insert(0) on the HashMap entries (the calls
on by_event.entry(event.clone().into_owned()) and
by_agent.entry(agent_type.clone().into_owned())) to .or_default() so the counts
are initialized via Default instead of hardcoding 0.
In `@xtask/src/tasks/update_homebrew.rs`:
- Around line 217-224: The Homebrew test currently pipes raw JSON to perl-lsp
via the input variable and pipe_output (see the input string and the
pipe_output("#{{bin}}/perl-lsp --stdio", input, 0) call), which violates the LSP
stdio framing; update the test to either remove the stdio initialization test
and keep only the --version assertion, or frame the request properly by
prefixing the JSON body with a Content-Length header and a CRLFCRLF separator
(e.g. construct a string like "Content-Length: <byte_len>\r\n\r\n<json>" and
pass that to pipe_output) so perl-lsp --stdio receives a correctly framed LSP
initialize request.
In `@xtask/src/tasks/validate_workspace_exclusions.rs`:
- Around line 169-232: The current regex-based check in
check_member_dependencies / has_excluded_dependency_reference misses table-style
and renamed dependencies; replace the line-oriented grep with proper TOML
parsing: parse each manifest_content into a toml::Value (or use cargo_toml),
then inspect standard dependency tables ("dependencies", "dev-dependencies",
"build-dependencies") and nested "target" tables, checking both dependency keys
(e.g., "foo" from [dependencies.foo]) and any "package" renames (e.g., { package
= "tree-sitter-perl-c" }) against the excluded set returned by excluded_set();
update has_excluded_dependency_reference to return true when either the
dependency key or its package field matches an excluded crate. Ensure
check_member_dependencies still reads manifests and uses the updated predicate.
- Around line 98-106: The current check only looks at dependency keys in
workspace_dependencies and misses aliased entries where the actual crate appears
under a "package" field (e.g., alias = { package = "tree-sitter-perl-c", ... }).
Update the test that uses workspace_dependencies and excluded_set() to consider
both the key and each dependency value's "package" field: iterate the
workspace_dependencies table entries and for each (key, value) treat it as
excluded if key matches excluded_set() OR if value is a table and
value.get("package").and_then(Value::as_str()) matches an excluded name; then
bail if any match is found. Ensure you reference workspace_dependencies,
excluded_set(), and Value::as_table when locating the code to change.
In `@xtask/src/tasks/worktrees.rs`:
- Line 4: The import list includes an unused symbol `eyre` (the line importing
color_eyre::eyre::{Result, bail, eyre}); remove `eyre` from that use declaration
so only `Result` and `bail` are imported, or if you intended to use the `eyre!`
macro replace its usage accordingly—update the import in the module containing
the worktree helpers (the use statement referencing color_eyre::eyre) to
eliminate the unused `eyre` symbol.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 87465b52-e8be-4ca8-9156-1ae542fe7c94
📒 Files selected for processing (85)
- .ci/scripts/README.md
- .ci/scripts/measure-ci-baseline.sh
- CONTRIBUTING.md
- Cargo.toml
- docs/CHANGELOG_WORKFLOW.md
- docs/PUBLISHING.md
- docs/RELEASE_PROCESS.md
- docs/handoff/swarm-pack/commands/green-merge.md
- docs/project/CI_COST_TRACKING.md
- docs/project/CI_TEST_LANES.md
- docs/project/PUBLISHING_ROADMAP.md
- docs/project/RELEASE_CHECKLIST.md
- docs/project/XTASK_MIGRATION.md
- justfile
- scripts/DEPRECATED_RELEASE_SCRIPTS.md
- scripts/build-timing-receipt.sh
- scripts/check-doc-claims.py
- scripts/check-version-sync.sh
- scripts/check_features_invariants.py
- scripts/ci-audit-workflows.py
- scripts/ci-cost-monitor.sh
- scripts/cleanup-worktrees.sh
- scripts/compare-build-timing.sh
- scripts/dead-code-check.sh
- scripts/debt-pr-summary.py
- scripts/debt-report.py
- scripts/devex-doctor.sh
- scripts/devex-targeted-checks.sh
- scripts/e2e-gate.sh
- scripts/forbid-fatal-constructs.sh
- scripts/generate-badges.sh
- scripts/ignored-test-count.sh
- scripts/install-githooks.sh
- scripts/preflight.sh
- scripts/prep-crates-io-launch.sh
- scripts/prepare-release.sh
- scripts/publish-receipts.sh
- scripts/publish-release.sh
- scripts/quick-receipts.sh
- scripts/run_parser_comparison.sh
- scripts/smoke-test-release.sh
- scripts/swarm-summary.sh
- scripts/test-capped.sh
- scripts/test-e2e-capped.sh
- scripts/test-lsp-cancellation.sh
- scripts/test_edge_cases.sh
- scripts/test_iterative_parser.sh
- scripts/update-current-status.py
- scripts/update-homebrew.sh
- scripts/update-parser-matrix.py
- scripts/validate-workspace-exclusions.sh
- scripts/verify_stacker.sh
- xtask/src/main.rs
- xtask/src/tasks/benchmarks.rs
- xtask/src/tasks/build_timing.rs
- xtask/src/tasks/check_toolchain.rs
- xtask/src/tasks/check_version_sync.rs
- xtask/src/tasks/ci_audit_workflows.rs
- xtask/src/tasks/ci_hygiene.rs
- xtask/src/tasks/ci_measure.rs
- xtask/src/tasks/ci_metrics.rs
- xtask/src/tasks/ci_policy.rs
- xtask/src/tasks/corpus_audit.rs
- xtask/src/tasks/debt_report.rs
- xtask/src/tasks/devex_doctor.rs
- xtask/src/tasks/doc_claims.rs
- xtask/src/tasks/features.rs
- xtask/src/tasks/forbid_fatal_constructs.rs
- xtask/src/tasks/forensics.rs
- xtask/src/tasks/github.rs
- xtask/src/tasks/hardening.rs
- xtask/src/tasks/hook_checks.rs
- xtask/src/tasks/inject_sha_assets.rs
- xtask/src/tasks/mod.rs
- xtask/src/tasks/parser_matrix.rs
- xtask/src/tasks/populate_book.rs
- xtask/src/tasks/prep_crates_io_launch.rs
- xtask/src/tasks/publication_facts.rs
- xtask/src/tasks/publish.rs
- xtask/src/tasks/publish_receipts.rs
- xtask/src/tasks/release_turnkey.rs
- xtask/src/tasks/swarm_summary.rs
- xtask/src/tasks/update_homebrew.rs
- xtask/src/tasks/validate_workspace_exclusions.rs
- xtask/src/tasks/worktrees.rs
💤 Files with no reviewable changes (1)
- Cargo.toml
````diff
 **Prerequisites:**
 - [GitHub CLI (gh)](https://cli.github.com/) - installed and authenticated
-- [jq](https://stedolan.github.io/jq/) - for JSON processing
+- Rust toolchain - for the workspace xtask binary
+- `cargo` - comes with Rust, used to run `cargo xtask`

 **Installation of prerequisites:**

 ```bash
 # macOS
-brew install gh jq
+brew install gh

 # Ubuntu/Debian
-sudo apt install gh jq
+sudo apt install gh

-# Then authenticate
+# Authenticate once for CLI queries
 gh auth login
 ```
````
Document how to install Rust now that it is required.
The prerequisites now include Rust/cargo, but the installation section still only shows how to install gh. Add a rustup link or a short Rust install step so this page stays runnable end to end for first-time contributors.
🧰 Tools
🪛 LanguageTool
[typographical] ~23-~23: To join two clauses or introduce examples, consider using an em dash.
Context: ...itHub CLI (gh)](https://cli.github.com/) - installed and authenticated - Rust toolc...
(DASH_RULE)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.ci/scripts/README.md around lines 22 - 38, The "Prerequisites" and
"Installation of prerequisites" sections currently list "Rust toolchain - for
the workspace xtask binary" and "`cargo`" but provide no install steps; update
those sections to add a short Rust install instruction and link to rustup (e.g.,
mention "Install Rust via rustup: https://rustup.rs" and include
platform-specific commands such as running the rustup installer and setting the
default toolchain), and note that `cargo` is installed with rustup so first-time
contributors can run `cargo xtask` without extra setup — update the text near
the "Rust toolchain - for the workspace xtask binary" and the "Installation of
prerequisites" block to reflect this.
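A doctor-style probe makes the documentation gap concrete; this is a hypothetical sketch (the tool list and messages are illustrative, not text from the README):

```shell
# Hypothetical prerequisite check mirroring the README's list: gh for the
# CLI queries, cargo (installed via rustup, https://rustup.rs) for xtask.
check_tools() {
  for tool in gh cargo; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool (gh via brew/apt; cargo via rustup)"
    fi
  done
}
report="$(check_tools)"
```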
**Usage (wrapper shim):**

```bash
# Run the shim script (delegates to `cargo xtask ci-baseline`)
./measure-ci-baseline.sh
```
Make the shim invocation explicit from the repo root.
`./measure-ci-baseline.sh` only works after `cd .ci/scripts`, while the surrounding examples read like repo-root commands. Use `./.ci/scripts/measure-ci-baseline.sh` or state the required working directory explicitly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.ci/scripts/README.md around lines 62 - 67, Update the README usage example
so the shim is invoked explicitly from the repository root: replace the
ambiguous "./measure-ci-baseline.sh" with "./.ci/scripts/measure-ci-baseline.sh"
(or alternatively add a short note stating "run from .ci/scripts" before the
example). Ensure the example in the "Usage (wrapper shim)" section shows the
explicit path "./.ci/scripts/measure-ci-baseline.sh" so readers can run the
command from the repo root.
```bash
pr=<number>
gh pr merge "$pr" --squash \
  --subject "feat(lsp): ... " \
  --body "PR summary:
- ...
```
❓ Verification inconclusive
Script executed:

```bash
#!/bin/bash
# Description: Extract the documented merge example and syntax-check it.
# Expected: `bash -n` should exit 0 once the unterminated quote is fixed.
tmp="$(mktemp)"
{
  printf '#!/usr/bin/env bash\n'
  awk '
    /pr=<number>/ {capture=1}
    capture && /^```/ {capture=0; exit}
    capture {print}
  ' CONTRIBUTING.md
} > "$tmp"
printf 'Snippet under test:\n'
cat "$tmp"
printf '\nSyntax check:\n'
bash -n "$tmp"
status=$?
rm -f "$tmp"
exit "$status"
```

Repository: EffortlessMetrics/perl-lsp
Exit code: 2
stdout:

```text
Snippet under test:
#!/usr/bin/env bash
pr=<number>
gh pr merge "$pr" --squash \
  --subject "feat(lsp): ... " \
  --body "PR summary:
- ...

Syntax check:
```

stderr:

```text
/tmp/tmp.a0pQbMbIjw: line 2: syntax error near unexpected token `newline'
/tmp/tmp.a0pQbMbIjw: line 2: `pr=<number>'
```
Script executed:

```bash
sed -n '100,115p' CONTRIBUTING.md
```

Repository: EffortlessMetrics/perl-lsp
Exit code: 0
stdout:
Once checks are green and reviews are complete, use an explicit conventional commit
subject when squashing so history is release-friendly and changelog-friendly.
```bash
pr=<number>
gh pr merge "$pr" --squash \
--subject "feat(lsp): ... " \
--body "PR summary:
- ...
```
Conventional subject format:
- `feat(scope): imperative summary`
- `fix(scope): imperative summary`
- `chore(scope): imperative summary`
Close the --body quote and quote the placeholder <number>.
The example has two syntax issues that prevent it from being copy-pastable as Bash:

- Line 104: `pr=<number>` uses unquoted `<` and `>`, which Bash interprets as redirection operators.
- Line 107: `--body "PR summary:` opens a double-quoted string that is never closed before the code fence ends.
📝 Suggested fix

```diff
-pr=<number>
+pr="<number>"
 gh pr merge "$pr" --squash \
   --subject "feat(lsp): ... " \
   --body "PR summary:
-- ...
+- ..."
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CONTRIBUTING.md` around lines 103 - 109, Update the Bash example so the
placeholder and the multi-line body string are properly quoted: change the pr
assignment to quote the placeholder (e.g., pr="<number>") to avoid shell
redirection, and close the open double-quote for the --body value passed to gh
pr merge (ensure the multi-line body string is terminated before the code fence
ends); adjust the example around the pr variable and the gh pr merge --body
argument to be a valid, copy-pastable Bash snippet.
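Once the quoting is fixed, the snippet becomes a value you can build and inspect before running; a sketch with placeholder values (all values here are hypothetical examples):

```shell
# Hypothetical corrected shape: the PR number is quoted and the multi-line
# body string is closed. Nothing is executed; the command is only assembled.
pr="1234"
merge_subject='feat(lsp): imperative summary'
merge_body='PR summary:
- ...'
merge_cmd="gh pr merge $pr --squash --subject '$merge_subject' --body '$merge_body'"
```

Echoing `$merge_cmd` shows a copy-pastable command with the body quote properly terminated.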
```diff
 # Recommended: run both workflow steps through gh automation.
 # Canonical command for RC orchestration:
-./scripts/release-turnkey-pr.sh <0.x.y>
+cargo xtask release-turnkey <0.x.y>
```

### Alternative Manual Release Process

```diff
 # 1. Generate changelog content
 # Use the same canonical flow entrypoint:
-./scripts/release-turnkey-pr.sh <0.x.y> --no-auto-merge --no-wait-release
+cargo xtask release-turnkey <0.x.y> --no-auto-merge --no-wait-release
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Check file size and read the specific lines mentioned
wc -l docs/CHANGELOG_WORKFLOW.md
```

Repository: EffortlessMetrics/perl-lsp
Length of output: 100
🏁 Script executed:
```shell
# Read the exact lines mentioned in the review (182-194)
sed -n '182,194p' docs/CHANGELOG_WORKFLOW.md
```

Repository: EffortlessMetrics/perl-lsp
Length of output: 417
🏁 Script executed:
```shell
# Search for all occurrences of <0.x.y> pattern in the file to assess scope
rg '<0\.x\.y>' docs/CHANGELOG_WORKFLOW.md -n
```

Repository: EffortlessMetrics/perl-lsp
Length of output: 534
Quote the version placeholder in these Bash commands.
The <0.x.y> syntax is interpreted by Bash as input redirection. Users copying these commands literally will encounter "No such file or directory" errors. Use quotes around the placeholder instead.
Suggested fix
```diff
-cargo xtask release-turnkey <0.x.y>
+cargo xtask release-turnkey "0.x.y"
@@
-cargo xtask release-turnkey <0.x.y> --no-auto-merge --no-wait-release
+cargo xtask release-turnkey "0.x.y" --no-auto-merge --no-wait-release
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/CHANGELOG_WORKFLOW.md` around lines 182 - 194, The bash examples use an
unquoted version placeholder that Bash treats as input redirection; update both
occurrences of the command string "cargo xtask release-turnkey <0.x.y>" in the
docs so the placeholder is quoted (e.g., wrap the <0.x.y> token in single or
double quotes) in the canonical command and the alternative manual release
example to prevent shell redirection errors.
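The failure mode is easy to reproduce: unquoted angle brackets parse as redirections, while a quoted placeholder stays literal. A minimal sketch:

```shell
# Single quotes keep < and > literal; a bare `pr=<number>` would instead be
# parsed as `pr=` plus an input redirection from a file named "number".
placeholder='<0.x.y>'
cmd="cargo xtask release-turnkey $placeholder"
```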
```diff
 | Category | Count | Total Lines |
 |----------|------:|------------:|
-| **Replaced** by xtask | 29 | ~3,800 |
-| **Partially** replaced | 8 | ~1,600 |
-| **Not Started** | 31 | ~9,500 |
-| **Keep** as shell | 24 | ~1,400 |
-| **Total scripts** | 92 | ~16,300 |
+| **Replaced** by xtask | 83 | ~14,150 |
+| **Partially** replaced | 4 | ~689 |
+| **Not Started** | 2 | ~1,128 |
+| **Keep** as shell | 12 | ~1,367 |
+| **Total scripts** | 101 | ~16,334 |
```
Recompute the migration totals before landing this doc.
The summary now says there are 2 “Not Started” scripts, but this file only lists one remaining item (scripts/forensics/lib_gh.sh). These hardcoded totals will drift quickly; please regenerate them in this change or replace them with non-numeric wording.
As per coding guidelines, "docs/**/*.md: Do not hardcode workspace counts, release numbers, or performance metrics in new documentation unless you regenerated or verified them in the same change."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/project/XTASK_MIGRATION.md` around lines 223 - 229, The migration totals
table in XTASK_MIGRATION.md is out of sync (the "Not Started" count shows 2 but
only scripts/forensics/lib_gh.sh is listed); update the hardcoded counts (e.g.,
the "Not Started", "Total scripts", and "Total Lines" row values) to match the
regenerated totals for the current file list OR replace the numeric totals with
non-numeric wording. Locate the table entries shown (Category/Count/Total Lines)
and either run the same script you used to produce these metrics to recompute
and replace the numbers, or change the wording to avoid hardcoded metrics.
```rust
let status = Command::new("gh")
    .current_dir(&root)
    .args([
        "workflow",
        "run",
        "Publish to crates.io",
        "--ref",
        &ref_name,
        "-f",
        &format!("version={version}"),
        "-f",
        &format!("dry_run={dry_run}"),
    ])
    .status()?;

if !status.success() {
    bail!("publish-release workflow dispatch failed");
}
```
🧹 Nitpick | 🔵 Trivial
Add error context for consistency and improve error message.
The Command::new("gh")...status()? call lacks .context() wrapping, unlike similar patterns in other xtask modules (e.g., ci_hygiene.rs). Additionally, the error message on failure doesn't include the version or ref that failed.
♻️ Proposed fix
```diff
 let status = Command::new("gh")
     .current_dir(&root)
     .args([
         "workflow",
         "run",
         "Publish to crates.io",
         "--ref",
         &ref_name,
         "-f",
         &format!("version={version}"),
         "-f",
         &format!("dry_run={dry_run}"),
     ])
-    .status()?;
+    .status()
+    .context("Failed to dispatch workflow via gh CLI")?;

 if !status.success() {
-    bail!("publish-release workflow dispatch failed");
+    bail!("publish-release workflow dispatch failed for version {version} on ref {ref_name}");
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let status = Command::new("gh")
    .current_dir(&root)
    .args([
        "workflow",
        "run",
        "Publish to crates.io",
        "--ref",
        &ref_name,
        "-f",
        &format!("version={version}"),
        "-f",
        &format!("dry_run={dry_run}"),
    ])
    .status()
    .context("Failed to dispatch workflow via gh CLI")?;

if !status.success() {
    bail!("publish-release workflow dispatch failed for version {version} on ref {ref_name}");
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/publish.rs` around lines 210 - 227, Wrap the
Command::new("gh")...status()? call with anyhow::Context to provide a
descriptive error when the command fails to spawn/execute (e.g., replace the
trailing .status()? with .status().context(format!("failed to run `gh workflow
run Publish to crates.io` for version {} on ref {}", version, ref_name))?); also
update the subsequent bail! to include version and ref_name (e.g., change
bail!("publish-release workflow dispatch failed"); to bail!("publish-release
workflow dispatch failed for version {} on ref {}", version, ref_name)) so both
execution errors and non-success statuses carry the version/ref context.
```rust
pub fn smoke_test_release(version: String) -> Result<()> {
    let root = project_root()?;
    let script = root.join("scripts").join("smoke-test-release.sh");
    let status = Command::new("bash")
        .arg(script)
        .arg(version)
        .current_dir(&root)
        .env("XTASK_SMOKE_TEST_RELEASE", "1")
        .status()?;

    if !status.success() {
        bail!("smoke-test-release failed");
    }

    Ok(())
}
```
🧹 Nitpick | 🔵 Trivial
Add error context for the bash command.
For consistency with other xtask modules and better error diagnostics, wrap the status() call with context.
♻️ Proposed fix

```diff
+use color_eyre::eyre::{Context, Result, bail, eyre};
+
 pub fn smoke_test_release(version: String) -> Result<()> {
     let root = project_root()?;
     let script = root.join("scripts").join("smoke-test-release.sh");
     let status = Command::new("bash")
         .arg(script)
         .arg(version)
         .current_dir(&root)
         .env("XTASK_SMOKE_TEST_RELEASE", "1")
-        .status()?;
+        .status()
+        .context("Failed to execute smoke-test-release.sh")?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/publish.rs` around lines 233 - 248, The call to
Command::new(...).status() in smoke_test_release lacks contextual error
information; wrap the status() call with anyhow::Context (e.g., .with_context or
.context) to add a clear message including the script and version (referencing
smoke_test_release and the Command::new(...) invocation) so failures show
"running smoke-test-release.sh with version <version>" (also add use
anyhow::Context if not already imported).
```rust
*by_event.entry(event.clone().into_owned()).or_insert(0) += 1;
*by_agent.entry(agent_type.clone().into_owned()).or_insert(0) += 1;
```
🧹 Nitpick | 🔵 Trivial
Use or_default() instead of or_insert(0).
Per coding guidelines, prefer or_default() for default initialization of numeric types.
♻️ Proposed fix
```diff
-*by_event.entry(event.clone().into_owned()).or_insert(0) += 1;
-*by_agent.entry(agent_type.clone().into_owned()).or_insert(0) += 1;
+*by_event.entry(event.clone().into_owned()).or_default() += 1;
+*by_agent.entry(agent_type.clone().into_owned()).or_default() += 1;
```

As per coding guidelines: "Use or_default() instead of or_insert_with(Vec::new) for default initialization"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
*by_event.entry(event.clone().into_owned()).or_default() += 1;
*by_agent.entry(agent_type.clone().into_owned()).or_default() += 1;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/swarm_summary.rs` around lines 47 - 48, Replace the explicit
zero initialization with the default initializer: change both uses of
.or_insert(0) on the HashMap entries (the calls on
by_event.entry(event.clone().into_owned()) and
by_agent.entry(agent_type.clone().into_owned())) to .or_default() so the counts
are initialized via Default instead of hardcoding 0.
```rust
fn check_member_dependencies(root: &Path) -> Result<()> {
    println!("✓ Checking for dependencies on excluded crates...");

    let excluded = excluded_set();
    let crate_pattern = format!(
        r"(?m)^\s*({})\s*=",
        excluded.iter().map(|entry| regex::escape(entry)).collect::<Vec<_>>().join("|")
    );
    let exclusion_re = Regex::new(&crate_pattern).context("Failed to compile dependency regex")?;

    let mut offenders = Vec::new();
    let crates_dir = root.join("crates");
    for entry in fs::read_dir(&crates_dir).context("Unable to list crates directory")? {
        let entry = entry?;
        if !entry.file_type()?.is_dir() {
            continue;
        }

        let manifest = entry.path().join("Cargo.toml");
        if !manifest.exists() {
            continue;
        }

        let crate_name =
            entry.file_name().into_string().unwrap_or_else(|_| String::from("<invalid>"));
        let content = fs::read_to_string(&manifest)
            .with_context(|| format!("Failed to read {}", manifest.display()))?;

        if has_excluded_dependency_reference(&content, &exclusion_re, &excluded) {
            offenders.push(crate_name);
        }
    }

    if !offenders.is_empty() {
        bail!("❌ ERROR: Dependencies on excluded crates found in: {}", offenders.join(", "));
    }

    println!(" No workspace members depend on excluded crates");
    println!();
    Ok(())
}

fn has_excluded_dependency_reference(
    manifest_content: &str,
    pattern: &Regex,
    excluded: &HashSet<&str>,
) -> bool {
    for line in manifest_content.lines() {
        let trimmed = line.trim();
        if trimmed.is_empty() || trimmed.starts_with('#') {
            continue;
        }

        if pattern.is_match(trimmed) {
            if let Some(capture) = pattern.captures(trimmed).and_then(|c| c.get(1)) {
                if excluded.contains(capture.as_str()) {
                    return true;
                }
            }
        }
    }

    false
}
```
Parse member manifests instead of grepping dependency lines.
The regex scan misses valid Cargo forms such as renamed deps (foo = { package = "tree-sitter-perl-c" }) and table-style declarations ([dependencies.foo]). That leaves false negatives in the invariant this task is supposed to enforce. Parse each member Cargo.toml and inspect the dependency tables directly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/validate_workspace_exclusions.rs` around lines 169 - 232, The
current regex-based check in check_member_dependencies /
has_excluded_dependency_reference misses table-style and renamed dependencies;
replace the line-oriented grep with proper TOML parsing: parse each
manifest_content into a toml::Value (or use cargo_toml), then inspect standard
dependency tables ("dependencies", "dev-dependencies", "build-dependencies") and
nested "target" tables, checking both dependency keys (e.g., "foo" from
[dependencies.foo]) and any "package" renames (e.g., { package =
"tree-sitter-perl-c" }) against the excluded set returned by excluded_set();
update has_excluded_dependency_reference to return true when either the
dependency key or its package field matches an excluded crate. Ensure
check_member_dependencies still reads manifests and uses the updated predicate.
```rust
//! Worktree maintenance helpers used by local agent operations.

use crate::utils::project_root;
use color_eyre::eyre::{Result, bail, eyre};
```
🧹 Nitpick | 🔵 Trivial
Unused import: eyre.
The eyre macro is imported but not used in this module. Only bail is used for error creation.
🧹 Remove unused import
```diff
-use color_eyre::eyre::{Result, bail, eyre};
+use color_eyre::eyre::{Result, bail};
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
use color_eyre::eyre::{Result, bail};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/worktrees.rs` at line 4, The import list includes an unused
symbol `eyre` (the line importing color_eyre::eyre::{Result, bail, eyre});
remove `eyre` from that use declaration so only `Result` and `bail` are
imported, or if you intended to use the `eyre!` macro replace its usage
accordingly—update the import in the module containing the worktree helpers (the
use statement referencing color_eyre::eyre) to eliminate the unused `eyre`
symbol.
Actionable comments posted: 31
♻️ Duplicate comments (21)
scripts/devex-doctor.sh (1)
8-8: ⚠️ Potential issue | 🟠 Major

**Forward caller arguments here.**

Line 8 discards all flags passed to the script. Append `"$@"` to preserve the CLI surface.

💡 Proposed fix

```diff
-exec cargo xtask devex-doctor
+exec cargo xtask devex-doctor "$@"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/devex-doctor.sh` at line 8, The script currently calls exec cargo xtask devex-doctor without forwarding caller arguments, which drops any flags; update the final exec invocation in scripts/devex-doctor.sh (the line invoking exec cargo xtask devex-doctor) to forward all positional parameters by appending "$@" so that any flags or args passed to the script are forwarded to the cargo xtask devex-doctor command.

scripts/preflight.sh (1)
1-4: ⚠️ Potential issue | 🟠 Major

**Restore repo-root resolution for location independence.**

This shim lost the previous `REPO_ROOT`/`--manifest-path` logic and now depends on the caller's CWD. Legacy automation invoking `scripts/preflight.sh` by path from outside the checkout will break.

💡 Proposed fix

```diff
 #!/usr/bin/env bash
 set -euo pipefail
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
+
+cd "$REPO_ROOT"
 exec cargo xtask ci-hygiene preflight "$@"
```
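The proposed fix above generalizes to one canonical shim pattern; a runnable sketch (the final `exec` is replaced by a dry-run string so the sketch works anywhere, and `ci-hygiene preflight` stands in for whichever subcommand a given wrapper delegates to):

```shell
# Canonical shim sketch: strict mode, repo-root anchoring relative to the
# script location, and argument forwarding.
set -eu
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
# A real shim would do: cd "$REPO_ROOT" && exec cargo xtask ci-hygiene preflight "$@"
dry_run="would exec: (cd $REPO_ROOT && cargo xtask ci-hygiene preflight $*)"
echo "$dry_run"
```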
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/preflight.sh` around lines 1 - 4, The script currently calls "cargo xtask ci-hygiene preflight" from the caller CWD which breaks invocation by path; restore repo-root resolution by determining the script location (e.g., compute REPO_ROOT from the script's directory) and either cd to that REPO_ROOT before running or pass "--manifest-path $REPO_ROOT/Cargo.toml" to the "cargo xtask ci-hygiene preflight" invocation so the xtask runs relative to the repository root rather than the caller's CWD.

scripts/test-capped.sh (1)
1-4: ⚠️ Potential issue | 🟠 Major

**Restore repo-root resolution for location independence.**

Same regression as `preflight.sh`: the wrapper no longer resolves the repo root, breaking legacy call sites that invoke the script by path from outside the checkout.

💡 Proposed fix

```diff
 #!/usr/bin/env bash
 set -euo pipefail
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
+
+cd "$REPO_ROOT"
 exec cargo xtask ci-hygiene test-capped "$@"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test-capped.sh` around lines 1 - 4, The script test-capped.sh no longer resolves the repository root, breaking callers that run the script by path; update the script so it determines its repo-root relative to the script file (using BASH_SOURCE/dirname logic used in preflight.sh) and cd to that repo root before running exec cargo xtask ci-hygiene test-capped "$@"; ensure the resolution is location-independent and then call exec from the repo root so legacy invocation paths work.

xtask/src/tasks/populate_book.rs (1)
12-16: 🧹 Nitpick | 🔵 Trivial

**Remove unnecessary `.clone()` on `script`.**

`Command::arg()` accepts `impl AsRef<OsStr>`, so the `PathBuf` can be passed by reference.

Proposed fix

```diff
 let status = Command::new("bash")
     .current_dir(&root)
-    .arg(script.clone())
+    .arg(&script)
     .status()
     .context("failed to execute scripts/populate-book.sh")?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@xtask/src/tasks/populate_book.rs` around lines 12 - 16, Remove the unnecessary clone of the PathBuf before passing it to Command::arg: change Command::arg(script.clone()) to pass a reference (e.g., Command::arg(&script) or Command::arg(script.as_ref())) so that the PathBuf is not cloned; this affects the call that builds the process (the Command::new("bash") ... .arg(...) invocation where the local variable script is used).

scripts/test_iterative_parser.sh (1)
1-4: ⚠️ Potential issue | 🟠 Major

**Missing `set -euo pipefail` and repo-root resolution.**

This shim lacks the error-handling preamble and repo-root anchoring present in peer scripts (e.g., `cleanup-worktrees.sh`, `prepare-release.sh`). The script will fail when invoked from outside the workspace root.

Proposed fix

```diff
 #!/usr/bin/env bash
+set -euo pipefail
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
+cd "$REPO_ROOT"
 exec cargo xtask ci-hygiene test-iterative-parser "$@"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test_iterative_parser.sh` around lines 1 - 4, The script scripts/test_iterative_parser.sh is missing the standard preamble and repo-root anchoring; add the error-handling header (set -euo pipefail) if not present and change execution to resolve and cd to the repository root before running exec cargo xtask ci-hygiene test-iterative-parser "$@". Ensure you locate the existing exec line (exec cargo xtask ci-hygiene test-iterative-parser "$@") and prepend logic to compute the script directory/repo root (e.g., via dirname/realpath of "$0") and cd there, then keep the exec invocation unchanged.

scripts/build-timing-receipt.sh (1)
1-3: ⚠️ Potential issue | 🟠 Major

**Missing error handling and repo-root resolution.**

This shim lacks `set -euo pipefail` and repo-root anchoring, unlike peer scripts (`cleanup-worktrees.sh`, `prepare-release.sh`). It will fail when invoked from outside the workspace root.

Proposed fix

```diff
 #!/usr/bin/env bash
+set -euo pipefail
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
+cd "$REPO_ROOT"
 exec cargo xtask build-timing-receipt "$@"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build-timing-receipt.sh` around lines 1 - 3, The script currently just execs cargo xtask and lacks robust error handling and workspace-root anchoring; update the top of the script used by exec cargo xtask build-timing-receipt to enable strict bash safety (set -euo pipefail) and resolve the repository root before running cargo (e.g., use git rev-parse --show-toplevel or a dirname-based anchor) so the script always runs from the workspace root and fails fast on errors.

.ci/scripts/README.md (2)
22-38: ⚠️ Potential issue | 🟡 Minor

**Add Rust installation instructions.**

The prerequisites now list Rust/cargo, but the installation section only covers `gh`. First-time contributors need guidance on installing Rust (e.g., link to https://rustup.rs).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.ci/scripts/README.md around lines 22 - 38, Update the Prerequisites/Installation section of .ci/scripts/README.md to include Rust installation instructions: mention rustup as the recommended installer (link to https://rustup.rs), explain that this installs the Rust toolchain and cargo, and add simple platform-agnostic commands or a one-line curl bootstrap suggestion plus a note to restart the shell; ensure the text references that Rust/cargo are required to build/run the workspace xtask binary and mention the `cargo` and `xtask` terms so contributors can find the instructions.
62-67: ⚠️ Potential issue | 🟡 Minor

Make shim invocation path explicit.

`./measure-ci-baseline.sh` only works after `cd .ci/scripts`. Use `./.ci/scripts/measure-ci-baseline.sh` for consistency with repo-root execution, or note the required working directory.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `.ci/scripts/README.md` around lines 62-67: update the usage example to use an explicit repo-root path for the shim: replace the invocation `./measure-ci-baseline.sh` with `./.ci/scripts/measure-ci-baseline.sh` (or add a short note that the command must be run from the `.ci/scripts` directory) so readers can execute `measure-ci-baseline.sh` from the repository root.

scripts/validate-workspace-exclusions.sh (1)
1-3: ⚠️ Potential issue | 🟠 Major

Missing error handling and repo-root resolution.

This shim lacks `set -euo pipefail` and repo-root anchoring. It will fail when invoked from outside the workspace root.

Proposed fix:

```diff
 #!/usr/bin/env bash
+set -euo pipefail
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
+cd "$REPO_ROOT"
+
-cargo xtask validate-workspace-exclusions "$@"
+exec cargo xtask validate-workspace-exclusions "$@"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `scripts/validate-workspace-exclusions.sh` around lines 1-3: the script lacks robust error handling and won't work when run outside the repo root; enable safe Bash flags (`set -euo pipefail`) and resolve the repository root (e.g., `repo_root="$(git rev-parse --show-toplevel)"` with a dirname-based fallback), then `cd "$repo_root"` before invoking `cargo xtask validate-workspace-exclusions "$@"`; keep forwarding `"$@"` unchanged and ensure any git-failure path prints a clear error and exits non-zero.

scripts/ci-audit-workflows.py (1)
5-11: ⚠️ Potential issue | 🟡 Minor

Forward CLI args through the compatibility shim.

The wrapper currently drops `sys.argv[1:]`, so `--help` and any future xtask flags never reach the Rust task.

Suggested fix:

```diff
 import subprocess
+import sys
 from pathlib import Path

 if __name__ == "__main__":
     repo_root = Path(__file__).resolve().parents[1]
-    raise SystemExit(subprocess.call(["cargo", "xtask", "ci-audit-workflows"], cwd=repo_root))
+    raise SystemExit(
+        subprocess.call(
+            ["cargo", "xtask", "ci-audit-workflows", *sys.argv[1:]],
+            cwd=repo_root,
+        )
+    )
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `scripts/ci-audit-workflows.py` around lines 5-11: the wrapper's `__main__` block currently invokes `subprocess.call(["cargo", "xtask", "ci-audit-workflows"], cwd=repo_root)` and drops any CLI args; modify the call to forward `sys.argv[1:]` so flags like `--help` reach the Rust xtask: import `sys`, build the argv list as `["cargo", "xtask", "ci-audit-workflows"] + sys.argv[1:]`, and pass that list to `subprocess.call` (leave `repo_root` and the existing exit behavior unchanged).

scripts/compare-build-timing.sh (1)
1-3: ⚠️ Potential issue | 🟠 Major

Restore repo-root normalization before dispatching to xtask.

Line 3 inherits the caller's working directory. From outside the workspace, `cargo xtask` will not resolve the repo-local setup, so this compatibility shim stops working.

Suggested fix:

```diff
 #!/usr/bin/env bash
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
+cd "$REPO_ROOT"
 exec cargo xtask compare-build-timing "$@"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `scripts/compare-build-timing.sh` around lines 1-3: the script currently execs `cargo xtask compare-build-timing "$@"` from the caller's CWD, so repo-local resolution can fail; before the `exec` line, normalize to the repository root (e.g., compute the script's directory and cd to the repo root) so `cargo xtask` runs from the workspace root.

xtask/src/tasks/ci_policy.rs (2)
40-41: 🛠️ Refactor suggestion | 🟠 Major

Follow the repo's regex-initialization rule here.

These initializers currently use `?`, which violates the repo convention for graceful degradation. Please switch them to `Option<Regex>` + `.ok()` and handle the `None` case in the filter path. As per coding guidelines, "Use `Option<Regex>` with `.ok()` for graceful degradation in regex initialization".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@xtask/src/tasks/ci_policy.rs` around lines 40 - 41, Replace the fallible Regex::new(...) usage for FROM_RAW_PATTERN and ALLOWED_FROM_RAW_PATTERN with Option<Regex> so initialization becomes let disallow_re = Regex::new(FROM_RAW_PATTERN).ok() and let allowed_re = Regex::new(ALLOWED_FROM_RAW_PATTERN).ok(); then update the filtering logic (the code path that uses disallow_re and allowed_re to accept/reject items) to handle None by treating missing regex as "no-op" (i.e., skip that pattern check) so the code gracefully degrades when a regex fails to compile.
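The `Option<Regex>` + `.ok()` convention quoted above can be sketched without the `regex` crate — a hedged illustration in which `compile` stands in for `Regex::new(pattern).ok()`, and none of the names come from the actual `ci_policy.rs`:

```rust
// A failed "compile" yields None; the filter path must then degrade to a
// no-op instead of aborting the whole task via `?`.
fn compile(pattern: &str) -> Option<String> {
    if pattern.is_empty() { None } else { Some(pattern.to_string()) }
}

// A None matcher means that check is simply skipped, never a hard error.
fn line_flagged(line: &str, disallow: &Option<String>, allow: &Option<String>) -> bool {
    let hit = disallow.as_ref().is_some_and(|p| line.contains(p.as_str()));
    let excused = allow.as_ref().is_some_and(|p| line.contains(p.as_str()));
    hit && !excused
}

fn main() {
    let disallow = compile("from_raw");
    let allow = compile(""); // simulated compile failure -> graceful no-op
    println!("{}", line_flagged("calls from_raw here", &disallow, &allow));
}
```

The key property is that a broken pattern disables one filter rather than failing the command, which matches the guideline's intent.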
30-35: ⚠️ Potential issue | 🟡 Minor

Handle signal-terminated `git grep` explicitly.

If `git grep` is interrupted, `status.code()` is `None` and this logic currently falls through as if the search succeeded. Treat that as an error so an interrupted policy check cannot report a false pass.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `xtask/src/tasks/ci_policy.rs` around lines 30-35: the current check uses `if let Some(code) = output.status.code() { ... }`, which ignores the `None` case when the process was terminated by a signal; change this to explicitly handle the `None` branch and treat it as an error: match on `output.status.code()` and for `None` bail with a clear message (e.g., "git grep terminated by signal") including the stderr (`String::from_utf8_lossy(&output.stderr).trim()`), keep the existing bail for `Some(code)` when `code > 1`, and update the block around `output.status.code()`, `output.stderr`, and `bail!()` accordingly.

scripts/update-current-status.py (1)
10-14: ⚠️ Potential issue | 🟠 Major

Use exec-style delegation here too.

Same signal-propagation regression as the other Python shims: `SystemExit(subprocess.call(...))` converts child signal exits into wrapper exit codes. Switch this shim to `os.execv` after resolving `cargo` and changing into `repo_root`.
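A minimal sketch of the exec-style shim being requested — the helper names are illustrative, not the script's actual contents:

```python
import os
import shutil
import sys
from pathlib import Path


def build_argv(task: str, extra: list[str]) -> list[str]:
    """Argument vector handed to cargo; extra CLI args are forwarded."""
    return ["cargo", "xtask", task, *extra]


def delegate(repo_root: Path, task: str) -> None:
    """Replace this process with cargo so the child's exit/signal semantics survive."""
    cargo = shutil.which("cargo")
    if cargo is None:
        sys.exit("cargo not found; install Rust via https://rustup.rs")
    os.chdir(repo_root)
    # os.execv replaces the Python process, so a child killed by SIGINT etc.
    # is reported to the caller directly rather than as a wrapper exit code.
    os.execv(cargo, build_argv(task, sys.argv[1:]))


# A shim's __main__ block would then be just:
#     delegate(Path(__file__).resolve().parents[1], "update-status")
```

The design point is that nothing runs after `os.execv`, so there is no wrapper left to mistranslate the child's termination status.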
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `scripts/update-current-status.py` around lines 10-14: the shim's `__main__` block currently uses `subprocess.call` (wrapped in `SystemExit`), which loses signal semantics; change it to exec-style delegation: resolve the path to the cargo binary, chdir to `repo_root`, and replace the current process with `os.execv` (or `os.execve`) running `["cargo", "xtask", "update-status", *sys.argv[1:]]` so signals from the child propagate correctly; update the `__main__` code to use the already-computed `repo_root` and the resolved cargo path instead of `subprocess.call`/`SystemExit`.

xtask/src/tasks/publish.rs (1)
210-227: 🧹 Nitpick | 🔵 Trivial

Add context around both release subprocesses.

Lines 223 and 241 currently surface bare spawn errors, and the non-zero branches omit the version/ref/script details that would make release failures diagnosable in CI logs. Wrap both `status()` calls with `Context` and include the relevant `version`, `ref_name`, or script path in the follow-up `bail!`.

Also applies to: 233-245
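The shape of that suggestion, sketched with plain `Result<_, String>` rather than the crate's `eyre`/`anyhow` types — the command and message wording are illustrative, not the actual `publish.rs` code:

```rust
use std::process::Command;

// Run one release step; failures name the version/ref so CI logs are diagnosable.
fn dispatch_release(version: &str, ref_name: &str) -> Result<(), String> {
    let status = Command::new("true") // stand-in for the real `gh workflow run ...`
        .status()
        // Spawn/IO errors get context about *which* step and version failed.
        .map_err(|e| format!("dispatching 'Publish to crates.io' (version={version}): {e}"))?;
    if !status.success() {
        // Non-zero exits name the version and ref instead of a bare failure.
        return Err(format!(
            "publish-release workflow dispatch failed for version={version} ref={ref_name}"
        ));
    }
    Ok(())
}

fn main() {
    println!("{:?}", dispatch_release("1.2.3", "v1.2.3"));
}
```

With `Context`/`bail!` the structure is the same: the error path carries the identifiers, not just the subprocess's exit status.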
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `xtask/src/tasks/publish.rs` around lines 210-227: wrap both subprocess `.status()` calls with `anyhow::Context` so spawn/IO errors include context, and make the non-success branches include the relevant identifiers; for the GitHub dispatch (the `Command::new("gh")` call whose args include "Publish to crates.io", `ref_name`, `version`, and `dry_run`), call `.status().context("dispatching GitHub workflow 'Publish to crates.io'")?` and change the `bail!` to include `version` and `ref_name` (e.g., `bail!("publish-release workflow dispatch failed for version={} ref={}", version, ref_name)`); do the same for the other subprocess `status()` (lines 233-245) by adding a `Context` message naming the script/command and changing its `bail!` to include the script path or command and version, so CI logs contain the exact command and version that failed.

xtask/src/tasks/hardening.rs (1)
9-12: ⚠️ Potential issue | 🟠 Major

Run these scripts from `project_root()`.

Line 28 inherits the caller's cwd even though each entrypoint already computes `root`. Running `cargo xtask ...` from a workspace subdirectory will execute the script from that subdir, so repo-relative paths inside the shell scripts can resolve incorrectly.

🔧 Suggested fix:

```diff
 pub fn security_hardening() -> Result<()> {
     let root = project_root()?;
     let script = root.join("scripts").join("security-hardening.sh");
-    run_script(&script, &[])
+    run_script(&root, &script, &[])
 }

 pub fn performance_hardening() -> Result<()> {
     let root = project_root()?;
     let script = root.join("scripts").join("performance-hardening.sh");
-    run_script(&script, &[])
+    run_script(&root, &script, &[])
 }

 pub fn production_gates_validation() -> Result<()> {
     let root = project_root()?;
     let script = root.join("scripts").join("production-gates-validation.sh");
-    run_script(&script, &[])
+    run_script(&root, &script, &[])
 }

-fn run_script(script: &Path, args: &[&str]) -> Result<()> {
+fn run_script(root: &Path, script: &Path, args: &[&str]) -> Result<()> {
     let status = Command::new("bash")
+        .current_dir(root)
         .arg(script)
         .args(args)
         .status()
```

Also applies to: 15-18, 21-24, 27-32
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `xtask/src/tasks/hardening.rs` around lines 9-12: the scripts are executed with the caller's cwd; update execution to use `project_root()` as the working directory: in `security_hardening()` (and the analogous functions that call `run_script(&script, &[])`), pass the root path through or apply it as the command's `current_dir`; either change `run_script` to accept a cwd parameter (e.g., `run_script(&root, &script, &[])`) or ensure `run_script` uses `.current_dir(root)` when spawning the process, so the shell scripts always run from `project_root()` instead of the caller's cwd.

xtask/src/tasks/ci_audit_workflows.rs (1)
124-149: 🛠️ Refactor suggestion | 🟠 Major

Convert these tests to the repo's `Result<()>` pattern.

Lines 135, 141, and 148 still use `.expect()`. Please return `Result<()>` from these tests and use `?` so parse failures stay within the preferred test style. As per coding guidelines, "In tests, prefer `Result<()>` and helpers such as `perl_tdd_support::must` and `must_some` instead of panicking assertions".

♻️ Suggested fix:

```diff
     #[test]
-    fn pr_trigger_matches_map_form() {
+    fn pr_trigger_matches_map_form() -> Result<()> {
         let workflow = serde_yaml_ng::from_str(
             r#"on:
   pull_request:
     types: [opened, synchronize]
 jobs:
   check:
     if: github.event_name == 'pull_request'
 "#,
-        )
-        .expect("valid yaml");
+        )?;
         assert!(has_pr_trigger(&workflow));
+        Ok(())
     }

     #[test]
-    fn pr_trigger_matches_list_form() {
-        let workflow = serde_yaml_ng::from_str(r#"on: [push, pull_request]"#).expect("valid yaml");
+    fn pr_trigger_matches_list_form() -> Result<()> {
+        let workflow = serde_yaml_ng::from_str(r#"on: [push, pull_request]"#)?;
         assert!(has_pr_trigger(&workflow));
+        Ok(())
     }

     #[test]
-    fn pr_trigger_rejects_non_pr_trigger() {
-        let workflow =
-            serde_yaml_ng::from_str(r#"on: [push, workflow_dispatch]"#).expect("valid yaml");
+    fn pr_trigger_rejects_non_pr_trigger() -> Result<()> {
+        let workflow = serde_yaml_ng::from_str(r#"on: [push, workflow_dispatch]"#)?;
         assert!(!has_pr_trigger(&workflow));
+        Ok(())
     }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `xtask/src/tasks/ci_audit_workflows.rs` around lines 124-149: change the three test functions (`pr_trigger_matches_map_form`, `pr_trigger_matches_list_form`, `pr_trigger_rejects_non_pr_trigger`) to return `Result<()>` and replace the `.expect(...)` calls on `serde_yaml_ng::from_str` with the `?` operator so parse errors propagate instead of panicking; keep the `assert!` checks against `has_pr_trigger(&workflow)` and finish each test with `Ok(())` to satisfy the `Result<()>` signature.

xtask/src/tasks/check_toolchain.rs (1)
37-39: ⚠️ Potential issue | 🟠 Major

Don't let `check-toolchain` succeed on an unreadable pin.

Returning `Ok(())` here means `cargo xtask check-toolchain` passes for `stable`, typos, or any other unparsable `toolchain.channel`, so CI won't catch a broken `rust-toolchain.toml`. Only `doctor` should downgrade this case to a warning.

🛠️ Suggested fix:

```diff
     if required_parts.is_empty() {
-        println!("⚠️ Could not parse pinned toolchain from rust-toolchain.toml");
-        return Ok(());
+        if doctor {
+            println!("⚠️ Could not parse pinned toolchain from rust-toolchain.toml");
+            return Ok(());
+        }
+        bail!("Could not parse pinned toolchain from rust-toolchain.toml");
     }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `xtask/src/tasks/check_toolchain.rs` around lines 37-39: the check that returns `Ok(())` when `required_parts.is_empty()` allows unreadable or mistyped `toolchain.channel` values to pass; change it to return an error (e.g., `bail!` or `Err` with a clear message) so `cargo xtask check-toolchain` fails, including context about the unreadable or unparsable toolchain pin/channel so CI will catch it; keep `doctor` as the only place that downgrades this to a warning.

xtask/src/tasks/validate_workspace_exclusions.rs (1)
177-239: ⚠️ Potential issue | 🟠 Major

Parse member manifests instead of grepping dependency lines.

The regex only catches inline `name = ...` assignments. It misses valid Cargo forms such as `[dependencies.foo]`, `[target.'cfg(...)'.dependencies.foo]`, and renamed deps via `{ package = "tree-sitter-perl-c" }`, so this validator can pass while a member still depends on an excluded crate. Parse each manifest as TOML and inspect the dependency tables directly.
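The header forms the inline regex never sees can be shown with a std-only sketch — `dep_from_header` is hypothetical; a real fix would walk tables parsed by the `toml` crate, and renamed deps additionally require inspecting the `package` field of the value:

```rust
// Extract the effective dependency name from a Cargo.toml table header,
// covering [dependencies.foo], [dev-dependencies.foo], and
// [target.'cfg(..)'.dependencies.foo] -- all invisible to a `name = ...` regex.
fn dep_from_header(line: &str) -> Option<&str> {
    let inner = line.trim().strip_prefix('[')?.strip_suffix(']')?;
    // The dependency name is the last dotted segment; the prefix must be
    // some *dependencies table for this to count as a dependency entry.
    let (table, name) = inner.rsplit_once('.')?;
    if table.ends_with("dependencies") { Some(name) } else { None }
}

fn main() {
    assert_eq!(
        dep_from_header("[dependencies.tree-sitter-perl-c]"),
        Some("tree-sitter-perl-c")
    );
    assert_eq!(dep_from_header("[target.'cfg(unix)'.dependencies.foo]"), Some("foo"));
    assert_eq!(dep_from_header("[profile.release]"), None);
    println!("all header forms recognized");
}
```

A structured TOML walk subsumes this string handling for free, which is why the finding recommends parsing over grepping.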
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `xtask/src/tasks/validate_workspace_exclusions.rs` around lines 177-239: the current `check_member_dependencies` uses a regex in `has_excluded_dependency_reference` which misses valid Cargo.toml dependency forms (table headers like `[dependencies.foo]`, target-specific tables, and renamed deps via `package = "..."`); replace the grep approach by parsing each manifest with a TOML parser (e.g., `toml::Value` or `cargo_toml`) and walking the dependency tables (`dependencies`, `dev-dependencies`, `build-dependencies`, and all `target.*.dependencies`) for each member crate: for each dependency entry (whether a string, inline table, or table under a named header), determine the effective package name by checking the key and, if the value is a table, the `package` field, then compare that effective name against the same excluded `HashSet` and record offenders; remove `has_excluded_dependency_reference` or refactor it to accept parsed `toml::Value` and perform the same structured checks.

xtask/src/tasks/hook_checks.rs (1)
114-114: ⚠️ Potential issue | 🟡 Minor

Gracefully degrade the timestamp regex.

The regex is compiled with `?`, which will abort the entire hook test command if compilation fails. Per coding guidelines, use `Option<Regex>` with `.ok()` and skip the timestamp assertion when the regex is `None`.

As per coding guidelines, "Use `Option<Regex>` with `.ok()` for graceful degradation in regex initialization."

🛡️ Proposed fix:

```diff
-    let ts_re = Regex::new(r#""ts":"[0-9]{4}-"#)?;
+    let ts_re = Regex::new(r#""ts":"[0-9]{4}-"#).ok();
```

Then update the `assert_regex` call at line 162:

```diff
-    assert_regex(&output, &ts_re, "subagent-stop includes ts timestamp", &mut pass, &mut fail);
+    if let Some(ref re) = ts_re {
+        assert_regex(&output, re, "subagent-stop includes ts timestamp", &mut pass, &mut fail);
+    }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `xtask/src/tasks/hook_checks.rs` at line 114: change the strict `Regex::new(...)?` call for `ts_re` to `Regex::new(...).ok()` so a compilation failure doesn't abort the hook; make the `assert_regex` invocation conditional — only call it (via `ts_re.as_ref()`) when the Option is `Some`, or skip the timestamp assertion — ensuring graceful degradation per guidelines.

docs/project/XTASK_MIGRATION.md (1)
223-229: ⚠️ Potential issue | 🟡 Minor

Summary counts are inconsistent with the listed items.

The summary table shows "Not Started: 2" but only `lib_gh.sh` (Line 198) is listed as "Not Started" in the migration tables. Please recalculate and update the totals to match the actual script counts, or replace them with non-numeric wording.

As per coding guidelines, "docs/**/*.md: Do not hardcode workspace counts, release numbers, or performance metrics in new documentation unless you regenerated or verified them in the same change."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/project/XTASK_MIGRATION.md` around lines 223 - 229, The summary table in XTASK_MIGRATION.md has inconsistent hardcoded counts (e.g., "Not Started: 2" while only lib_gh.sh is listed as Not Started); update the table to either recompute and correct the numeric totals so they match the enumerated items (adjust "Replaced", "Partially replaced", "Not Started", "Keep", and "Total scripts") or replace numeric counts with non-numeric wording (e.g., "see list below") to comply with the docs rule; ensure you reference the "Not Started" row and the listed script lib_gh.sh in your update and remove any other hardcoded workspace/release/performance numbers not regenerated in this change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/handoff/swarm-pack/commands/green-merge.md`:
- Around line 26-32: The snippet uses an unsafe placeholder `pr=<number>` which
is interpreted by the shell as input redirection; replace it with a shell-safe
placeholder or example value and reference the variable in the merge command
(e.g., set a quoted variable like PR="<PR_NUMBER>" or PR=123 and then call `gh
pr merge "$PR" --squash ...`) so the `gh pr merge` invocation uses a valid
variable (`PR` or similar) instead of `pr=<number>`.
In `@justfile`:
- Around line 1591-1602: Capture the date once into a shell variable before
using it in the loop to avoid inconsistent expansion across iterations: in the
ci-gate-with-receipts recipe set DATE="$(date +%Y%m%d)" and then use "$${DATE}"
for mkdir and in the cargo xtask receipt-path (and similarly update the
gate-execute gate_id recipe to use "$${DATE}" instead of calling date inline).
Reference the ci-gate-with-receipts and gate-execute (gate_id) recipes and the
loop variable gate when making this change.
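In plain shell (inside the justfile every `$` below would be escaped as `$$`), the capture-once pattern the prompt describes looks like this — the gate names are illustrative:

```shell
# Capture the stamp once; every iteration then agrees on the same directory,
# even if the loop happens to straddle midnight.
DATE="$(date +%Y%m%d)"
for gate in fmt clippy test; do
  dir="artifacts/receipts/${DATE}/${gate}"
  echo "${dir}"
done
```

Calling `date` inline inside the loop instead would let `mkdir` and the receipt path disagree across iterations, which is exactly the inconsistency being flagged.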
In `@scripts/check_features_invariants.py`:
- Around line 9-13: The current shim uses raise SystemExit(subprocess.call(...))
which spawns an extra process and mishandles signals; replace this pattern by
changing the working directory to repo_root (use os.chdir(repo_root)) and then
exec the cargo binary with os.execvp so the shim is replaced by cargo (call
os.execvp("cargo", ["cargo", "xtask", "features", "invariants"])). Update the
block that defines repo_root and the final raise SystemExit(...) to perform
os.chdir(repo_root) followed by os.execvp, removing subprocess.call and
SystemExit usage.
In `@scripts/cleanup-worktrees.sh`:
- Around line 1-8: The script correctly resolves SCRIPT_DIR and REPO_ROOT and
cds into the repo, but it doesn't forward CLI args; update the exec invocation
(the final line invoking cargo xtask worktree-cleanup) to append "$@" so any
arguments passed to the shim are forwarded to cargo xtask; keep SCRIPT_DIR and
REPO_ROOT resolution intact.
In `@scripts/dead-code-check.sh`:
- Around line 11-13: The substring check using [[ "$*" != *"--strict"* ]] can
misfire if any argument contains "strict" as part of it; instead iterate over
the positional parameters to check for the exact flag and only append --strict
if none match. Update the block that uses DEAD_CODE_STRICT and the check for
"--strict" (references: DEAD_CODE_STRICT, --strict, "$*" and "$@") to loop
through "$@" and compare each arg exactly (e.g., eq to "--strict") before
calling set -- "$@" --strict.
In `@scripts/debt-pr-summary.py`:
- Around line 11-21: The function _print_summary_from_json must validate that
the decoded payload is a mapping before accessing nested fields; currently
json.loads can return list/null/number and then payload.get(...) will raise
AttributeError. Fix by checking isinstance(payload, dict) after json.loads and
return nonzero with an error message if not a dict, and also ensure
summary/quarantined_tests/known_issues/technical_debt are treated as dicts
(e.g., use isinstance(summary, dict) or replace with {} when not a mapping)
before calling .get on them so _print_summary_from_json handles non-object JSON
inputs gracefully.
- Around line 37-43: The script currently unconditionally calls sys.stdin.read()
(assigning to raw) which blocks on a TTY; change the control flow so you only
read from stdin when data is actually being piped in (e.g., check
sys.stdin.isatty() and only call sys.stdin.read() when isatty() is False or use
a non-blocking check), then call _print_summary_from_json(raw) only if
non-empty; otherwise fall back to running subprocess.call(["cargo", "xtask",
"debt-report", "--summary"], cwd=repo_root) using the existing repo_root
variable. Ensure you keep the same exit behavior (raise SystemExit with the
return of either _print_summary_from_json or subprocess.call).
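The payload validation the first bullet above describes can be sketched as follows — `summarize` is a hypothetical name, not the script's actual `_print_summary_from_json`:

```python
import json


def summarize(raw: str) -> int:
    """Print a debt summary; return nonzero unless the payload is a JSON object."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError as err:
        print(f"invalid JSON: {err}")
        return 1
    # json.loads can legally return a list, number, string, bool, or None;
    # only a dict supports the nested .get() lookups below.
    if not isinstance(payload, dict):
        print(f"expected a JSON object, got {type(payload).__name__}")
        return 1
    summary = payload.get("summary")
    if not isinstance(summary, dict):
        summary = {}
    print(f"quarantined tests: {summary.get('quarantined_tests', 0)}")
    return 0
```

Guarding each nested level the same way keeps the shim from raising `AttributeError` on structurally valid but unexpected JSON.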
In `@scripts/e2e-gate.sh`:
- Line 4: The exec line "exec cargo xtask ci-hygiene e2e-gate \"$@\"" is
unanchored and can pick up the wrong Cargo workspace; fix by anchoring the
workspace before exec — either determine the repository root (e.g.
repo_root="$(git rev-parse --show-toplevel)" and cd "$repo_root" (or the crate
dir) prior to calling exec, or invoke cargo with an explicit manifest path (e.g.
"cargo --manifest-path /path/to/Cargo.toml xtask ci-hygiene e2e-gate \"$@\"") so
the "cargo xtask ci-hygiene e2e-gate" invocation always runs against the
intended workspace.
In `@scripts/forbid-fatal-constructs.sh`:
- Around line 2-7: The script is CWD-dependent and should resolve the repository
root like the other wrappers; update scripts/forbid-fatal-constructs.sh to
compute the repo root (using the script's directory as reference) and change to
that directory before running cargo, keeping the existing safety flags (set -euo
pipefail) and the exec invocation (exec cargo xtask forbid-fatal-constructs --
"$@") so the command runs from the repo root rather than the current working
directory.
In `@scripts/ignored-test-count.sh`:
- Around line 1-4: The script is CWD-dependent; make it location-independent by
resolving the repo root before invoking cargo: compute the script directory
(e.g., via dirname and readlink -f on "$0"), derive the repository root
(matching other shims' logic), cd to that repo root, then exec the existing
command (the current exec of `cargo xtask ci-hygiene ignored-test-count "$@"`)
so cargo runs with the workspace visible; update the top of the script around
the current exec to perform this repo-root resolution and change of directory.
In `@scripts/install-githooks.sh`:
- Around line 1-4: The script is CWD-dependent; make it location-independent by
resolving the repository root (e.g., via git rev-parse --show-toplevel or
deriving it from the script's directory) and switching to that root before
calling exec cargo xtask ci-hygiene install-githooks "$@". Update the top of
scripts/install-githooks.sh to compute REPO_ROOT and either cd "$REPO_ROOT" or
invoke cargo with its working-dir set so exec cargo xtask ci-hygiene
install-githooks "$@" runs from the repo root regardless of current directory.
In `@scripts/publish-receipts.sh`:
- Around line 7-10: The compatibility shim currently only forwards the first
argument ('$1') to cargo xtask in the publish-receipts branch, dropping any
additional args; update the script so all arguments are forwarded (use
"$@")—e.g., replace the conditional/exec that uses "$1" with an exec call to
cargo xtask publish-receipts "$@" (or simply always exec cargo xtask
publish-receipts "$@") to transparently pass through every argument to the
canonical entrypoint.
In `@scripts/run_parser_comparison.sh`:
- Line 4: The shim currently uses an unanchored workspace discovery by simply
exec'ing "cargo xtask ci-hygiene run-parser-comparison \"$@\"", which can
mis-resolve workspace in edge cases; update run_parser_comparison.sh to follow
the anchored pattern used by check-version-sync.sh: resolve the
repository/script directory (e.g., via dirname "$0" and realpath) and invoke the
cargo xtask command from the repo root or supply an explicit manifest path so
workspace discovery is deterministic (keep the command "cargo xtask ci-hygiene
run-parser-comparison" but call it from the resolved repo root or with
--manifest-path).
In `@scripts/test-e2e-capped.sh`:
- Line 4: Add workspace anchoring to this shim to match check-version-sync.sh:
define SCRIPT_DIR and REPO_ROOT (as used in check-version-sync.sh) and run cd
"$REPO_ROOT" before calling exec cargo xtask ci-hygiene test-e2e-capped "$@";
update the script so the entry point (the exec cargo xtask ... invocation)
occurs after the cd to ensure the xtask runs from the repository root.
In `@scripts/update-parser-matrix.py`:
- Around line 10-14: Replace the subprocess wrapper in the __main__ block with
exec-style delegation: resolve the cargo executable (e.g., via shutil.which or
similar) into a variable, change directory to repo_root (the Path resolved in
the block), build the argv list ["cargo", "xtask", "parser-matrix",
*sys.argv[1:]] and call os.execv(cargo_path, argv) so the Python process is
replaced by cargo and signals propagate correctly; remove the raise
SystemExit(subprocess.call(...)) pattern and ensure you raise a clear exception
if cargo cannot be found.
In `@scripts/verify_stacker.sh`:
- Around line 1-4: The script currently runs `cargo xtask ci-hygiene
verify-stacker` relative to the current working directory which breaks when
invoked from outside the repo; modify the top of the `verify_stacker.sh` shim to
resolve the repository root (e.g. via `git rev-parse --show-toplevel` or
locating the script directory with `dirname "$0"` and moving up to the repo
root) and cd into that root before running `exec cargo xtask ci-hygiene
verify-stacker "$@"` so workspace resolution is location-independent.
In `@xtask/src/tasks/build_timing.rs`:
- Around line 78-80: The current logic unconditionally replaces the
user-provided output path when baseline is true (the block that sets output_path
= artifacts.join(TIMING_BASELINE_FILE)); modify this to detect when the user
explicitly provided an output (check the same variable used to hold the parsed
--output, e.g., output or output_opt) and emit a clear warning (via the project
logging facility or eprintln!) that --baseline overrides --output, then keep the
existing behavior (set output_path to artifacts.join(TIMING_BASELINE_FILE)).
Locate the baseline handling and output_path assignment (the block using
baseline and TIMING_BASELINE_FILE) and add the conditional warning before
overwriting output_path. Ensure the message references both --baseline and
--output so users understand the override.
- Around line 446-465: The function run_command_silently currently indexes
command[0] which will panic if the slice is empty; add a defensive check at the
top of run_command_silently to handle an empty command slice (e.g., if
command.is_empty() or command.get(0).is_none()), log or print a warning that no
command was provided, and return a sensible duration (for example 0.0) without
attempting to build or spawn Command; replace direct indexing with command[0]
usages with safe access if needed.
- Around line 414-423: The current touch_file function uses write_all(&[]) which
may not update mtime reliably across filesystems; replace the empty write with a
direct modification time update by calling file.set_modified(SystemTime::now())
on the OpenOptions::new().create(true).append(true).open(path)? result (or use a
mutable File variable if needed), and propagate errors as before; ensure to
import std::time::SystemTime and use the touch_file function and
OpenOptions/File::set_modified symbols when making the change.
In `@xtask/src/tasks/ci_hygiene.rs`:
- Around line 16-32: The local-binary execution branch bypasses Cargo rebuilds
and doesn't set the working directory, causing stale binaries and inconsistent
behavior; update the logic in the block that uses local_binary_path(&root) so
that either it always invokes via cargo (use Command::new("cargo") with args
["run","--quiet","-p", CI_HYGIENE_PACKAGE, "--", &command] to ensure freshness)
or, if keeping the direct binary path, call .current_dir(&root) on the
Command::new(local_binary) invocation and explicitly rebuild before execution
(e.g., run cargo build or cargo run) so the local binary is up-to-date; adjust
the code around the local_binary branch (references: local_binary_path,
CI_HYGIENE_PACKAGE, cargo_command) to make behavior consistent with the cargo
fallback.
In `@xtask/src/tasks/devex_doctor.rs`:
- Around line 11-14: The function run() currently uses env::current_dir() to set
root which can differ when invoked from a subdirectory; replace that call with
crate::utils::project_root() (or call project_root() from the utils module) to
resolve the repository root consistently, then convert its PathBuf result to a
string (e.g., use to_string_lossy() or to_str().ok_or(...) as appropriate) and
preserve the existing error context handling; update any variable name
collisions (root) accordingly.
In `@xtask/src/tasks/forbid_fatal_constructs.rs`:
- Around line 25-38: The local-binary execution path (using
local_binary_path(&root) and Command::new(local_binary) in the
forbid_fatal_constructs task) doesn't set the working directory while the cargo
fallback sets current_dir(root); update the local-binary branch to call
.current_dir(root) on the Command so both branches run with the repository root
as CWD, keeping the rest of the args and status/.context handling unchanged.
In `@xtask/src/tasks/hook_checks.rs`:
- Around line 259-262: The current best-effort cleanup using
fs::remove_dir_all(&temp_root) runs after the bail! path so temp_root can be
left behind on error; either move the cleanup call to run before the
early-return/bail path or add a clear comment explaining intentional retention
for debugging. Locate the temporary directory variable temp_root and the cleanup
invocation fs::remove_dir_all(&temp_root) and ensure it is executed in a
finally-style path (or just before the bail!/Err return) so temporary
directories are removed on failures, or annotate why they must be preserved for
debugging next to the bail! call.
In `@xtask/src/tasks/inject_sha_assets.rs`:
- Around line 44-65: Construct the AssetShaMap once and reuse it instead of
building it twice: create a local variable (e.g., asset_shas) of type
AssetShaMap initialized with the existing fields (mac_arm: &mac_sha_arm,
mac_x64: &mac_sha_x64, lin_arm: &lin_sha_arm, lin_x64: &lin_sha_x64, win_x64:
&win_sha_x64, win_arm: &win_sha_arm) and pass &asset_shas to both
build_brew_formula(...) and build_asset_map(...)? to eliminate duplication and
ensure a single source of truth for the sha map.
In `@xtask/src/tasks/parser_matrix.rs`:
- Line 6: The import list in parser_matrix.rs includes the unused symbol
ContextCompat; remove ContextCompat from the use declaration (the line importing
color_eyre::eyre::{Context, ContextCompat, Result, bail, eyre}) so it becomes
only the actually used symbols (Context, Result, bail, eyre) to eliminate the
unused-import warning.
- Around line 108-116: The function format_location creates an unnecessary
intermediate variable `location` inside the closure returned by
`file.line_number.map(...)`; remove that binding and return the formatted string
directly from the closure by matching on `file.column` and producing either
`format!("line {line_num}:{column}")` or `format!("line {line_num}")` so the
closure yields the string without the extra `location` variable.
In `@xtask/src/tasks/prep_crates_io_launch.rs`:
- Around line 103-110: The generated --config value needs the path quoted for
Cargo's KEY=VALUE parser: update the args.push call that builds the config
string (the format! invoking package.name and rel_dir.display()) to wrap the
path portion in TOML-style quotes (e.g. ...path="...") so the path is passed as
a quoted string; keep using package.name and rel_dir.display() but ensure the
path is enclosed in quotes in the formatted value.
In `@xtask/src/tasks/publish_receipts.rs`:
- Around line 81-86: The call that sets host currently uses
command_output_or_unknown(&root, &["uname", "-a"], "UNVERIFIED") which will fail
on Windows; change the host assignment in publish_receipts.rs to prefer the
uname output when available but fall back to a cross-platform value using
std::env::consts::OS and std::env::consts::ARCH (or combine them) when uname
fails or returns "UNVERIFIED", i.e., update the host expression that sets the
host variable so it attempts command_output_or_unknown(...) and if that equals
"UNVERIFIED" uses format!("{} {}", std::env::consts::OS, std::env::consts::ARCH)
instead (or document Unix-only behavior).
In `@xtask/src/tasks/validate_workspace_exclusions.rs`:
- Around line 103-127: The filter in the workspace_dependencies iterator only
checks the dependency key against excluded_set() and misses aliases where value
is a table with package = "excluded-name"; update the closure used to build
offending (in the block using workspace_dependencies, excluded_set(), and
offending) to also inspect value.as_table().and_then(|t|
t.get("package")).and_then(Value::as_str) and treat the dependency as offending
if either the key name or that nested package string is in excluded_set();
preserve the existing special-case for "tree-sitter-perl" that allows the
specific path "crates/tree-sitter-perl-rs" by checking the package string/path
before returning Some(name.as_str()).
- Around line 307-377: Convert the panic-based tests to the repository test
style by changing their signatures to return Result<()> and using the
perl_tdd_support helpers instead of assert!/assert_eq!: for
missing_required_excluded_paths_ignores_optional_entries and
missing_required_excluded_paths_flags_missing_required_path change fn ...() to
fn ...() -> Result<()> and replace assert!(missing.is_empty()) /
assert_eq!(missing, vec![...]) with the appropriate must()/must_eq()/must_some
helpers (or wrap the result in perl_tdd_support::must to assert non-panics and
compare values via must/equals), and for
excluded_workspace_member_names_flags_members_by_manifest_directory change it to
return Result<()> and use must to assert offending equals the expected vec;
finish each test with Ok(()) so they follow the Result<()> test pattern.
In `@xtask/src/tasks/worktrees.rs`:
- Around line 18-33: The current parsing of git worktree output uses
split_whitespace().next() on the human-readable output (see Command::new("git")
producing list_output and the subsequent list -> stale collection) which
truncates paths containing spaces and can delete the wrong directory; change the
invocation to use machine-readable output (e.g., git worktree list --porcelain
or git worktree list -z) and parse "worktree <path>" records (or null-delimited
entries) to extract the full path strings safely, replacing the
split_whitespace().next() logic used when building stale and the similar parsing
at the deletion site so deletions operate on the complete, untruncated paths
(use the "worktree" token to identify path records).
---
Duplicate comments:
In @.ci/scripts/README.md:
- Around line 22-38: Update the Prerequisites/Installation section of
.ci/scripts/README.md to include Rust installation instructions: mention rustup
as the recommended installer (link to https://rustup.rs), explain that this
installs the Rust toolchain and cargo, and add simple platform-agnostic commands
or a one-line curl bootstrap suggestion plus a note to restart the shell; ensure
the text references that Rust/cargo are required to build/run the workspace
xtask binary and mention the `cargo` and `xtask` terms so contributors can find
the instructions.
- Around line 62-67: Update the Usage example to use an explicit repo-root path
for the shim: replace the invocation `./measure-ci-baseline.sh` with
`./.ci/scripts/measure-ci-baseline.sh` (or add a short note that you must run
the command from the `.ci/scripts` directory) so readers can execute
`measure-ci-baseline.sh` from the repository root; reference the script name
`measure-ci-baseline.sh` in the README update.
In `@docs/project/XTASK_MIGRATION.md`:
- Around line 223-229: The summary table in XTASK_MIGRATION.md has inconsistent
hardcoded counts (e.g., "Not Started: 2" while only lib_gh.sh is listed as Not
Started); update the table to either recompute and correct the numeric totals so
they match the enumerated items (adjust "Replaced", "Partially replaced", "Not
Started", "Keep", and "Total scripts") or replace numeric counts with
non-numeric wording (e.g., "see list below") to comply with the docs rule;
ensure you reference the "Not Started" row and the listed script lib_gh.sh in
your update and remove any other hardcoded workspace/release/performance numbers
not regenerated in this change.
In `@scripts/build-timing-receipt.sh`:
- Around line 1-3: The script currently just execs cargo xtask and lacks robust
error handling and workspace-root anchoring; update the top of the script used
by exec cargo xtask build-timing-receipt to enable strict bash safety (set -euo
pipefail) and resolve the repository root before running cargo (e.g., use git
rev-parse --show-toplevel or a dirname-based anchor) so the script always runs
from the workspace root and fails fast on errors.
In `@scripts/ci-audit-workflows.py`:
- Around line 5-11: The wrapper script in the __main__ block currently invokes
subprocess.call(["cargo", "xtask", "ci-audit-workflows"], cwd=repo_root) and
drops any CLI args; modify the call to forward sys.argv[1:] so flags like --help
reach the Rust xtask: import sys, build the argv list starting with ["cargo",
"xtask", "ci-audit-workflows"] + sys.argv[1:], and pass that list to
subprocess.call (leave repo_root and the existing exit behavior unchanged).
In `@scripts/compare-build-timing.sh`:
- Around line 1-3: The script currently execs `cargo xtask compare-build-timing
"$@"` from the caller's CWD so repo-local resolution can fail; before the `exec
cargo xtask compare-build-timing "$@"` line, change the script to normalize to
the repository root (e.g., compute the script's directory and cd to the repo
root) so `cargo xtask` runs from the workspace root; update the file
`scripts/compare-build-timing.sh` so that it performs this repo-root cd step
prior to calling `exec cargo xtask compare-build-timing "$@"`.
In `@scripts/devex-doctor.sh`:
- Line 8: The script currently calls exec cargo xtask devex-doctor without
forwarding caller arguments, which drops any flags; update the final exec
invocation in scripts/devex-doctor.sh (the line invoking exec cargo xtask
devex-doctor) to forward all positional parameters by appending "$@" so that any
flags or args passed to the script are forwarded to the cargo xtask devex-doctor
command.
In `@scripts/preflight.sh`:
- Around line 1-4: The script currently calls "cargo xtask ci-hygiene preflight"
from the caller CWD which breaks invocation by path; restore repo-root
resolution by determining the script location (e.g., compute REPO_ROOT from the
script's directory) and either cd to that REPO_ROOT before running or pass
"--manifest-path $REPO_ROOT/Cargo.toml" to the "cargo xtask ci-hygiene
preflight" invocation so the xtask runs relative to the repository root rather
than the caller's CWD.
In `@scripts/test_iterative_parser.sh`:
- Around line 1-4: The script scripts/test_iterative_parser.sh is missing the
standard preamble and repo-root anchoring; add the error-handling header (set
-euo pipefail) if not present and change execution to resolve and cd to the
repository root before running exec cargo xtask ci-hygiene test-iterative-parser
"$@". Ensure you locate the existing exec line (exec cargo xtask ci-hygiene
test-iterative-parser "$@") and prepend logic to compute the script
directory/repo root (e.g., via dirname/realpath of "$0") and cd there, then keep
the exec invocation unchanged.
In `@scripts/test-capped.sh`:
- Around line 1-4: The script test-capped.sh no longer resolves the repository
root, breaking callers that run the script by path; update the script so it
determines its repo-root relative to the script file (using BASH_SOURCE/dirname
logic used in preflight.sh) and cd to that repo root before running exec cargo
xtask ci-hygiene test-capped "$@"; ensure the resolution is location-independent
and then call exec from the repo root so legacy invocation paths work.
In `@scripts/update-current-status.py`:
- Around line 10-14: The shim in the __main__ block currently uses
subprocess.call (wrapping it in SystemExit), which loses signal semantics;
change it to exec-style delegation: resolve the path to the cargo binary, chdir
to repo_root, and replace the current process with os.execv or os.execve to run
["cargo", "xtask", "update-status", *sys.argv[1:]] so signals from the child
propagate correctly; update the __main__ code to use repo_root (already
computed) and os.execv with the resolved cargo path and argv list instead of
subprocess.call/SystemExit.
In `@scripts/validate-workspace-exclusions.sh`:
- Around line 1-3: The script lacks robust error handling and won't work when
run outside the repo root; modify the top of the script to enable safe Bash
flags (set -euo pipefail) and resolve the repository root (e.g.,
repo_root="$(git rev-parse --show-toplevel 2>/dev/null || cd "$(dirname
"${BASH_SOURCE[0]}")/.." && pwd)") then cd "$repo_root" before invoking cargo
xtask validate-workspace-exclusions "$@"; keep forwarding "$@" unchanged and
ensure any git-failure path prints a clear error and exits non-zero.
In `@xtask/src/tasks/check_toolchain.rs`:
- Around line 37-39: The current check in xtask/src/tasks/check_toolchain.rs
that returns Ok(()) when required_parts.is_empty() allows unreadable or mistyped
toolchain.channel values to pass; change this to return an error so cargo xtask
check-toolchain fails (e.g., use anyhow::bail or return Err with a clear
message) instead of Ok(()). Locate the check around required_parts.is_empty() in
the check_toolchain (or equivalent) function and replace the silent success with
an Err that includes context about an unreadable or unparsable
toolchain.pin/channel so CI will catch it; keep doctor as the only place that
downgrades this to a warning.
In `@xtask/src/tasks/ci_audit_workflows.rs`:
- Around line 124-149: Change the three test functions
(pr_trigger_matches_map_form, pr_trigger_matches_list_form,
pr_trigger_rejects_non_pr_trigger) to return Result<()> and replace the
.expect(...) calls on serde_yaml_ng::from_str with the ? operator so parse
errors propagate instead of panicking; keep the assert! checks against
has_pr_trigger(&workflow) and finish each test with Ok(()) to satisfy the
Result<()> signature.
In `@xtask/src/tasks/ci_policy.rs`:
- Around line 40-41: Replace the fallible Regex::new(...) usage for
FROM_RAW_PATTERN and ALLOWED_FROM_RAW_PATTERN with Option<Regex> so
initialization becomes let disallow_re = Regex::new(FROM_RAW_PATTERN).ok() and
let allowed_re = Regex::new(ALLOWED_FROM_RAW_PATTERN).ok(); then update the
filtering logic (the code path that uses disallow_re and allowed_re to
accept/reject items) to handle None by treating missing regex as "no-op" (i.e.,
skip that pattern check) so the code gracefully degrades when a regex fails to
compile.
- Around line 30-35: The current check uses if let Some(code) =
output.status.code() { ... } which ignores the None case when the process was
terminated by a signal; change this to explicitly handle the None branch and
treat it as an error: match on output.status.code() and for None bail with a
clear message (e.g. "git grep terminated by signal") including the stderr
(String::from_utf8_lossy(&output.stderr).trim()) and still keep the existing
bail for Some(code) when code > 1; update the block around output.status.code(),
output.stderr and bail!() in ci_policy.rs accordingly.
In `@xtask/src/tasks/hardening.rs`:
- Around line 9-12: The scripts are being executed with the caller's cwd, so
update execution to use project_root() as the working directory: in
security_hardening() (and the analogous functions that call run_script with
&script, &[]), pass or apply the root path as the command's current_dir; either
change run_script to accept a cwd parameter (e.g. run_script(&script, &[],
&root)) or ensure run_script uses .current_dir(root) when spawning the process
so the shell scripts always run from project_root() instead of the caller's cwd.
In `@xtask/src/tasks/hook_checks.rs`:
- Line 114: Change the strict Regex::new(...) call for ts_re to use
Option<Regex> by calling Regex::new(...).ok() so compilation failure doesn't
abort the hook; update the variable name (ts_re) where it's declared and make
assert_regex invocation conditional—only call assert_regex with ts_re.as_ref()
(or skip the timestamp assertion) when the Option is Some—this ensures graceful
degradation per guidelines.
In `@xtask/src/tasks/populate_book.rs`:
- Around line 12-16: Remove the unnecessary clone of the PathBuf before passing
it to Command::arg: change Command::arg(script.clone()) to pass a reference
(e.g., Command::arg(&script) or Command::arg(script.as_ref())) so that the
PathBuf is not cloned; this affects the call that builds the process (the
Command::new("bash") ... .arg(...) invocation where the local variable script is
used).
In `@xtask/src/tasks/publish.rs`:
- Around line 210-227: Wrap both subprocess .status() calls with anyhow::Context
so spawn/IO errors include context and make the non-success branches include the
relevant identifiers; for the GitHub dispatch (the Command::new("gh") call that
sets args including "Publish to crates.io", ref_name, version and dry_run) call
.status().context("dispatching GitHub workflow 'Publish to crates.io'")? and
change the bail! to include version and ref_name (e.g. bail!("publish-release
workflow dispatch failed for version={} ref={}", version, ref_name)); do the
same for the other subprocess status() (lines 233-245) by adding a Context
message naming the script/command and changing its bail! to include the script
path or command and version so CI logs contain the exact command and version
that failed.
In `@xtask/src/tasks/validate_workspace_exclusions.rs`:
- Around line 177-239: The current check_member_dependencies function uses a
regex in has_excluded_dependency_reference which misses valid Cargo.toml
dependency forms (table headers like [dependencies.foo], target-specific tables,
and renamed deps via package = "..."); replace the grep approach by parsing each
manifest with a TOML parser (e.g., toml::Value or cargo_toml) inside
check_member_dependencies and walk the dependency tables (dependencies,
dev-dependencies, build-dependencies, and all target.*.dependencies) for each
member crate: for each dependency entry (whether a string, inline table, or
table under a named header), determine the effective package name by checking
the key and, if the value is a table, the package field, then compare that
effective name against excluded (same HashSet) and record offenders; remove
has_excluded_dependency_reference or refactor it to accept parsed toml::Value
and perform the same structured checks.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 770277f6-33f3-4029-ab2f-a3d40949189f
📒 Files selected for processing (85)
.ci/scripts/README.md.ci/scripts/measure-ci-baseline.shCONTRIBUTING.mdCargo.tomldocs/CHANGELOG_WORKFLOW.mddocs/PUBLISHING.mddocs/RELEASE_PROCESS.mddocs/handoff/swarm-pack/commands/green-merge.mddocs/project/CI_COST_TRACKING.mddocs/project/CI_TEST_LANES.mddocs/project/PUBLISHING_ROADMAP.mddocs/project/RELEASE_CHECKLIST.mddocs/project/XTASK_MIGRATION.mdjustfilescripts/DEPRECATED_RELEASE_SCRIPTS.mdscripts/build-timing-receipt.shscripts/check-doc-claims.pyscripts/check-version-sync.shscripts/check_features_invariants.pyscripts/ci-audit-workflows.pyscripts/ci-cost-monitor.shscripts/cleanup-worktrees.shscripts/compare-build-timing.shscripts/dead-code-check.shscripts/debt-pr-summary.pyscripts/debt-report.pyscripts/devex-doctor.shscripts/devex-targeted-checks.shscripts/e2e-gate.shscripts/forbid-fatal-constructs.shscripts/generate-badges.shscripts/ignored-test-count.shscripts/install-githooks.shscripts/preflight.shscripts/prep-crates-io-launch.shscripts/prepare-release.shscripts/publish-receipts.shscripts/publish-release.shscripts/quick-receipts.shscripts/run_parser_comparison.shscripts/smoke-test-release.shscripts/swarm-summary.shscripts/test-capped.shscripts/test-e2e-capped.shscripts/test-lsp-cancellation.shscripts/test_edge_cases.shscripts/test_iterative_parser.shscripts/update-current-status.pyscripts/update-homebrew.shscripts/update-parser-matrix.pyscripts/validate-workspace-exclusions.shscripts/verify_stacker.shxtask/src/main.rsxtask/src/tasks/benchmarks.rsxtask/src/tasks/build_timing.rsxtask/src/tasks/check_toolchain.rsxtask/src/tasks/check_version_sync.rsxtask/src/tasks/ci_audit_workflows.rsxtask/src/tasks/ci_hygiene.rsxtask/src/tasks/ci_measure.rsxtask/src/tasks/ci_metrics.rsxtask/src/tasks/ci_policy.rsxtask/src/tasks/corpus_audit.rsxtask/src/tasks/debt_report.rsxtask/src/tasks/devex_doctor.rsxtask/src/tasks/doc_claims.rsxtask/src/tasks/features.rsxtask/src/tasks/forbid_fatal_constructs.rsxtask/src/tasks/forensics.rsxtask/src/tasks/github.rsxtask/src/tasks/hardening
.rsxtask/src/tasks/hook_checks.rsxtask/src/tasks/inject_sha_assets.rsxtask/src/tasks/mod.rsxtask/src/tasks/parser_matrix.rsxtask/src/tasks/populate_book.rsxtask/src/tasks/prep_crates_io_launch.rsxtask/src/tasks/publication_facts.rsxtask/src/tasks/publish.rsxtask/src/tasks/publish_receipts.rsxtask/src/tasks/release_turnkey.rsxtask/src/tasks/swarm_summary.rsxtask/src/tasks/update_homebrew.rsxtask/src/tasks/validate_workspace_exclusions.rsxtask/src/tasks/worktrees.rs
💤 Files with no reviewable changes (1)
- Cargo.toml
| pr=<number> | ||
| gh pr merge "$pr" --squash \ | ||
| --subject "feat(scope): <imperative summary>" \ | ||
| --body "Merge PR #$pr | ||
|
|
||
| - <change 1> | ||
| - <change 2>" |
There was a problem hiding this comment.
Use a shell-safe placeholder for the PR number.
Line 26 is not copy-paste safe: pr=<number> is parsed as input redirection, not a placeholder assignment. Use a literal example or a variable name instead.
Suggested fix
-pr=<number>
+pr=1234 # replace with the PR number🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/handoff/swarm-pack/commands/green-merge.md` around lines 26 - 32, The
snippet uses an unsafe placeholder `pr=<number>` which is interpreted by the
shell as input redirection; replace it with a shell-safe placeholder or example
value and reference the variable in the merge command (e.g., set a quoted
variable like PR="<PR_NUMBER>" or PR=123 and then call `gh pr merge "$PR"
--squash ...`) so the `gh pr merge` invocation uses a valid variable (`PR` or
similar) instead of `pr=<number>`.
| ci-gate-with-receipts: | ||
| @echo "🚪 Running fast merge gate with receipts..." | ||
| @mkdir -p .receipts/$(date +%Y%m%d) | ||
| @RECEIPT_DIR=".receipts/$(date +%Y%m%d)" bash -c '\ | ||
| ./scripts/execute-gate.sh workflow-audit --receipt-dir "$RECEIPT_DIR" && \ | ||
| ./scripts/execute-gate.sh no-nested-lock --receipt-dir "$RECEIPT_DIR" && \ | ||
| ./scripts/execute-gate.sh format --receipt-dir "$RECEIPT_DIR" && \ | ||
| ./scripts/execute-gate.sh clippy-lib --receipt-dir "$RECEIPT_DIR" && \ | ||
| ./scripts/execute-gate.sh test-lib --receipt-dir "$RECEIPT_DIR" && \ | ||
| ./scripts/execute-gate.sh policy --receipt-dir "$RECEIPT_DIR" && \ | ||
| ./scripts/execute-gate.sh lsp-definition --receipt-dir "$RECEIPT_DIR" \ | ||
| ' | ||
| @for gate in workflow-audit no-nested-lock format clippy-lib test-lib policy lsp-definition; do \ | ||
| cargo xtask gates --gate "$${gate}" --receipt --receipt-path ".receipts/$(date +%Y%m%d)/$${gate}.json" || exit 1; \ | ||
| done | ||
| @echo "✅ Merge gate passed with receipts!" | ||
| @echo "📁 Receipts: .receipts/$(date +%Y%m%d)/" | ||
|
|
||
| # Gate execution for individual gate (with receipt) | ||
| gate-execute gate_id: | ||
| @./scripts/execute-gate.sh {{gate_id}} --receipt-dir .receipts/$(date +%Y%m%d) | ||
| @cargo xtask gates --gate "{{gate_id}}" --receipt --receipt-path ".receipts/$(date +%Y%m%d)/{{gate_id}}.json" |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Date expansion may be inconsistent across loop iterations.
The $(date +%Y%m%d) is expanded at loop execution time in shell. If the loop spans midnight, receipt files could end up in different directories. Consider capturing the date once:
✨ Suggested fix
ci-gate-with-receipts:
`@echo` "🚪 Running fast merge gate with receipts..."
- `@mkdir` -p .receipts/$(date +%Y%m%d)
+ `@RECEIPT_DATE`=$(date +%Y%m%d); \
+ mkdir -p .receipts/$$RECEIPT_DATE; \
`@for` gate in workflow-audit no-nested-lock format clippy-lib test-lib policy lsp-definition; do \
- cargo xtask gates --gate "$${gate}" --receipt --receipt-path ".receipts/$(date +%Y%m%d)/$${gate}.json" || exit 1; \
+ cargo xtask gates --gate "$${gate}" --receipt --receipt-path ".receipts/$$RECEIPT_DATE/$${gate}.json" || exit 1; \
done🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@justfile` around lines 1591 - 1602, Capture the date once into a shell
variable before using it in the loop to avoid inconsistent expansion across
iterations: in the ci-gate-with-receipts recipe set DATE="$(date +%Y%m%d)" and
then use "$${DATE}" for mkdir and in the cargo xtask receipt-path (and similarly
update the gate-execute gate_id recipe to use "$${DATE}" instead of calling date
inline). Reference the ci-gate-with-receipts and gate-execute (gate_id) recipes
and the loop variable gate when making this change.
| if __name__ == "__main__": | ||
| raise SystemExit(main()) | ||
| repo_root = Path(__file__).resolve().parents[1] | ||
| raise SystemExit( | ||
| subprocess.call(["cargo", "xtask", "features", "invariants"], cwd=repo_root) | ||
| ) |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
cat -n scripts/check_features_invariants.pyRepository: EffortlessMetrics/perl-lsp
Length of output: 486
🏁 Script executed:
# Search for other scripts using subprocess in similar ways
fd -e py -type f | xargs rg -l "subprocess\.(call|run|Popen)" | head -20Repository: EffortlessMetrics/perl-lsp
Length of output: 587
🏁 Script executed:
# Check if there are other xtask shims or similar patterns
rg "xtask" --type py -B 2 -A 2Repository: EffortlessMetrics/perl-lsp
Length of output: 3597
🏁 Script executed:
# Look at git history or comments about this pattern
rg "subprocess.call|os.exec" --type py -B 2 -A 2Repository: EffortlessMetrics/perl-lsp
Length of output: 2052
🏁 Script executed:
python3 << 'EOF'
import subprocess
import os
import sys
# Document the behavior of subprocess.call with signals
print("=== subprocess.call Signal Handling ===")
print("subprocess.call() returns:")
print("- Positive int: normal exit code")
print("- Negative int: exit code is -signal_number (if killed by signal)")
print("")
# Check if os.execvp would be available
print("=== os.execv availability ===")
print(f"os.execv available: {hasattr(os, 'execv')}")
print(f"os.execvp available: {hasattr(os, 'execvp')}")
print("")
# Show difference between approaches
print("=== Comparison ===")
print("subprocess.call approach:")
print("- Creates child process, waits for it")
print("- Exit codes and signals converted to integer")
print("- Parent process survives child termination")
print("")
print("os.execv approach:")
print("- Replaces current process with new executable")
print("- Same PID for the executable")
print("- Signals directly affect replaced process")
print("- More efficient for shims (no extra process)")
EOFRepository: EffortlessMetrics/perl-lsp
Length of output: 677
Replace subprocess.call delegation with os.execvp in all shim scripts.
This pattern appears in 7 scripts (check_features_invariants.py, update-parser-matrix.py, update-current-status.py, debt-report.py, debt-pr-summary.py, ci-audit-workflows.py, check-doc-claims.py). Shim scripts should use os.execvp() instead of subprocess.call() wrapped in SystemExit() to avoid creating an unnecessary intermediate process and to properly handle signals that target the shim. Replace with os.execvp("cargo", ["cargo", "xtask", "features", "invariants"]) after setting the working directory.
🧰 Tools
🪛 Ruff (0.15.6)
[error] 12-12: Starting a process with a partial executable path
(S607)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/check_features_invariants.py` around lines 9 - 13, The current shim
uses raise SystemExit(subprocess.call(...)) which spawns an extra process and
mishandles signals; replace this pattern by changing the working directory to
repo_root (use os.chdir(repo_root)) and then exec the cargo binary with
os.execvp so the shim is replaced by cargo (call os.execvp("cargo", ["cargo",
"xtask", "features", "invariants"])). Update the block that defines repo_root
and the final raise SystemExit(...) to perform os.chdir(repo_root) followed by
os.execvp, removing subprocess.call and SystemExit usage.
| #!/usr/bin/env bash | ||
| # Clean up stale agent worktrees from previous sessions | ||
| # Safe: only removes worktrees under .claude/worktrees/ that are not the current session | ||
|
|
||
| set -euo pipefail | ||
|
|
||
| echo "=== Worktree Cleanup ===" | ||
|
|
||
| # Prune references to deleted worktrees | ||
| git worktree prune | ||
|
|
||
| # Count current worktrees (excluding main and /tmp) | ||
| STALE=$(git worktree list | grep -c '.claude/worktrees/' || echo 0) | ||
| echo "Found $STALE agent worktrees" | ||
|
|
||
| if [[ "$STALE" -eq 0 ]]; then | ||
| echo "No stale worktrees to clean up" | ||
| exit 0 | ||
| fi | ||
|
|
||
| # Remove each worktree | ||
| git worktree list | grep '.claude/worktrees/' | awk '{print $1}' | while read -r wt; do | ||
| echo "Removing: $wt" | ||
| git worktree remove --force "$wt" 2>/dev/null || rm -rf "$wt" | ||
| done | ||
| SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" | ||
| REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)" | ||
| cd "$REPO_ROOT" | ||
|
|
||
| # Final prune | ||
| git worktree prune | ||
| echo "Cleanup complete" | ||
| exec cargo xtask worktree-cleanup |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
LGTM - proper repo-root resolution pattern.
This shim correctly resolves SCRIPT_DIR/REPO_ROOT and cds before invoking xtask. Consider adding "$@" to forward any future CLI arguments for consistency with other shims:
-exec cargo xtask worktree-cleanup
+exec cargo xtask worktree-cleanup "$@"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| #!/usr/bin/env bash | |
| # Clean up stale agent worktrees from previous sessions | |
| # Safe: only removes worktrees under .claude/worktrees/ that are not the current session | |
| set -euo pipefail | |
| echo "=== Worktree Cleanup ===" | |
| # Prune references to deleted worktrees | |
| git worktree prune | |
| # Count current worktrees (excluding main and /tmp) | |
| STALE=$(git worktree list | grep -c '.claude/worktrees/' || echo 0) | |
| echo "Found $STALE agent worktrees" | |
| if [[ "$STALE" -eq 0 ]]; then | |
| echo "No stale worktrees to clean up" | |
| exit 0 | |
| fi | |
| # Remove each worktree | |
| git worktree list | grep '.claude/worktrees/' | awk '{print $1}' | while read -r wt; do | |
| echo "Removing: $wt" | |
| git worktree remove --force "$wt" 2>/dev/null || rm -rf "$wt" | |
| done | |
| SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" | |
| REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)" | |
| cd "$REPO_ROOT" | |
| # Final prune | |
| git worktree prune | |
| echo "Cleanup complete" | |
| exec cargo xtask worktree-cleanup | |
| #!/usr/bin/env bash | |
| set -euo pipefail | |
| SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" | |
| REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)" | |
| cd "$REPO_ROOT" | |
| exec cargo xtask worktree-cleanup "$@" |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/cleanup-worktrees.sh` around lines 1 - 8, The script correctly
resolves SCRIPT_DIR and REPO_ROOT and cds into the repo, but it doesn't forward
CLI args; update the exec invocation (the final line invoking cargo xtask
worktree-cleanup) to append "$@" so any arguments passed to the shim are
forwarded to cargo xtask; keep SCRIPT_DIR and REPO_ROOT resolution intact.
| if [ "${DEAD_CODE_STRICT:-false}" = "true" ] && [[ "$*" != *"--strict"* ]]; then | ||
| set -- "$@" --strict | ||
| fi |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Minor: String pattern matching for --strict may have edge cases.
The check [[ "$*" != *"--strict"* ]] performs substring matching on the concatenated argument string. This could theoretically produce false positives if an argument contained "strict" as a substring (e.g., --non-strict-mode). A more robust approach would be to iterate over "$@":
📝 Optional improvement
-if [ "${DEAD_CODE_STRICT:-false}" = "true" ] && [[ "$*" != *"--strict"* ]]; then
+has_strict=false
+for arg in "$@"; do
+ [[ "$arg" == "--strict" ]] && has_strict=true
+done
+if [ "${DEAD_CODE_STRICT:-false}" = "true" ] && [ "$has_strict" = "false" ]; then
set -- "$@" --strict
fiThat said, the current implementation is pragmatic for this use case where --strict is unlikely to appear as a substring in other valid arguments.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| if [ "${DEAD_CODE_STRICT:-false}" = "true" ] && [[ "$*" != *"--strict"* ]]; then | |
| set -- "$@" --strict | |
| fi | |
| has_strict=false | |
| for arg in "$@"; do | |
| [[ "$arg" == "--strict" ]] && has_strict=true | |
| done | |
| if [ "${DEAD_CODE_STRICT:-false}" = "true" ] && [ "$has_strict" = "false" ]; then | |
| set -- "$@" --strict | |
| fi |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/dead-code-check.sh` around lines 11 - 13, The substring check using
[[ "$*" != *"--strict"* ]] can misfire if any argument contains "strict" as part
of it; instead iterate over the positional parameters to check for the exact
flag and only append --strict if none match. Update the block that uses
DEAD_CODE_STRICT and the check for "--strict" (references: DEAD_CODE_STRICT,
--strict, "$*" and "$@") to loop through "$@" and compare each arg exactly
(e.g., eq to "--strict") before calling set -- "$@" --strict.
```rust
let manifest_path = Path::new(&package.manifest_path);
let crate_dir = match manifest_path.parent() {
    Some(dir) => dir,
    None => continue,
};
let rel_dir = crate_dir.strip_prefix(workspace_root).unwrap_or(crate_dir);

args.push(format!("--config=patch.crates-io.{}.path={}", package.name, rel_dir.display()));
```
🧩 Analysis chain
🌐 Web query:
In Cargo's `--config KEY=VALUE` CLI syntax, must string path values such as `patch.crates-io.foo.path` be quoted as TOML strings?
💡 Result:
Yes, string path values such as patch.crates-io.foo.path must be quoted as TOML strings in Cargo's --config KEY=VALUE CLI syntax.
Citations:
- 1: https://doc.rust-lang.org/cargo/reference/config.html
- 2: https://doc.rust-lang.org/cargo/reference/overriding-dependencies.html
- 3: https://doc.rust-lang.org/cargo/reference/overriding-dependencies.html?highlight=patch
- 4: https://gatowololo.github.io/blog/cargo-patch/
🏁 Script executed:
```sh
# First, let's examine the actual code at lines 103-110
cat -n xtask/src/tasks/prep_crates_io_launch.rs | sed -n '103,110p'
```

Repository: EffortlessMetrics/perl-lsp
Length of output: 498
🏁 Script executed:
```sh
# Check the full function context to understand what this code does
cat -n xtask/src/tasks/prep_crates_io_launch.rs | sed -n '95,120p'
```

Repository: EffortlessMetrics/perl-lsp
Length of output: 1070
🏁 Script executed:
```sh
# Check if toml crate is available in dependencies
find . -name "Cargo.toml" -type f | head -5 | xargs grep -l "toml" 2>/dev/null
```

Repository: EffortlessMetrics/perl-lsp
Length of output: 104
🏁 Script executed:
```sh
# Search for other uses of --config in the codebase to see patterns
rg "\-\-config" --type rs -A 2 -B 2
```

Repository: EffortlessMetrics/perl-lsp
Length of output: 96
🏁 Script executed:
```sh
# Let's check the xtask Cargo.toml to see what dependencies are available
cat xtask/Cargo.toml
```

Repository: EffortlessMetrics/perl-lsp
Length of output: 1525
🏁 Script executed:
```sh
# Search for other --config usages (fixing the command)
rg "\-\-config" -A 2 -B 2
```

Repository: EffortlessMetrics/perl-lsp
Length of output: 9070
🏁 Script executed:
```sh
# Let's also verify Cargo documentation behavior with a simpler test - check if there are any tests or examples
rg "patch.crates-io" -A 2 -B 2
```

Repository: EffortlessMetrics/perl-lsp
Length of output: 1226
🏁 Script executed:
```sh
# Check what imports are at the top of the file
head -30 xtask/src/tasks/prep_crates_io_launch.rs
```

Repository: EffortlessMetrics/perl-lsp
Length of output: 732
Quote the generated --config path value.
Line 110 generates --config=patch.crates-io.{}.path=... without quoting the path, but Cargo's --config KEY=VALUE parser requires TOML-quoted string values for patch.crates-io.*.path. Add quotes around the path value to fix this. The codebase already demonstrates the correct pattern in crates/perl-ci-hygiene/src/main.rs and scripts/cargo-package-workspace-dry-run.sh.
🔧 Suggested fix
```diff
- args.push(format!("--config=patch.crates-io.{}.path={}", package.name, rel_dir.display()));
+ args.push(format!("--config=patch.crates-io.{}.path=\"{}\"", package.name, rel_dir.display()));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/prep_crates_io_launch.rs` around lines 103 - 110, The
generated --config value needs the path quoted for Cargo's KEY=VALUE parser:
update the args.push call that builds the config string (the format! invoking
package.name and rel_dir.display()) to wrap the path portion in TOML-style
quotes (e.g. ...path="...") so the path is passed as a quoted string; keep using
package.name and rel_dir.display() but ensure the path is enclosed in quotes in
the formatted value.
```rust
sha = command_output_or_unknown(&root, &["git", "rev-parse", "HEAD"], "UNVERIFIED"),
rustc = command_output_or_unknown(&root, &["rustc", "--version"], "UNVERIFIED"),
cargo = command_output_or_unknown(&root, &["cargo", "--version"], "UNVERIFIED"),
host = command_output_or_unknown(&root, &["uname", "-a"], "UNVERIFIED"),
copied = copied,
);
```
🧹 Nitpick | 🔵 Trivial
uname -a will fail on Windows.
The uname command is Unix-specific and will fail or produce "UNVERIFIED" on Windows. If cross-platform support is intended, consider using std::env::consts::OS and std::env::consts::ARCH as a fallback, or document this as a Unix-only tool.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/publish_receipts.rs` around lines 81 - 86, The call that sets
host currently uses command_output_or_unknown(&root, &["uname", "-a"],
"UNVERIFIED") which will fail on Windows; change the host assignment in
publish_receipts.rs to prefer the uname output when available but fall back to a
cross-platform value using std::env::consts::OS and std::env::consts::ARCH (or
combine them) when uname fails or returns "UNVERIFIED", i.e., update the host
expression that sets the host variable so it attempts
command_output_or_unknown(...) and if that equals "UNVERIFIED" uses format!("{}
{}", std::env::consts::OS, std::env::consts::ARCH) instead (or document
Unix-only behavior).
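The fallback that prompt describes can be sketched as follows; `host_description` is a hypothetical helper, not the PR's `command_output_or_unknown`, and simply prefers `uname -a` when it succeeds:

```rust
use std::process::Command;

// Hypothetical fallback: use `uname -a` where it exists and succeeds,
// otherwise record the compile-time platform constants so Windows still
// gets a meaningful value instead of "UNVERIFIED".
fn host_description() -> String {
    Command::new("uname")
        .arg("-a")
        .output()
        .ok()
        .filter(|out| out.status.success())
        .and_then(|out| String::from_utf8(out.stdout).ok())
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .unwrap_or_else(|| format!("{} {}", std::env::consts::OS, std::env::consts::ARCH))
}

fn main() {
    let host = host_description();
    // Either branch yields a non-empty description.
    assert!(!host.is_empty());
    println!("{host}");
}
```

On Unix hosts this returns the familiar `uname -a` line; elsewhere it degrades to something like `windows x86_64`.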
```rust
let workspace_dependencies = manifest
    .get("workspace")
    .and_then(|workspace| workspace.get("dependencies"))
    .and_then(Value::as_table);

if let Some(workspace_dependencies) = workspace_dependencies {
    let excluded = excluded_set();
    let offending = workspace_dependencies
        .iter()
        .filter_map(|(name, value)| {
            if !excluded.contains(name.as_str()) {
                return None;
            }

            if name.as_str() == "tree-sitter-perl" {
                if value.as_table().and_then(|value| value.get("path")).and_then(Value::as_str)
                    == Some("crates/tree-sitter-perl-rs")
                {
                    return None;
                }
            }

            Some(name.as_str())
        })
        .collect::<Vec<_>>();
```
Inspect renamed packages in [workspace].dependencies.
This only rejects entries whose table key is an excluded crate name. Cargo also allows aliases like parser = { package = "tree-sitter-perl-c", ... } and [workspace.dependencies.parser], which currently pass even though they still pull in an excluded crate. Check both the dependency key and any nested package field before declaring workspace.dependencies clean.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/validate_workspace_exclusions.rs` around lines 103 - 127, The
filter in the workspace_dependencies iterator only checks the dependency key
against excluded_set() and misses aliases where value is a table with package =
"excluded-name"; update the closure used to build offending (in the block using
workspace_dependencies, excluded_set(), and offending) to also inspect
value.as_table().and_then(|t| t.get("package")).and_then(Value::as_str) and
treat the dependency as offending if either the key name or that nested package
string is in excluded_set(); preserve the existing special-case for
"tree-sitter-perl" that allows the specific path "crates/tree-sitter-perl-rs" by
checking the package string/path before returning Some(name.as_str()).
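The dual check can be sketched with a plain `Option<&str>` standing in for the nested `package` rename; in the real task that value would come from `value.as_table().and_then(|t| t.get("package")).and_then(Value::as_str)`:

```rust
// Sketch: a workspace dependency counts as excluded if either its table key
// or a `package = "..."` rename points at an excluded crate.
fn is_excluded_dependency(key: &str, renamed: Option<&str>, excluded: &[&str]) -> bool {
    excluded.contains(&key) || renamed.map_or(false, |pkg| excluded.contains(&pkg))
}

fn main() {
    let excluded = ["tree-sitter-perl", "tree-sitter-perl-c"];
    // Direct entry caught by its table key.
    assert!(is_excluded_dependency("tree-sitter-perl-c", None, &excluded));
    // Alias like `parser = { package = "tree-sitter-perl-c", ... }` caught by the rename.
    assert!(is_excluded_dependency("parser", Some("tree-sitter-perl-c"), &excluded));
    // Unrelated dependencies still pass.
    assert!(!is_excluded_dependency("serde", None, &excluded));
}
```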
```rust
#[test]
fn workspace_exclude_values_reads_workspace_table() -> Result<()> {
    let manifest: Value = toml::from_str(
        r#"
exclude = ["wrong-root-level-entry"]

[workspace]
exclude = ["tree-sitter-perl", "crates/tree-sitter-perl-c", "fuzz"]
"#,
    )?;

    let exclude = workspace_exclude_values(&manifest)?;

    assert_eq!(exclude, vec!["tree-sitter-perl", "crates/tree-sitter-perl-c", "fuzz"]);
    Ok(())
}

#[test]
fn missing_required_excluded_paths_ignores_optional_entries() {
    let exclude_values = vec!["tree-sitter-perl", "crates/tree-sitter-perl-c", "fuzz"];

    let missing = missing_required_excluded_directories(&exclude_values);

    assert!(missing.is_empty());
}

#[test]
fn missing_required_excluded_paths_flags_missing_required_path() {
    let exclude_values = vec!["tree-sitter-perl", "crates/tree-sitter-perl-c"];

    let missing = missing_required_excluded_directories(&exclude_values);

    assert_eq!(missing, vec!["fuzz".to_string()]);
}

#[test]
fn manifest_is_in_excluded_directory_uses_manifest_parent() -> Result<()> {
    let manifest = Path::new("/repo/crates/tree-sitter-perl-c/Cargo.toml");
    let manifest_dir =
        manifest.parent().ok_or_else(|| eyre!("Expected manifest path to have a parent"))?;

    assert!(manifest_dir.ends_with("crates/tree-sitter-perl-c"));
    assert!(!manifest.ends_with("crates/tree-sitter-perl-c"));
    assert!(manifest_is_in_excluded_directory("/repo/crates/tree-sitter-perl-c/Cargo.toml"));
    Ok(())
}

#[test]
fn excluded_workspace_member_names_flags_members_by_manifest_directory() {
    let excluded_id =
        String::from("tree-sitter-perl-c 0.1.0 (path+file:///repo/crates/tree-sitter-perl-c)");
    let metadata = Metadata {
        workspace_members: vec![excluded_id.clone()],
        packages: vec![
            MetadataPackage {
                id: excluded_id,
                name: String::from("tree-sitter-perl-c"),
                manifest_path: String::from("/repo/crates/tree-sitter-perl-c/Cargo.toml"),
            },
            MetadataPackage {
                id: String::from("other 0.1.0 (path+file:///repo/crates/other)"),
                name: String::from("other"),
                manifest_path: String::from("/repo/crates/other/Cargo.toml"),
            },
        ],
    };

    let offending = excluded_workspace_member_names(&metadata);

    assert_eq!(offending, vec!["tree-sitter-perl-c".to_string()]);
}
```
🧹 Nitpick | 🔵 Trivial
Use the repository's Result<()> test style for these cases.
These new tests still rely on panicking assertions, and several of them return (). Please convert them to Result<()>-returning tests and the preferred helpers where they fit.
As per coding guidelines, "In tests, prefer Result<()> and helpers such as perl_tdd_support::must and must_some instead of panicking assertions".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/validate_workspace_exclusions.rs` around lines 307 - 377,
Convert the panic-based tests to the repository test style by changing their
signatures to return Result<()> and using the perl_tdd_support helpers instead
of assert!/assert_eq!: for
missing_required_excluded_paths_ignores_optional_entries and
missing_required_excluded_paths_flags_missing_required_path change fn ...() to
fn ...() -> Result<()> and replace assert!(missing.is_empty()) /
assert_eq!(missing, vec![...]) with the appropriate must()/must_eq()/must_some
helpers (or wrap the result in perl_tdd_support::must to assert non-panics and
compare values via must/equals), and for
excluded_workspace_member_names_flags_members_by_manifest_directory change it to
return Result<()> and use must to assert offending equals the expected vec;
finish each test with Ok(()) so they follow the Result<()> test pattern.
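A minimal sketch of that `Result`-returning shape, with a stand-in implementation inferred from the test expectations above; `must_eq` is a hypothetical helper mirroring the idea behind `perl_tdd_support::must`, whose real signature is not shown in this PR:

```rust
use std::error::Error;

type TestResult = Result<(), Box<dyn Error>>;

// Stand-in inferred from the tests above: report which required exclude
// entries are absent. The real function may also consult the filesystem.
fn missing_required_excluded_directories(exclude_values: &[&str]) -> Vec<String> {
    ["tree-sitter-perl", "crates/tree-sitter-perl-c", "fuzz"]
        .into_iter()
        .filter(|required| !exclude_values.contains(required))
        .map(String::from)
        .collect()
}

// Hypothetical helper: compare values and surface a mismatch as an Err
// instead of panicking.
fn must_eq<T: PartialEq + std::fmt::Debug>(actual: T, expected: T) -> TestResult {
    if actual == expected {
        Ok(())
    } else {
        Err(format!("expected {expected:?}, got {actual:?}").into())
    }
}

// The panicking test, rewritten to return Result instead of asserting.
fn missing_required_excluded_paths_flags_missing_required_path() -> TestResult {
    let missing =
        missing_required_excluded_directories(&["tree-sitter-perl", "crates/tree-sitter-perl-c"]);
    must_eq(missing, vec!["fuzz".to_string()])
}

fn main() -> TestResult {
    missing_required_excluded_paths_flags_missing_required_path()
}
```

The key change is the signature: a failing comparison propagates as an `Err` through `?` rather than aborting the test harness with a panic.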
```rust
let list_output = Command::new("git")
    .current_dir(&root)
    .args(["worktree", "list"])
    .stdout(Stdio::piped())
    .output()?;
if !list_output.status.success() {
    bail!("failed to list git worktrees");
}

let list = str::from_utf8(&list_output.stdout)?;
let stale = list
    .lines()
    .filter_map(|line| {
        if line.contains(".claude/worktrees/") { line.split_whitespace().next() } else { None }
    })
    .collect::<Vec<_>>();
```
Parse git worktree list as machine-readable output before deleting anything.
split_whitespace().next() truncates paths containing spaces. With the fallback remove_dir_all(worktree), a worktree like /Users/Alice Smith/repo/.claude/worktrees/foo becomes /Users/Alice, which can delete the wrong directory. Use git worktree list --porcelain/-z and parse worktree <path> records instead of the human-readable format.
Also applies to: 45-50
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@xtask/src/tasks/worktrees.rs` around lines 18 - 33, The current parsing of
git worktree output uses split_whitespace().next() on the human-readable output
(see Command::new("git") producing list_output and the subsequent list -> stale
collection) which truncates paths containing spaces and can delete the wrong
directory; change the invocation to use machine-readable output (e.g., git
worktree list --porcelain or git worktree list -z) and parse "worktree <path>"
records (or null-delimited entries) to extract the full path strings safely,
replacing the split_whitespace().next() logic used when building stale and the
similar parsing at the deletion site so deletions operate on the complete,
untruncated paths (use the "worktree" token to identify path records).
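The porcelain-based parsing described above can be sketched as a pure function, assuming the caller switches the invocation to `git worktree list --porcelain`; each record there begins with a `worktree <path>` line, so stripping that fixed prefix preserves paths containing spaces:

```rust
// Sketch: extract stale worktree paths from `git worktree list --porcelain`
// output. Paths with spaces survive intact because we strip a fixed prefix
// instead of splitting on whitespace.
fn stale_worktree_paths(porcelain: &str, marker: &str) -> Vec<String> {
    porcelain
        .lines()
        .filter_map(|line| line.strip_prefix("worktree "))
        .filter(|path| path.contains(marker))
        .map(str::to_string)
        .collect()
}

fn main() {
    let porcelain = "worktree /Users/Alice Smith/repo/.claude/worktrees/foo\n\
                     HEAD 0123456789abcdef\n\
                     branch refs/heads/foo\n\
                     \n\
                     worktree /Users/Alice Smith/repo\n\
                     HEAD fedcba9876543210\n\
                     branch refs/heads/master\n";
    let stale = stale_worktree_paths(porcelain, ".claude/worktrees/");
    // The full path survives, spaces and all; the main worktree is skipped.
    assert_eq!(stale, vec!["/Users/Alice Smith/repo/.claude/worktrees/foo".to_string()]);
    println!("{stale:?}");
}
```

Keeping the parser separate from the `Command` invocation also makes this logic unit-testable without a real repository.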
Why this change

- CI and devops tooling moves out of shell scripts and into `xtask` commands, making `xtask` the canonical entrypoint.

What changed

- Legacy scripts are now thin wrappers that exec into `xtask` tasks.
- New CLI in `xtask/src/main.rs` and task modules under `xtask/src/tasks/`.
- Fixes applied in `xtask`:
  - `xtask/src/tasks/inject_sha_assets.rs`: fixed JSON map construction.
  - `xtask/src/tasks/inject_sha_assets.rs` and `xtask/src/tasks/update_homebrew.rs`: switched raw-string delimiters to prevent Ruby interpolation from terminating Rust string literals.
  - `xtask/src/tasks/devex_doctor.rs`: removed a temporary borrow from UTF-8 parsing.
  - `xtask/src/tasks/doc_claims.rs`: fixed move semantics in the markdown scan loop.
  - `xtask/src/tasks/hook_checks.rs`: fixed move semantics when checking registered hook scripts.
  - `xtask/src/tasks/validate_workspace_exclusions.rs`: removed an unused binding.

Validation status