Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings, or use the review commands and checkboxes for quick actions.
📝 Walkthrough

Adds optional `Task.definition_path` and composed-definition metadata; implements cycle-safe recursive discovery for Makefile and Taskfile includes and recursive Turbo `extends` resolution; centralizes allowlist evaluation to use definition paths; updates parsers, listing/prompt/DTO/server flows, and tests to preserve source attribution.

Changes

Composed Task Definition & Recursive Discovery
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI / list / prompt
    participant Discovery as task_discovery
    participant Parser as parse_makefile / parse_taskfile / parse_turbo_json
    participant State as RecursiveDiscoveryState
    participant Composed as ComposedDefinitionSource
    CLI->>Discovery: request project discovery/list
    Discovery->>Composed: build root composed source
    Discovery->>State: init traversal state
    Discovery->>Parser: parse root file/config
    Parser-->>Discovery: tasks + include directives
    Discovery->>State: mark_visited(definition_path)
    loop for each include
        Discovery->>Composed: resolve_child(include_path)
        Composed-->>Discovery: child composed source
        Discovery->>State: mark_visited(child.definition_path)
        alt New
            Discovery->>Parser: parse included file
            Parser-->>Discovery: child tasks + includes
            Discovery->>Discovery: recurse
        else AlreadyVisited
            State-->>Discovery: skip (cycle)
        end
    end
    Discovery-->>CLI: aggregated tasks with definition_path metadata
    CLI->>CLI: render list using task.definition_path()
```
```mermaid
sequenceDiagram
    participant CLI as allow-command / MCP
    participant AllowEval as evaluate_task_against_allowlist
    participant Allowlist as Allowlist store
    participant Task as Task
    CLI->>AllowEval: evaluate permission for Task
    AllowEval->>Task: call task.allowlist_path()
    Task-->>AllowEval: definition_path or file_path
    AllowEval->>Allowlist: check deny entries (deny-first)
    alt Denied
        AllowEval-->>CLI: Denied
    else
        AllowEval->>Allowlist: check allow entries (scoped)
        alt Allowed
            AllowEval-->>CLI: Allowed
        else
            AllowEval-->>CLI: NotFound
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Poem
🚥 Pre-merge checks | ✅ Passed checks (5 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
📝 Generate docstrings
🧪 Generate unit tests (beta)
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 Microsoft Presidio Analyzer (2.2.362)
- src/parsers/parse_turbo_json.rs: Microsoft Presidio Analyzer failed to scan this file
- src/task_discovery.rs: Microsoft Presidio Analyzer failed to scan this file

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/types.rs (1)
214-214: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

`TaskRunner::Act` must use `task.definition_path()` instead of `task.file_path`.

act's `-W` flag specifies the directory (or file) containing the workflow. With the semantic model introduced by this PR, `file_path` is now the runner execution context (e.g., a root wrapper file), while `definition_path()` returns the actual workflow file path, falling back to `file_path` when not set.

In a composed GitHub Actions scenario where `definition_path` is `Some(...)`, `act -W <file_path>` will point at the wrong file and fail silently. The fallback in `definition_path()` means changing to it is fully backward-compatible for non-composed tasks.

🐛 Proposed fix

```diff
- TaskRunner::Act => format!("act -W {}", task.file_path.display()),
+ TaskRunner::Act => format!("act -W {}", task.definition_path().display()),
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/types.rs` at line 214, Replace the use of task.file_path when building the act command for TaskRunner::Act with task.definition_path() (which falls back to file_path when None) so the -W flag points to the actual workflow file; update the TaskRunner::Act arm that currently does format!("act -W {}", task.file_path.display()) to call task.definition_path().display() instead to handle composed workflows correctly.
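The fallback semantics this comment relies on can be sketched as follows. This is an illustrative model only: the field and method names mirror the review text, not dela's verified API.

```rust
use std::path::{Path, PathBuf};

// Simplified stand-in for dela's Task; the real struct has more fields.
struct Task {
    file_path: PathBuf,               // runner execution context (e.g., root wrapper)
    definition_path: Option<PathBuf>, // file that actually defines the task, if composed
}

impl Task {
    // Falls back to file_path when no composed definition is recorded,
    // which is why switching call sites to definition_path() stays
    // backward-compatible for non-composed tasks.
    fn definition_path(&self) -> &Path {
        self.definition_path.as_deref().unwrap_or(&self.file_path)
    }
}

fn main() {
    let plain = Task {
        file_path: PathBuf::from("Makefile"),
        definition_path: None,
    };
    let composed = Task {
        file_path: PathBuf::from("Makefile"),
        definition_path: Some(PathBuf::from("build/included.mk")),
    };
    assert_eq!(plain.definition_path(), Path::new("Makefile"));
    assert_eq!(composed.definition_path(), Path::new("build/included.mk"));
    println!("ok");
}
```

Under this model, `act -W {task.definition_path().display()}` resolves to the included workflow file when one exists and to `file_path` otherwise.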
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/composed_paths.rs`:
- Around line 27-35: The function signature and body formatting for pub fn
composed(...) in composed (the composed constructor using normalize_path for
runner_path and definition_path) doesn't match rustfmt; run cargo fmt to
reformat the file (or manually apply rustfmt-style changes) so the pub fn
composed(runner_path: impl Into<PathBuf>, definition_path: impl Into<PathBuf>)
-> Self { ... } block conforms to rustfmt formatting conventions and CI will
pass.
- Around line 221-232: The test formatting for
test_composed_source_resolves_children_relative_to_definition_file doesn't match
rustfmt: reformat the ComposedDefinitionSource::composed(...) call and
surrounding lines (in that test) to match rustfmt style (wrap/indent arguments
and method calls consistently) and run cargo fmt to update the file; ensure the
ComposedDefinitionSource::composed invocation and the assert_eq! block (and
source.resolve_child call) follow rustfmt output so CI passes.
In `@src/parsers/parse_makefile.rs`:
- Around line 128-134: The condition block using
remainder.chars().next().is_some_and(char::is_whitespace) is misformatted;
reformat that if-statement to match rustfmt expectations (e.g., adjust
indentation/line breaks so the if
!remainder.chars().next().is_some_and(char::is_whitespace) { continue; }
expression is formatted correctly) and then run cargo fmt to apply canonical
formatting across src/parsers/parse_makefile.rs so CI formatting checks pass.
- Around line 141-142: The formatting of the statement creating path_tokens with
shell_words::split(...) is not rustfmt-compliant; run cargo fmt (or rustfmt) to
reflow/wrap the line so it matches project formatting standards—specifically
reformat the expression using
shell_words::split(comment_stripped).unwrap_or_else(|_|
split_include_fallback(comment_stripped)) around the let path_tokens binding to
satisfy CI.
- Around line 222-227: The function split_include_fallback has formatting that
fails CI; run rustfmt/cargo fmt and reformat the function body so the chained
iterator calls are collapsed onto fewer lines (e.g., keep the function signature
on one line and the chained calls compacted) — locate the split_include_fallback
function and apply rustfmt to make the
.split_whitespace().map(str::to_string).collect() chain formatted onto fewer
lines to match project style.
- Around line 5-6: The imports are split for PathBuf and Path which fails
rustfmt ordering; replace the two separate declarations with a single grouped
import like use std::path::{Path, PathBuf}; so combine the symbols PathBuf and
Path into one use std::path::{...} statement (locate the existing use
std::path::PathBuf and use std::path::Path declarations in parse_makefile.rs and
merge them).
In `@src/task_discovery.rs`:
- Around line 337-340: The formatting of the push/format statement in the task
discovery code is off and fails cargo fmt; locate the block where
discovered.errors.push(format!("Failed to parse {}: {}",
makefile_path.display(), error)); is used alongside TaskFileStatus::ParseError
and reformat that expression (or run cargo fmt) so it conforms to rustfmt style
(e.g., break arguments across lines or adjust spacing) and then run cargo fmt to
ensure the file compiles cleanly.
In `@tests/docker_noinit/test_noinit.sh`:
- Around line 405-443: Test 22c leaves the temporary make_include_project test
directory behind; after the assertions add a cleanup step to delete the created
project and artifacts to avoid polluting subsequent tests. Specifically, at the
end of the Test 22c block in tests/docker_noinit/test_noinit.sh (the Test 22c /
make_include_project section) remove the created make_include_project directory
and any generated files like make_include_list.txt (e.g., rm -rf
make_include_project and rm -f make_include_list.txt) so the test environment is
restored for later tests.
---
Outside diff comments:
In `@src/types.rs`:
- Line 214: Replace the use of task.file_path when building the act command for
TaskRunner::Act with task.definition_path() (which falls back to file_path when
None) so the -W flag points to the actual workflow file; update the
TaskRunner::Act arm that currently does format!("act -W {}",
task.file_path.display()) to call task.definition_path().display() instead to
handle composed workflows correctly.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: d138ec0e-9732-4f7f-8033-d0b7711e4a9d
📒 Files selected for processing (25)
- dev_docs/project_plan.md
- src/allowlist.rs
- src/commands/list.rs
- src/composed_paths.rs
- src/lib.rs
- src/main.rs
- src/mcp/allowlist.rs
- src/mcp/dto.rs
- src/mcp/server.rs
- src/parsers/parse_cmake.rs
- src/parsers/parse_docker_compose.rs
- src/parsers/parse_github_actions.rs
- src/parsers/parse_gradle.rs
- src/parsers/parse_justfile.rs
- src/parsers/parse_makefile.rs
- src/parsers/parse_package_json.rs
- src/parsers/parse_pom_xml.rs
- src/parsers/parse_pyproject_toml.rs
- src/parsers/parse_taskfile.rs
- src/parsers/parse_travis_ci.rs
- src/parsers/parse_turbo_json.rs
- src/prompt.rs
- src/task_discovery.rs
- src/types.rs
- tests/docker_noinit/test_noinit.sh
Actionable comments posted: 7
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/composed_paths.rs`:
- Line 9: Remove the stale "DTKT-198/200/201 will consume…" comments and the
misleading #[allow(dead_code)] annotations that reference those completed
tickets; specifically, locate the #[allow(dead_code)] attributes that precede
ComposedDefinitionSource and RecursiveDiscoveryState (and the other similar
occurrences in the same file) and either delete the comment text or remove the
#[allow(dead_code)] completely if the types are actively used, leaving no
misleading ticket references in comments.
In `@src/parsers/parse_turbo_json.rs`:
- Around line 24-31: TurboConfig currently only exposes task_names() (which
filters by is_effective_task_definition()) so tasks declared with "extends:
false" are omitted and inherited parents aren't removed; add a new method
non_inheriting_task_names(&self) -> impl Iterator<Item=&String> that iterates
self.tasks and yields names where the task explicitly sets extends: false (the
inverse/explicit-suppression case), and then update
resolve_effective_turbo_tasks in task_discovery.rs to call that method after the
or_insert inheritance step and remove those names from the inherited tasks map
(e.g., for name in config.non_inheriting_task_names() { tasks.remove(name); })
so workspaces can opt out of inherited tasks.
- Around line 85-100: The non-object fallback in parse_task_config wrongly sets
has_local_configuration: true for values like null; change the fallback to
return inherits: true and has_local_configuration: false so non-object task
definitions are treated as "no local config, only inherit". Update the return in
parse_task_config's else branch to reflect has_local_configuration = false and
keep the existing logic for object cases (inspect "extends" and object.keys()).
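The fallback rule the comment asks for can be modeled minimally. The types below are illustrative stand-ins (a plain enum instead of the crate's real serde values), not dela's actual `parse_task_config` signature.

```rust
#[derive(Debug, PartialEq)]
struct TaskConfig {
    inherits: bool,
    has_local_configuration: bool,
}

// Stand-in for a JSON task entry: either an object (with an optional
// "extends" flag and some number of keys) or any other shape.
enum TaskValue {
    Object { extends: Option<bool>, key_count: usize },
    Other, // null, bool, string, number, array, ...
}

fn parse_task_config(value: &TaskValue) -> TaskConfig {
    match value {
        TaskValue::Object { extends, key_count } => TaskConfig {
            // "extends": false explicitly opts out of inheritance.
            inherits: extends.unwrap_or(true),
            has_local_configuration: *key_count > 0,
        },
        // Non-object values carry no local config: inherit-only, per the fix.
        TaskValue::Other => TaskConfig {
            inherits: true,
            has_local_configuration: false,
        },
    }
}

fn main() {
    assert_eq!(
        parse_task_config(&TaskValue::Other),
        TaskConfig { inherits: true, has_local_configuration: false }
    );
    assert_eq!(
        parse_task_config(&TaskValue::Object { extends: Some(false), key_count: 1 }),
        TaskConfig { inherits: false, has_local_configuration: true }
    );
    println!("ok");
}
```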
In `@src/task_discovery.rs`:
- Around line 538-549: The function collect_taskfile_tasks_recursive has too
many parameters and triggers clippy::too-many-arguments; refactor by introducing
a single traversal/context struct (e.g., CollectContext or
RecursiveDiscoveryContext) that bundles root_taskfile_path, current_source,
namespace_prefix, include_label, hide_tasks, excluded_tasks, traversal_state
(RecursiveDiscoveryState), seen_task_names, collected_tasks, and include_errors,
then change collect_taskfile_tasks_recursive signature to take &mut
CollectContext (and any small immutable args that truly belong separately),
update all recursive calls to pass the same &mut context or clone only small
pieces as needed, move any helper logic that manipulates the bundled fields into
methods on the context to keep recursion readable, and run clippy/tests to
ensure the warning is resolved and behavior unchanged.
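The suggested context-struct refactor can be sketched like this. Field names follow the review's parameter list but the types are simplified assumptions; the real context would also carry `root_taskfile_path`, `include_label`, `hide_tasks`, `excluded_tasks`, and `seen_task_names`.

```rust
use std::collections::HashSet;
use std::path::PathBuf;

// Bundles the traversal state that was previously nine-plus parameters.
struct CollectContext {
    namespace_prefix: String,
    collected_tasks: Vec<String>,
    include_errors: Vec<String>,
    visited: HashSet<PathBuf>, // stands in for RecursiveDiscoveryState
}

// The recursion now threads a single &mut context, which also clears
// the clippy::too_many_arguments lint.
fn collect_recursive(ctx: &mut CollectContext, current: PathBuf) {
    if !ctx.visited.insert(current.clone()) {
        // The cycle guard lives on the context.
        ctx.include_errors.push(format!("cycle at {}", current.display()));
        return;
    }
    ctx.collected_tasks
        .push(format!("{}task-from-{}", ctx.namespace_prefix, current.display()));
    // A real implementation would parse `current` here and recurse into
    // each include, passing the same &mut ctx.
}

fn main() {
    let mut ctx = CollectContext {
        namespace_prefix: "inc:".to_string(),
        collected_tasks: Vec::new(),
        include_errors: Vec::new(),
        visited: HashSet::new(),
    };
    collect_recursive(&mut ctx, PathBuf::from("Taskfile.yml"));
    collect_recursive(&mut ctx, PathBuf::from("Taskfile.yml")); // revisit: cycle skipped
    assert_eq!(ctx.collected_tasks.len(), 1);
    assert_eq!(ctx.include_errors.len(), 1);
    println!("tasks: {}, cycle errors: {}", ctx.collected_tasks.len(), ctx.include_errors.len());
}
```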
- Around line 595-601: The loop that iterates `for include in includes`
currently skips missing includes unconditionally; change it to honor the
include's optional flag: after computing `resolved_candidate` and
`resolved_include`, if `!resolved_include.is_file()` then if `include.optional`
continue, else return or propagate a parsing error (e.g.,
Err(ParseError::MissingInclude(...))) so required includes are not silently
dropped; update the code paths that call this loop to handle the new error
return (or mark the Taskfile as not Parsed) and use
`parse_taskfile::resolve_taskfile_include_path`, `resolved_candidate`, and
`resolved_include` to construct a clear error message.
- Around line 381-386: The loop currently ignores MakefileInclude.optional and
treats missing includes as optional; update the branch that checks
resolved_include.is_file() to respect the include.optional flag from the parsed
MakefileInclude: if resolved_include.is_file() is false and include.optional is
false, return an error or mark the overall parse as failed (instead of
continuing) so required includes cause parse failure; if include.optional is
true, continue as before. Locate the loop over includes (variable includes), the
MakefileInclude struct/field optional, and the call
current_source.resolve_child(&include.path)/resolved_include.is_file() to
implement this change.
In `@tests/docker_noinit/test_noinit.sh`:
- Around line 149-155: The failure handling around the external command dela
list should be moved into the conditional so set -e won't abort before your
custom stderr handling; change the current pattern that runs dela list then
checks $? to an if ! dela list > task_include_list.txt 2>
task_include_stderr.txt; then ... fi form, and apply the same change to the
other occurrence that writes task_include_list.txt/task_include_stderr.txt (the
second block referenced in the comment). Keep the red error echo, cat
task_include_stderr.txt and exit 1 inside the then block so the stderr dump runs
on failure.
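The `set -e` interaction described above can be demonstrated in isolation: a bare failing command aborts the script immediately, so any stderr dump after it never runs, while a command tested inside `if !` is exempt from `set -e`. Here `ls /nonexistent` stands in for the failing `dela list`.

```shell
#!/bin/sh
set -e
# Tested inside `if !`, the failure does not abort the script, so the
# diagnostic branch below actually runs.
if ! ls /nonexistent > out.txt 2> err.txt; then
    echo "command failed, captured stderr:"
    cat err.txt
    # a real test would `exit 1` here after dumping diagnostics
fi
echo "script reached cleanup"
rm -f out.txt err.txt
```

Written as a bare `ls /nonexistent > out.txt 2> err.txt` statement instead, the script would exit at that line and never print the captured stderr.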
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8dc00a29-820a-4019-942f-93a4b82b4da7
📒 Files selected for processing (12)
- dev_docs/project_plan.md
- src/commands/allow_command.rs
- src/commands/list.rs
- src/composed_paths.rs
- src/mcp/dto.rs
- src/parsers/parse_makefile.rs
- src/parsers/parse_taskfile.rs
- src/parsers/parse_turbo_json.rs
- src/task_discovery.rs
- tests/docker_noinit/test_noinit.sh
- tests/task_definitions/turbo_repo/apps/web/package.json
- tests/task_definitions/turbo_repo/apps/web/turbo.json
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/task_discovery.rs`:
- Around line 922-940: The loop in
collect_descendant_turbo_config_paths_recursive currently uses
entry.path().is_dir() which follows symlinks and can cause infinite recursion;
change it to inspect the DirEntry's file type (call entry.file_type()) and skip
entries where file_type.is_symlink() or where file_type indicates not a
directory, handling any file_type() errors by continuing, then proceed to use
path and the existing turbo.json checks and recursive call only for real
directories (non-symlink directories).
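The symlink-safe guard being requested can be sketched with the standard library: `DirEntry::file_type()` reports the entry itself and does not follow symlinks, unlike `Path::is_dir()`, which does and can therefore loop forever through a symlink cycle. Function and path names here are illustrative, not dela's exact code.

```rust
use std::fs;
use std::path::{Path, PathBuf};

fn collect_turbo_configs(dir: &Path, found: &mut Vec<PathBuf>) {
    let Ok(entries) = fs::read_dir(dir) else { return };
    for entry in entries.flatten() {
        // Skip entries whose file type cannot be read.
        let Ok(file_type) = entry.file_type() else { continue };
        if file_type.is_symlink() || !file_type.is_dir() {
            continue; // only recurse into real (non-symlink) directories
        }
        let path = entry.path();
        let candidate = path.join("turbo.json");
        if candidate.is_file() {
            found.push(candidate);
        }
        collect_turbo_configs(&path, found);
    }
}

fn main() {
    // Build a tiny tree in a temp dir to exercise the walk.
    let root = std::env::temp_dir().join("turbo_walk_demo");
    let _ = fs::remove_dir_all(&root);
    fs::create_dir_all(root.join("apps/web")).unwrap();
    fs::write(root.join("apps/web/turbo.json"), "{}").unwrap();
    let mut found = Vec::new();
    collect_turbo_configs(&root, &mut found);
    assert_eq!(found.len(), 1);
    println!("found {} turbo.json file(s)", found.len());
}
```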
- Around line 369-390: The current code collects all tasks from the file then
skips duplicates with seen_task_names (first-win), which breaks Make semantics
where later includes override earlier definitions; update collection to be
last-win: when iterating tasks (from parse_makefile::parse and when returning
from collect_makefile_tasks_recursive), if a task.name already exists in the
collection (collected_tasks / seen_task_names), replace the existing task entry
(and its definition_path/metadata) with the new one instead of skipping it.
Concretely, change the seen_task_names.insert(...) check and push-only behavior
to use a map (e.g., HashMap<String, Task>) or search collected_tasks for
task.name and overwrite that entry; keep use of current_source.apply_to_task,
ComposedDefinitionSource::composed, resolve_child, and
extract_include_directives as-is so metadata stays correct.
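The name-to-index map suggested above gives last-win semantics in a few lines. `Task` is a simplified stand-in for illustration; the real type carries more metadata.

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Task {
    name: String,
    definition_path: String,
}

// A later definition overwrites the earlier slot in place, matching GNU
// Make's "last definition wins" rule, while preserving insertion order.
fn insert_last_win(tasks: &mut Vec<Task>, index: &mut HashMap<String, usize>, task: Task) {
    if let Some(&i) = index.get(&task.name) {
        tasks[i] = task; // override: definition_path and metadata follow the new entry
    } else {
        index.insert(task.name.clone(), tasks.len());
        tasks.push(task);
    }
}

fn main() {
    let mut tasks = Vec::new();
    let mut index = HashMap::new();
    insert_last_win(&mut tasks, &mut index,
        Task { name: "build".into(), definition_path: "Makefile".into() });
    insert_last_win(&mut tasks, &mut index,
        Task { name: "build".into(), definition_path: "included.mk".into() });
    assert_eq!(tasks.len(), 1);
    assert_eq!(tasks[0].definition_path, "included.mk");
    println!("build defined in {}", tasks[0].definition_path);
}
```

This contrasts with the current first-win behavior, where `seen_task_names.insert` rejects the second `build` and the metadata keeps pointing at the root Makefile.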
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: b3187a8e-e096-4ee8-b854-16784b1f2e11
📒 Files selected for processing (2)
- src/composed_paths.rs
- src/task_discovery.rs
Actionable comments posted: 4
♻️ Duplicate comments (1)
tests/docker_noinit/test_noinit.sh (1)
451-464: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Wrap `dela list` invocations in the `if !` condition (same pattern as Tests 6b and 22c).

Lines 451 and 477 still use the bare `dela list > file 2> stderr` pattern without `if !`. Because the script runs with `set -e`, a non-zero exit from `dela list` aborts the script before the subsequent `if [ -s ..._stderr.txt ]` blocks (lines 460-464, 486-490) can dump the captured stderr. Use the same `if !` style already adopted in Tests 6b (line 150) and 22c (line 561) so the diagnostic output is actually printed on failure.

🔧 Suggested fix

```diff
-dela list > turbo_root_list_output.txt 2> turbo_root_stderr.txt
-if grep -q "web-lint" turbo_root_list_output.txt && grep -q "apps/web/turbo.json" turbo_root_list_output.txt; then
+if ! dela list > turbo_root_list_output.txt 2> turbo_root_stderr.txt; then
+  echo "${RED}✗ dela list failed from the turbo repo root${NC}"
+  cat turbo_root_stderr.txt
+  exit 1
+fi
+if grep -q "web-lint" turbo_root_list_output.txt && grep -q "apps/web/turbo.json" turbo_root_list_output.txt; then

-dela list > turbo_package_list_output.txt 2> turbo_package_stderr.txt
-if grep -q "build-t" turbo_package_list_output.txt && grep -q "web-lint-t" turbo_package_list_output.txt && ! grep -q "test-t" turbo_package_list_output.txt; then
+if ! dela list > turbo_package_list_output.txt 2> turbo_package_stderr.txt; then
+  echo "${RED}✗ dela list failed from the nested turbo package${NC}"
+  cat turbo_package_stderr.txt
+  exit 1
+fi
+if grep -q "build-t" turbo_package_list_output.txt && grep -q "web-lint-t" turbo_package_list_output.txt && ! grep -q "test-t" turbo_package_list_output.txt; then
```

Also applies to: 477-490
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/docker_noinit/test_noinit.sh` around lines 451 - 464, Replace the bare `dela list > turbo_root_list_output.txt 2> turbo_root_stderr.txt` (and the analogous invocation in the later block) with the same `if ! ...; then` pattern used in Tests 6b and 22c so that a non-zero exit does not abort the script before diagnostics are printed; specifically, wrap the `dela list` invocation that writes to `turbo_root_list_output.txt` and `turbo_root_stderr.txt` (and the second block) in `if ! dela list > ... 2> ...; then` and inside the then branch print/cat `turbo_root_stderr.txt` and exit with failure so the stderr is preserved for debugging.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/parsers/parse_turbo_json.rs`:
- Around line 52-62: The current extraction of tasks silently treats
present-but-wrong-shaped "tasks" or "pipeline" values as empty by chaining
.get(...).and_then(Value::as_object). Change the logic around the tasks
variable: first check whether json.get("tasks") or json.get("pipeline") exists;
if it exists but Value::as_object() returns None, return a descriptive Err (or
emit a warning) indicating the key is present but not an object and include the
actual JSON type; otherwise proceed to map the object entries using
parse_task_config as before. Use the same symbols (json.get("tasks"),
json.get("pipeline"), Value::as_object, parse_task_config, tasks) so reviewers
can locate and update the code and error message consistently.
In `@tests/docker_noinit/test_noinit.sh`:
- Around line 396-403: The redirection on the dela allow-command invocation
hides stderr (echo "2" | dela allow-command test-a >/dev/null 2>&1), so failures
are opaque; update the test to preserve or capture stderr like Test 22c does:
send stdout to /dev/null but redirect stderr to a temporary file (or remove the
2>&1), then in the failure branch cat that stderr file before exiting (or echo
it), and clean up the temp file; refer to the command symbol allow-command and
the test harness pattern used in Test 22c for the exact capture-and-dump
approach.
- Around line 204-211: The test currently asserts an exact error string via
duplicate_output and grep which is brittle; change the check to assert stable
substrings instead (e.g., verify duplicate_output contains "multiple tasks" AND
"build" AND "shared"). Locate the duplicate_output assignment and the subsequent
if that echoes "$duplicate_output" | grep -q 'Found multiple tasks (build)
included by "shared"', and replace that single grep with either chained greps or
a single combined grep/egrep that checks for the three stable tokens ("multiple
tasks", "build", "shared") so the test passes even if wording/quoting changes.
- Around line 116-168: The test fixture declares a non-optional include key
"missing-required" but the discovery code in src/task_discovery.rs (around the
include resolution logic) silently skips missing include files; update the test
fixture in tests/docker_noinit/test_noinit.sh by adding optional: true for the
"missing-required: ./missing" entry in the generated
/home/testuser/task_include_project/Taskfile.yml (or rename the key to reflect
optionality) so the Taskfile matches actual behavior; alternatively, if you
prefer to enforce the original intent, implement validation in the include
resolution path in src/task_discovery.rs to emit an error or fail when a
resolved include file is missing and its optional flag is false.
---
Duplicate comments:
In `@tests/docker_noinit/test_noinit.sh`:
- Around line 451-464: Replace the bare `dela list > turbo_root_list_output.txt
2> turbo_root_stderr.txt` (and the analogous invocation in the later block) with
the same `if ! ...; then` pattern used in Tests 6b and 22c so that a non-zero
exit does not abort the script before diagnostics are printed; specifically,
wrap the `dela list` invocation that writes to `turbo_root_list_output.txt` and
`turbo_root_stderr.txt` (and the second block) in `if ! dela list > ... 2> ...;
then` and inside the then branch print/cat `turbo_root_stderr.txt` and exit with
failure so the stderr is preserved for debugging.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: cfbec451-4d20-4ad2-8e09-db70e8a43c09
📒 Files selected for processing (2)
- src/parsers/parse_turbo_json.rs
- tests/docker_noinit/test_noinit.sh
♻️ Duplicate comments (5)
src/task_discovery.rs (3)
609-611: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Required Taskfile includes are still silently skipped.

Mirror of the Makefile case: Lines 609–611 drop missing includes unconditionally, ignoring `TaskfileInclude.optional`. A required (non-`optional: true`) include that resolves to a non-existent file produces a partial task set while the Taskfile is reported as `Parsed`.

The new test `test_discover_tasks_with_missing_required_taskfile_include` (Lines 2632–2673) only exercises an explicitly-`optional: true` include, so the required path is untested. If the contract is "missing required includes must error," please add coverage there too.

🔧 Suggested fix

```diff
-        if !resolved_include.is_file() {
-            continue;
-        }
+        if !resolved_include.is_file() {
+            if include.optional {
+                continue;
+            }
+
+            let error = format!(
+                "Missing included Taskfile '{}'",
+                resolved_include.display()
+            );
+            traversal.include_errors.push(error.clone());
+            if first_error.is_none() {
+                first_error = Some(error);
+            }
+            continue;
+        }
```

Also applies to: 2632-2673
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/task_discovery.rs` around lines 609 - 611, The code currently skips non-existent includes unconditionally by continuing when !resolved_include.is_file(); change this to check the TaskfileInclude.optional flag (from the parsed include struct) and treat missing required includes as an error instead of silently continuing: if include.optional is true, continue as before, but if include.optional is false return/propagate a parsing/discovery error (so the Taskfile is not reported as Parsed). Update the task discovery path that uses resolved_include and TaskfileInclude.optional accordingly and ensure the new test (test_discover_tasks_with_missing_required_taskfile_include) or equivalent covers the required (non-optional) missing-include case.
384-386: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Required Makefile includes are still silently skipped.

The same issue raised previously remains: the missing-include branch at Lines 384–386 unconditionally `continue`s, ignoring `include.optional`. As a result, `include missing.mk` (required) and `-include missing.mk` (optional) behave identically — the file is silently dropped and the Makefile is reported as `Parsed`.

The new test `test_discover_tasks_with_missing_required_included_makefile` (Lines 1751–1769) currently encodes this defect as "expected behavior" by asserting `discovered.errors.is_empty()` and `Parsed` status. Once the fix lands, that test (and the corresponding integration test in `test_noinit.sh` Test 22c) will need to expect a diagnostic instead.

🔧 Suggested fix

```diff
-        if !resolved_include.is_file() {
-            continue;
-        }
+        if !resolved_include.is_file() {
+            if include.optional {
+                continue;
+            }
+
+            let error = format!(
+                "Missing included makefile '{}'",
+                resolved_include.display()
+            );
+            include_errors.push(error.clone());
+            if first_error.is_none() {
+                first_error = Some(error);
+            }
+            continue;
+        }
```

Also applies to: 1751-1769
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/task_discovery.rs` around lines 384 - 386, The loop that currently does "if !resolved_include.is_file() { continue; }" must respect the include's optional flag: check include.optional and only continue when true; when include.optional is false, record a diagnostic/error into the discovery result (e.g., push a meaningful error into discovered.errors) and mark the Makefile parse as failed (not "Parsed") so callers see the missing required include; locate the check around resolved_include and update the branch to use include.optional, writing a clear error referencing the missing include path/name so tests like test_discover_tasks_with_missing_required_included_makefile will surface the diagnostic.
369-414: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Make include order: first-win deduplication still inverts GNU Make semantics.

Same concern as previously raised. `collect_makefile_tasks_recursive` collects all targets from the current file (Lines 369–377), then traverses includes (Lines 381–408). With `seen_task_names` rejecting later inserts, the earliest-encountered definition wins. In GNU Make, includes are parsed where they appear and later definitions override earlier ones, so when both the root Makefile and an included file define the same target, `definition_path` and metadata will currently point at the wrong recipe.

This isn't exercised by `test_duplicate_task_names_from_included_makefile_use_definition_path` (different target names per file). Adding a test where the root and an included makefile both define `build` (with the include appearing before the root definition vs. after) would lock down the chosen semantics.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/task_discovery.rs` around lines 369 - 414, The current logic in collect_makefile_tasks_recursive parses and adds all tasks from the current file (parse_makefile::parse + current_source.apply_to_task -> collected_tasks) before recursing into includes and uses seen_task_names to drop later duplicates, which makes earlier definitions win; change to GNU Make semantics by making deduplication "last-win" and by treating includes in parse-order: replace seen_task_names with a HashMap<String, usize> (or similar) mapping task name to its index in collected_tasks so when a task with an existing name is encountered you overwrite the existing collected_tasks[index] (updating its definition_path/metadata), and ensure includes from parse_makefile::extract_include_directives are processed at the correct point relative to tasks (i.e., traverse includes where they appear in the file or at least before applying tasks that should override included definitions); update references to seen_task_names, collected_tasks, and collect_makefile_tasks_recursive accordingly.

tests/docker_noinit/test_noinit.sh (2)
113-216: 🧹 Nitpick | 🔵 Trivial | 💤 Low value

Test fixtures `task_include_project`, `task_include_duplicates`, and `make_include_project` are not cleaned up.

Tests 6b, 6c, and 22c each create a top-level project under `/home/testuser/` and leave it (along with `task_include_list.txt`, `make_include_list.txt`, etc.) behind. The Docker harness is throwaway, but residual fixtures still cross-contaminate later assertions if they grep over `/home/testuser/.config/dela/allowlist.toml` or run `dela list` from a parent path. Adding a `rm -rf` after each block (mirroring Test 23's cleanup at Line 842) keeps state isolation predictable.

Also applies to: 547-616
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/docker_noinit/test_noinit.sh` around lines 113 - 216, Tests 6b and 6c leave behind fixtures and temp files (task_include_project, task_include_duplicates, make_include_project and files like task_include_list.txt, make_include_list.txt) which can contaminate later tests; add cleanup commands to remove those directories and files at the end of each test block (after the assertions for Test 6b, 6c and the corresponding make include block), e.g. mirror the pattern used in the existing Test 23 cleanup by appending rm -rf /home/testuser/task_include_project /home/testuser/task_include_duplicates /home/testuser/make_include_project and remove any generated list/stderr files (task_include_list.txt, task_include_stderr.txt, make_include_list.txt, etc.) so state is isolated between tests; ensure the cleanup runs regardless of success by placing it just before changing back to /home/testuser/test_project or in a trap if necessary.
460-498: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
`dela list` and `allow-command` invocations bypass the failure-handler under `set -e`.

The pattern fixed previously at Lines 150 and 570 reappears here:

- Lines 460, 486: `dela list > … 2> …` runs as a top-level statement, so under `set -e` a non-zero exit aborts the script before the subsequent `[ -s … ]`/`cat` blocks ever execute. The "✗ dela emitted errors …" diagnostics will never fire on actual list failure.
- Lines 475, 607: `echo "2" | dela allow-command … >/dev/null 2>&1` discards stderr entirely; on failure under `set -e` the user sees nothing but the script aborting.

🔧 Suggested fix pattern
```diff
-dela list > turbo_root_list_output.txt 2> turbo_root_stderr.txt
+if ! dela list > turbo_root_list_output.txt 2> turbo_root_stderr.txt; then
+  echo "${RED}✗ dela list failed at the turbo repo root${NC}"
+  cat turbo_root_stderr.txt
+  exit 1
+fi
```

```diff
-echo "2" | dela allow-command web-lint >/dev/null 2>&1
+web_lint_stderr=$(mktemp)
+if ! echo "2" | dela allow-command web-lint >/dev/null 2>"$web_lint_stderr"; then
+  echo "${RED}✗ allow-command failed for web-lint${NC}"
+  cat "$web_lint_stderr"
+  rm -f "$web_lint_stderr"
+  exit 1
+fi
+rm -f "$web_lint_stderr"
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/docker_noinit/test_noinit.sh` around lines 460 - 498, The dela list and echo | dela allow-command calls can abort the script under set -e before your stderr checks run and you also discard allow-command errors; change them to capture exit status and stderr without causing an immediate exit (e.g. run the command redirecting stdout/stderr to files and suffix with || true, or run inside if ...; then ... fi) so the subsequent "[ -s turbo_root_stderr.txt ]"/"[ -s turbo_package_stderr.txt ]" checks and the allowlist file inspection still execute; specifically update the invocations of "dela list" that write turbo_root_list_output.txt/turbo_root_stderr.txt and turbo_package_list_output.txt/turbo_package_stderr.txt and the "echo \"2\" | dela allow-command" that currently hides stderr (and writes to /home/testuser/.config/dela/allowlist.toml) so you preserve error capture and then test the captured exit code or stderr content instead of letting set -e abort.
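The capture-without-aborting pattern the prompt describes can be sketched as follows; `flaky_cmd` is a hypothetical stand-in for a failing `dela` invocation, not part of dela:

```shell
#!/bin/sh
set -e
# flaky_cmd stands in for a command (like `dela list`) that writes
# to stdout and stderr and exits non-zero.
flaky_cmd() { echo "listing tasks"; echo "warning: bad include" >&2; return 3; }

# `|| status=$?` keeps set -e from aborting on failure while still
# recording the real exit code; stdout/stderr land in files for inspection.
status=0
flaky_cmd > /tmp/demo_out.txt 2> /tmp/demo_err.txt || status=$?
echo "exit status: $status"

# The stderr check now actually runs, even though the command failed.
if [ -s /tmp/demo_err.txt ]; then
    echo "stderr captured: $(cat /tmp/demo_err.txt)"
fi
```

Note that `|| status=$?` preserves the real exit code, whereas a bare `|| true` only suppresses the abort; either form keeps `set -e` from killing the script before the `[ -s … ]` checks.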
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Duplicate comments:
In `@src/task_discovery.rs`:
- Around line 609-611: The code currently skips non-existent includes
unconditionally by continuing when !resolved_include.is_file(); change this to
check the TaskfileInclude.optional flag (from the parsed include struct) and
treat missing required includes as an error instead of silently continuing: if
include.optional is true, continue as before, but if include.optional is false
return/propagate a parsing/discovery error (so the Taskfile is not reported as
Parsed). Update the task discovery path that uses resolved_include and
TaskfileInclude.optional accordingly and ensure the new test
(test_discover_tasks_with_missing_required_taskfile_include) or equivalent
covers the required (non-optional) missing-include case.
- Around line 384-386: The loop that currently does "if
!resolved_include.is_file() { continue; }" must respect the include's optional
flag: check include.optional and only continue when true; when include.optional
is false, record a diagnostic/error into the discovery result (e.g., push a
meaningful error into discovered.errors) and mark the Makefile parse as failed
(not "Parsed") so callers see the missing required include; locate the check
around resolved_include and update the branch to use include.optional, writing a
clear error referencing the missing include path/name so tests like
test_discover_tasks_with_missing_required_included_makefile will surface the
diagnostic.
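The optional-vs-required branching described in the two prompts above lands in Rust, but the rule itself can be sketched language-neutrally in shell (the function name and paths are hypothetical):

```shell
#!/bin/sh
# Hypothetical model of the include check: skip a missing include only
# when it was marked optional; otherwise surface a discovery error.
check_include() {
    path=$1; optional=$2
    if [ ! -f "$path" ]; then
        if [ "$optional" = "true" ]; then
            echo "skipping optional include: $path"
            return 0
        fi
        echo "error: required include missing: $path" >&2
        return 1
    fi
    echo "parsing include: $path"
}

check_include /tmp/definitely_missing.mk true
check_include /tmp/definitely_missing.mk false 2>/dev/null || echo "discovery marked as failed"
```

The key asymmetry is that only the required branch propagates a non-zero status, which is what lets the caller stop reporting the file as "Parsed".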
- Around line 369-414: The current logic in collect_makefile_tasks_recursive
parses and adds all tasks from the current file (parse_makefile::parse +
current_source.apply_to_task -> collected_tasks) before recursing into includes
and uses seen_task_names to drop later duplicates, which makes earlier
definitions win; change to GNU Make semantics by making deduplication "last-win"
and by treating includes in parse-order: replace seen_task_names with a
HashMap<String, usize> (or similar) mapping task name to its index in
collected_tasks so when a task with an existing name is encountered you
overwrite the existing collected_tasks[index] (updating its
definition_path/metadata), and ensure includes from
parse_makefile::extract_include_directives are processed at the correct point
relative to tasks (i.e., traverse includes where they appear in the file or at
least before applying tasks that should override included definitions); update
references to seen_task_names, collected_tasks, and
collect_makefile_tasks_recursive accordingly.
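The last-win, overwrite-in-place dedup that prompt calls for can be sketched outside Rust with a small awk filter over hypothetical `name source` lines:

```shell
#!/bin/sh
# Each line is "task-name source"; a later definition must overwrite an
# earlier one in place, mirroring GNU Make's last-definition-wins rule.
printf 'build from-root\ntest from-root\nbuild from-include\n' |
awk '{
    if (!($1 in line)) order[++n] = $1   # remember first-seen position
    line[$1] = $0                        # last definition overwrites
} END {
    for (i = 1; i <= n; i++) print line[order[i]]
}'
```

Here `build` keeps its first-seen position but takes the later `from-include` definition, matching the overwrite-`collected_tasks[index]` behavior the prompt describes.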
In `@tests/docker_noinit/test_noinit.sh`:
- Around line 113-216: Tests 6b and 6c leave behind fixtures and temp files
(task_include_project, task_include_duplicates, make_include_project and files
like task_include_list.txt, make_include_list.txt) which can contaminate later
tests; add cleanup commands to remove those directories and files at the end of
each test block (after the assertions for Test 6b, 6c and the corresponding make
include block), e.g. mirror the pattern used in the existing Test 23 cleanup by
appending rm -rf /home/testuser/task_include_project
/home/testuser/task_include_duplicates /home/testuser/make_include_project and
remove any generated list/stderr files (task_include_list.txt,
task_include_stderr.txt, make_include_list.txt, etc.) so state is isolated
between tests; ensure the cleanup runs regardless of success by placing it just
before changing back to /home/testuser/test_project or in a trap if necessary.
- Around line 460-498: The dela list and echo | dela allow-command calls can
abort the script under set -e before your stderr checks run and you also discard
allow-command errors; change them to capture exit status and stderr without
causing an immediate exit (e.g. run the command redirecting stdout/stderr to
files and suffix with || true, or run inside if ...; then ... fi) so the
subsequent "[ -s turbo_root_stderr.txt ]"/"[ -s turbo_package_stderr.txt ]"
checks and the allowlist file inspection still execute; specifically update the
invocations of "dela list" that write
turbo_root_list_output.txt/turbo_root_stderr.txt and
turbo_package_list_output.txt/turbo_package_stderr.txt and the "echo \"2\" |
dela allow-command" that currently hides stderr (and writes to
/home/testuser/.config/dela/allowlist.toml) so you preserve error capture and
then test the captured exit code or stderr content instead of letting set -e
abort.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: b731a172-8e5c-4662-80d5-de790bd187ec
📒 Files selected for processing (3)
- src/parsers/parse_turbo_json.rs
- src/task_discovery.rs
- tests/docker_noinit/test_noinit.sh
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Documentation