feat: upgrade all 35 skills to A grade (90+) #6
Standalone skills (30): all upgraded from B (80-89) to A (90-96)
Plugin skills (5): all upgraded from B (81-89) to A (91-96)

Changes applied across all skills:
- Add frontmatter: compatible-with, tags fields
- Fix first-person voice → imperative voice
- Expand stub sections to meet enterprise quality bar
- Extract long sections (Examples, Error Handling) to references/
- Get all SKILL.md files under 150 lines via references/ extraction
- Ensure descriptions have "Use when" and "Trigger with" patterns
- Add argument-hint where $ARGUMENTS is used

New reference files created: ~45 files across references/ directories
Validator enhancement: accept resources/ alongside references/

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
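The 150-line budget mentioned above can be spot-checked locally. This is a hedged sketch, not tooling from the PR; the `.claude/skills/nixtla-*/SKILL.md` glob is assumed from the paths discussed later in this thread:

```shell
# Report any SKILL.md exceeding the 150-line budget (glob assumed, not from the PR).
check_skill_lines() {
  for f in .claude/skills/nixtla-*/SKILL.md; do
    [ -f "$f" ] || continue
    lines=$(wc -l < "$f" | tr -d ' ')
    if [ "$lines" -gt 150 ]; then
      echo "$f: $lines lines"
    fi
  done
}
```

Running `check_skill_lines` from the repository root prints only offenders, so empty output means every skill fits the budget.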
CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:
📝 Walkthrough

This PR systematizes Nixtla skill documentation across 30+ skills and plugins by adding Claude Code compatibility metadata (…).

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Review Summary by Qodo

Upgrade all 35 skills to A grade (90+) with metadata and reference documentation
Walkthrough

Description

• Upgrade all 35 skills (30 standalone + 5 plugin) from B grade (80-89) to A grade (90+) on the enterprise validator
• Add frontmatter metadata fields (compatible-with: claude-code, tags, argument-hint) to all skills for improved discoverability and integration
• Extract ~80 long sections, examples, and detailed documentation into dedicated references/ files to improve readability and maintainability
• Standardize documentation structure across all skills with consistent formatting, inline descriptions, and reference links
• Simplify step-by-step instructions by removing verbose explanations and consolidating related steps
• Convert error handling sections to table format for better clarity and troubleshooting
• Add "When to use" and "Trigger phrases" sections to skill overviews for better context
• Update 3 plugin skills to version 1.1.0 with enhanced documentation
• Create comprehensive reference documentation including examples, error handling guides, output formats, and resource materials

Diagram

flowchart LR
A["35 Skills<br/>B Grade 80-89"] -->|"Add Metadata<br/>compatible-with, tags,<br/>argument-hint"| B["Skills with<br/>Frontmatter"]
A -->|"Extract Long Sections<br/>~80 Reference Files"| C["Condensed Skill Docs<br/>with Links"]
B --> D["Standardized<br/>Documentation"]
C --> D
D -->|"Enterprise Validation"| E["A Grade Skills<br/>90+ Score"]
F["Examples<br/>Error Handling<br/>Outputs<br/>Resources"] -->|"New Reference<br/>Documentation"| C
File Changes

1. 003-skills/.claude/skills/nixtla-mcp-server-builder/SKILL.md
Code Review by Qodo
1. nixtla-test-generator description non-L4
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request focuses on enhancing the quality, organization, and usability of the existing skills. It includes upgrading skills to A grade, adding metadata for better management, refactoring code for improved readability, and enhancing user guidance within the skills.
Code Review
This pull request introduces a large-scale refactoring across 35 skill definition files. The changes primarily focus on improving the structure and readability of the skills by extracting lengthy sections into separate reference files, adding metadata like tags and compatible-with, and making the content more concise and consistent. This is a significant improvement for the maintainability of these skills. I have one minor suggestion for one of the new reference files.
| Mean sMAPE | 13.5% | 12.3% | Theta |
| Median sMAPE | 12.9% | 11.8% | Theta |
| Std Dev | 5.1% | 4.2% | Theta |
| Series Won | 18/50 | 32/50 | Theta |
The 'Winner' column in this table is missing the checkmark (✓) that was present in the original version of this table in SKILL.md. Adding it back would improve clarity by visually highlighting the winning model for each metric.
Suggested change:

| Mean sMAPE | 13.5% | 12.3% | Theta ✓ |
| Median sMAPE | 12.9% | 11.8% | Theta ✓ |
| Std Dev | 5.1% | 4.2% | Theta ✓ |
| Series Won | 18/50 | 32/50 | Theta ✓ |
```diff
 ---
 name: nixtla-test-generator
-description: Generate comprehensive pytest test suites from PRD functional requirements with fixtures, parameterization, and coverage tracking. Use when creating tests for new plugins, validating PRD requirements, or scaffolding test infrastructure. Trigger with 'generate tests from PRD', 'create test suite', or 'scaffold pytest tests'.
+description: "Generate comprehensive pytest test suites from PRD functional requirements with fixtures, parameterization, and coverage tracking. Use when creating tests for new plugins, validating PRD requirements, or scaffolding test infrastructure. Trigger with 'generate tests from PRD', 'create test suite', or 'scaffold pytest tests'."
```
1. nixtla-test-generator description non-L4 · 📘 Rule violation · ⚙ Maintainability
The updated description frontmatter exceeds the 100–300 character requirement and also omits the required domain keyword (e.g., nixtla, timegpt, forecast, time series, statsforecast). This breaks the mandatory L4 quality standard for skill discoverability/validation.
Agent Prompt
## Issue description
The skill frontmatter `description` for `nixtla-test-generator` does not meet the mandatory L4 requirements: it is longer than 300 characters and does not include a required domain keyword.
## Issue Context
Compliance requires skill descriptions to be 100–300 characters, include “Use when” and “Trigger with”, and contain a domain keyword such as `nixtla`, `timegpt`, `forecast`, `time series`, or `statsforecast`.
## Fix Focus Areas
- 003-skills/.claude/skills/nixtla-test-generator/SKILL.md[3-3]
ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
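The L4 description rule this finding cites can be checked mechanically. A minimal sketch, assuming the rule exactly as stated (100-300 characters, "Use when" and "Trigger with" phrases, one domain keyword); the function name and keyword tuple are illustrative, not the repository's actual validator:

```python
DOMAIN_KEYWORDS = ("nixtla", "timegpt", "forecast", "time series", "statsforecast")

def check_description_l4(description: str) -> list[str]:
    """Return L4 violations for a skill description; an empty list means compliant."""
    problems = []
    if not 100 <= len(description) <= 300:
        problems.append(f"length {len(description)} outside 100-300")
    if "Use when" not in description:
        problems.append("missing 'Use when' pattern")
    if "Trigger with" not in description:
        problems.append("missing 'Trigger with' pattern")
    if not any(keyword in description.lower() for keyword in DOMAIN_KEYWORDS):
        problems.append("missing domain keyword")
    return problems
```

A description that trips any check returns the specific violations, which maps onto the validator-warning workflow this review describes.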
**.env.example**:

```bash
NIXTLA_API_KEY=nixak-your-api-key-here
```
2. Example API key in .env.example · 📘 Rule violation · ⛨ Security
A realistic-looking API key value (nixak-...) is included in documentation as an .env.example line, which can be mistaken for or replaced by a real secret and committed. This violates the requirement to avoid committed secret values in code/docs.
Agent Prompt
## Issue description
Documentation includes a key-like literal `NIXTLA_API_KEY=nixak-your-api-key-here` in an `.env.example` snippet. This can be interpreted as (or accidentally replaced with) a real secret and committed.
## Issue Context
Compliance requires that API keys/credentials are not present as literals in committed code/docs; examples should use neutral placeholders (e.g., `your_api_key_here`) or blank values.
## Fix Focus Areas
- 003-skills/.claude/skills/nixtla-mcp-server-builder/references/server-template.md[156-156]
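The finding above can be guarded against with a scan for key-like literals. A hedged sketch; the `nixak-` prefix comes from the finding itself, while the regex, function name, and placeholder allow-list are assumptions for illustration:

```python
import re

# Matches assignments like NIXTLA_API_KEY=<value> in docs or env samples.
KEY_PATTERN = re.compile(r"NIXTLA_API_KEY\s*=\s*(\S+)")
ALLOWED_PLACEHOLDERS = {"your_api_key_here", "<your-api-key>"}

def find_keylike_literals(text: str) -> list[str]:
    """Return assigned values that look like real keys rather than neutral placeholders."""
    hits = []
    for match in KEY_PATTERN.finditer(text):
        value = match.group(1)
        if value.startswith("nixak-") or value not in ALLOWED_PLACEHOLDERS:
            hits.append(value)
    return hits
```

Wiring a check like this into pre-commit or CI keeps realistic-looking keys from landing in committed docs in the first place.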
```yaml
compatible-with: claude-code
tags: [nixtla, time-series, forecasting, arbitrage, prediction-markets]
allowed-tools: "Read,Write,Bash(python:*),Glob,Grep,WebFetch"
argument-hint: "<event-keyword>"
```
3. Non-standard frontmatter fails CI · 🐞 Bug · ✓ Correctness
The new frontmatter keys compatible-with (and argument-hint in at least one skill) are not in validate_skills_v2.py's allowed field set, so they generate "Non-standard field" warnings. CI runs the validator with --fail-on-warn, so these warnings become build-breaking failures.
Agent Prompt
## Issue description
CI will fail because `compatible-with` and `argument-hint` are emitted as `[frontmatter] Non-standard field` warnings by `004-scripts/validate_skills_v2.py`, and the workflow runs validation with `--fail-on-warn`.
## Issue Context
- `OPTIONAL_FIELDS` does not include `compatible-with` or `argument-hint`.
- Unknown fields generate warnings.
- CI treats warnings as errors.
## Fix Focus Areas
- Add the new keys to the validator’s `OPTIONAL_FIELDS` (or migrate them under an allowed field like `metadata` across all skills).
- (Optional) Add type validation for these fields (e.g., string enum for `compatible-with`, string for `argument-hint`).
### Fix Focus Areas (locations)
- 004-scripts/validate_skills_v2.py[54-67]
- 004-scripts/validate_skills_v2.py[404-410]
- .github/workflows/skills-validation.yml[29-31]
- 003-skills/.claude/skills/nixtla-arbitrage-detector/SKILL.md[1-11] (example skill using both keys)
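The fix the prompt describes amounts to widening an allow-list. A minimal sketch of that pattern, assuming validate_skills_v2.py warns on unknown frontmatter fields as the finding states; the set contents and function shape here are illustrative, not the script's actual code:

```python
# Illustrative allow-list mirroring the described validator behavior.
REQUIRED_FIELDS = {"name", "description"}
OPTIONAL_FIELDS = {
    "tags",
    "allowed-tools",
    "compatible-with",  # newly allowed per this finding
    "argument-hint",    # newly allowed per this finding
}

def check_frontmatter_fields(frontmatter: dict) -> list[str]:
    """Warn on unknown fields; with --fail-on-warn, any warning breaks CI."""
    allowed = REQUIRED_FIELDS | OPTIONAL_FIELDS
    return [
        f"[frontmatter] Non-standard field: {field}"
        for field in frontmatter
        if field not in allowed
    ]
```

With the two new keys in the allow-list, the skills in this PR would pass even under --fail-on-warn.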
Actionable comments posted: 9
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
003-skills/.claude/skills/nixtla-experiment-architect/SKILL.md (1)
1-117: ⚠️ Potential issue | 🟡 Minor

Add language identifiers to fenced code blocks missing them.

Code blocks at lines 39, 61, 69, 75, and 82 lack language identifiers. Add appropriate language tags (e.g., `bash`, `python`) to satisfy linting requirements for the A-grade validation target.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-experiment-architect/SKILL.md` around lines 1 - 117, Several fenced code blocks lack language identifiers; update the blocks under the headings "Installation", "Step 2: Generate Configuration", "Step 3: Scaffold Experiment", "Step 4: Run Experiment", and "Step 5: Review Results" to include proper tags (use ```bash for shell commands like pip and CLI invocations, and ```python for Python snippets/commands). Locate the corresponding fenced blocks shown in the SKILL.md content (the pip install block and the example CLI/python commands) and add the appropriate language marker to each fence so linting passes.
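The MD040 class of findings (fences without a language) is easy to detect mechanically. A minimal sketch, independent of markdownlint itself; it reports opening ``` fences that carry no language token:

```python
def fences_missing_language(markdown: str) -> list[int]:
    """Return 1-based line numbers of opening ``` fences with no language tag."""
    missing = []
    inside_fence = False
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        stripped = line.strip()
        if stripped.startswith("```"):
            # An opening fence that is exactly ``` has no info string.
            if not inside_fence and stripped == "```":
                missing.append(lineno)
            inside_fence = not inside_fence
    return missing
```

Running this over each SKILL.md and references/ file would surface every fence flagged in the comments below in one pass.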
♻️ Duplicate comments (9)
003-skills/.claude/skills/nixtla-exogenous-integrator/SKILL.md (1)
7-8: ⚠️ Potential issue | 🔴 Critical

Critical: Pipeline validator rejects `compatible-with` field.

The Claude Skills Validation pipeline reports a warning for the non-standard frontmatter field `compatible-with`. This validation failure affects multiple skills in this PR.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-exogenous-integrator/SKILL.md` around lines 7 - 8, the frontmatter uses a non-standard key "compatible-with" which the pipeline validator rejects; edit SKILL.md to remove or rename that field to a supported metadata key (e.g., remove the line entirely, replace it with an accepted key like "compatible_with", or include the compatibility info elsewhere such as in the description), updating the frontmatter block that contains the "compatible-with" entry and leaving "tags" intact; ensure the resulting frontmatter conforms to the validator schema so the pipeline no longer flags the file.

003-skills/.claude/skills/nixtla-anomaly-detector/SKILL.md (1)
8-9: ⚠️ Potential issue | 🔴 Critical

Critical: Pipeline validator rejects `compatible-with` field.

The Claude Skills Validation pipeline reports a warning for the non-standard frontmatter field `compatible-with`. This is the same issue identified in nixtla-arbitrage-detector/SKILL.md and appears across multiple skills in this PR.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-anomaly-detector/SKILL.md` around lines 8 - 9, the pipeline rejects the non-standard frontmatter key "compatible-with" in SKILL.md; remove that field (or rename/migrate it to a supported field, such as including compatibility info in the description or tags) so the frontmatter contains only standard keys (e.g., title, tags, description). Locate the SKILL.md frontmatter that defines "compatible-with" and delete that line or move its content into the skill body to satisfy the validator.

003-skills/.claude/skills/nixtla-timegpt-finetune-lab/SKILL.md (1)
8-8: ⚠️ Potential issue | 🔴 Critical

Critical: Non-standard frontmatter field flagged by validator.

Same issue as in nixtla-market-risk-analyzer: the validator warns that `compatible-with` is non-standard. This field appears in all reviewed skills and blocks achieving the stated A grade validation goal.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-timegpt-finetune-lab/SKILL.md` at line 8, the frontmatter uses a non-standard field "compatible-with" which the validator flags; update the SKILL.md frontmatter to remove or rename that field to a validator-accepted key (e.g., remove "compatible-with: claude-code" or replace it with a standard field used across the repo), ensuring the "compatible-with" line is deleted or renamed wherever it appears in SKILL.md (and other skill files) so the validator no longer flags it.

003-skills/.claude/skills/nixtla-timegpt-lab/SKILL.md (1)
8-8: ⚠️ Potential issue | 🔴 Critical

Critical: Non-standard frontmatter field flagged by validator.

Same `compatible-with` validation issue.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-timegpt-lab/SKILL.md` at line 8, the frontmatter field "compatible-with" in SKILL.md is non-standard and triggers the validator; remove it or replace it with an approved field so the frontmatter conforms to the validator. Locate the top YAML frontmatter in SKILL.md, delete the "compatible-with: claude-code" line or rename it to the project's supported metadata key, and ensure the rest of the frontmatter remains valid YAML.

003-skills/.claude/skills/nixtla-schema-mapper/SKILL.md (1)
8-8: ⚠️ Potential issue | 🔴 Critical

Critical: Non-standard frontmatter field flagged by validator.

Same `compatible-with` validation issue.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-schema-mapper/SKILL.md` at line 8, the frontmatter in SKILL.md uses a non-standard field name "compatible-with" which the validator flags; update the frontmatter to the validator's expected field (e.g., rename "compatible-with" to the standard "compatible_with" or remove the field if not required) so the validator accepts the file, and ensure the value "claude-code" remains unchanged.

003-skills/.claude/skills/nixtla-prod-pipeline-generator/SKILL.md (1)
8-8: ⚠️ Potential issue | 🔴 Critical

Critical: Non-standard frontmatter field flagged by validator.

Same `compatible-with` validation issue across all skills in this PR.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-prod-pipeline-generator/SKILL.md` at line 8, the SKILL.md contains a non-standard frontmatter field "compatible-with" which the validator rejects; remove this field (or rename it to a project-approved metadata key) from SKILL.md (look for the literal "compatible-with:" in the file) so the frontmatter only uses allowed keys, and update any documentation or tooling that relied on "compatible-with" to use the approved metadata name instead.

003-skills/.claude/skills/nixtla-prd-to-code/SKILL.md (1)
8-8: ⚠️ Potential issue | 🔴 Critical

Critical: Non-standard frontmatter field flagged by validator.

Same `compatible-with` validation issue.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-prd-to-code/SKILL.md` at line 8, the frontmatter contains a non-standard key "compatible-with: claude-code" which the validator rejects; remove that line or replace it with an approved key (e.g., a validator-accepted field such as "compatibility", or "compatible_with" if your validator supports underscores) in SKILL.md so the frontmatter conforms to the project's schema; look for the "compatible-with" token in SKILL.md and delete or rename it accordingly.

003-skills/.claude/skills/nixtla-test-generator/SKILL.md (1)
8-8: ⚠️ Potential issue | 🔴 Critical

Critical: Non-standard frontmatter field flagged by validator.

Same `compatible-with` validation issue.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-test-generator/SKILL.md` at line 8, the frontmatter field "compatible-with" is non-standard and triggers the validator; open SKILL.md and replace the "compatible-with: claude-code" line in the frontmatter with the standard field name (for example "compatible_with: claude-code") or remove the field if not required, ensuring you update the exact token "compatible-with" so the validator no longer flags it.

003-skills/.claude/skills/nixtla-usage-optimizer/SKILL.md (1)
8-8: ⚠️ Potential issue | 🔴 Critical

Critical: Non-standard frontmatter field flagged by validator.

Same `compatible-with` validation issue.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-usage-optimizer/SKILL.md` at line 8, the frontmatter key "compatible-with" in SKILL.md is non-standard and causing validator errors; remove this key or rename it to a supported field (for example "compatible_with", or include the compatibility info in a supported field like "tags" or the document body). Locate the "compatible-with" frontmatter entry in SKILL.md and either delete that line or replace it with the validator-approved field name and format.
🟡 Minor comments (17)
005-plugins/nixtla-search-to-slack/skills/nixtla-research-assistant/references/examples.md-46-49 (1)
46-49: ⚠️ Potential issue | 🟡 Minor

Use dry-run as the default first step before live Slack posting.

Line 48 executes a live post immediately. Recommend showing `--dry-run` first, then the live command, to reduce accidental channel noise during normal usage.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@005-plugins/nixtla-search-to-slack/skills/nixtla-research-assistant/references/examples.md` around lines 46 - 49, update the example sequence so the default first step uses the --dry-run flag to avoid accidental Slack posts: precede the current live command `python -m nixtla_search_to_slack --topic nixtla-core` with a dry-run invocation `python -m nixtla_search_to_slack --topic nixtla-core --dry-run`, then show the original command as the follow-up step to perform the actual post; ensure both commands remain in the examples.md content so readers see the safe-first workflow.

005-plugins/nixtla-search-to-slack/skills/nixtla-research-assistant/references/search-strategy.md-53-57 (1)
53-57: ⚠️ Potential issue | 🟡 Minor

Provider quota/pricing values are likely to drift quickly.

Lines 53–57 hard-code limits and monthly pricing. Suggest replacing exact numbers with "see current provider pricing/limits" links (or at minimum adding an "as of <date>" stamp) to avoid stale operational guidance.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@005-plugins/nixtla-search-to-slack/skills/nixtla-research-assistant/references/search-strategy.md` around lines 53 - 57, the quoted provider quota/pricing lines ("Brave Search", "Google Custom Search", "Bing Search", "SerpAPI") hard-code limits and pricing which will become stale; update that bullet list in search-strategy.md to remove exact numeric quotas/prices and instead either (a) replace each numeric entry with a short phrase like "see current provider pricing/limits" linking to the provider's official pricing page, or (b) if a link is not added, append an "as of <YYYY-MM-DD>" timestamp after each provider to indicate when the numbers were valid; ensure the change is applied to the existing provider bullets so the document no longer presents fixed monthly/day quotas without a date or authoritative link.

003-skills/.claude/skills/nixtla-skills-index/SKILL.md-38-40 (1)
38-40: ⚠️ Potential issue | 🟡 Minor

Add a language to the Step 1 fenced block.

The fence at Line 38 is missing a language identifier (MD040).

Proposed fix

````diff
-```
+```text
 Glob pattern: .claude/skills/nixtla-*/SKILL.md
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-skills-index/SKILL.md` around lines 38 - 40, the fenced code block containing "Glob pattern: .claude/skills/nixtla-*/SKILL.md" in the Step 1 section is missing a language identifier (MD040); fix it by adding a language token to the opening fence (for example ```text or ```md) so linting passes.

003-skills/.claude/skills/nixtla-skills-index/references/examples.md-8-24 (1)
8-24: ⚠️ Potential issue | 🟡 Minor

Add a language to the first fenced block.

This fence is missing a language identifier and triggers MD040.

Proposed fix

````diff
-```
+```text
 Nixtla Skills Index (8 skills installed):
 ...
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-skills-index/references/examples.md` around lines 8 - 24, the first fenced code block that begins with "Nixtla Skills Index (8 skills installed):" is missing a language identifier and triggers MD040; update its opening fence from ``` to ```text (or another suitable language) and keep the closing fence as ``` to satisfy the Markdown linter (look for the block in examples.md containing the list of Mode/Utility/Infrastructure skills).

003-skills/.claude/skills/nixtla-skills-index/references/examples.md-31-36 (1)
31-36: ⚠️ Potential issue | 🟡 Minor

Add a language to the second fenced block.

This fence is also missing a language identifier (MD040).

Proposed fix

````diff
-```
+```text
 Use: nixtla-schema-mapper
 ...
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-skills-index/references/examples.md` around lines 31 - 36, the second fenced code block containing "Use: nixtla-schema-mapper" is missing a language identifier; update that opening fence to include a language (e.g., change ``` to ```text) so the block is properly annotated (affects the fenced block that begins with "Use: nixtla-schema-mapper" and its closing ```).

005-plugins/nixtla-baseline-lab/skills/nixtla-baseline-review/resources/examples.md-8-8 (1)
8-8: ⚠️ Potential issue | 🟡 Minor

Add explicit language to fenced code blocks to satisfy MD040.

At Line 8, Line 22, and Line 45, fenced blocks should declare a language (e.g., `text`) to pass markdownlint strict checks.

Suggested diff

````diff
-```
+```text
 ...
````

Also applies to: 22-22, 45-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@005-plugins/nixtla-baseline-lab/skills/nixtla-baseline-review/resources/examples.md` at line 8, the markdown linter MD040 is failing because three fenced code blocks in examples.md are missing an explicit language; update each opening fence from ``` to include a language label (e.g., ```text) for all three blocks, fixing the MD040 rule while leaving the block contents unchanged.

005-plugins/nixtla-baseline-lab/skills/nixtla-baseline-review/resources/examples.md-54-54 (1)
54-54: ⚠️ Potential issue | 🟡 Minor

Fix broken relative path in metric interpretation reference.

At Line 54, `resources/METRIC_INTERPRETATION.md` resolves to `resources/resources/...` from this file location, so the link target is incorrect.

Suggested diff

```diff
-For more details, see resources/METRIC_INTERPRETATION.md
+For more details, see METRIC_INTERPRETATION.md
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@005-plugins/nixtla-baseline-lab/skills/nixtla-baseline-review/resources/examples.md` at line 54, the link text "For more details, see resources/METRIC_INTERPRETATION.md" uses a wrong relative path from examples.md (which itself is in the resources directory); update that reference to the correct relative path (e.g., "./METRIC_INTERPRETATION.md" or "METRIC_INTERPRETATION.md") so it does not resolve to resources/resources/...; locate the string in examples.md and replace it accordingly.

003-skills/.claude/skills/nixtla-skills-bootstrap/references/examples.md-10-17 (1)
10-17: ⚠️ Potential issue | 🟡 Minor

Add language identifiers to output code fences.

Both output blocks are missing fence languages, which triggers MD040 and may fail strict markdown checks.

Proposed fix

````diff
-```
+```text
 Installing Nixtla Skills...
 Created .claude/skills/nixtla-timegpt-lab/
 Created .claude/skills/nixtla-experiment-architect/
 Created .claude/skills/nixtla-schema-mapper/
 Created .claude/skills/nixtla-skills-bootstrap/
 Installation complete!
@@
-```
+```text
 Updating Nixtla Skills...
 Updated 4 skills to latest version.
````

Also applies to: 26-29
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-skills-bootstrap/references/examples.md` around lines 10 - 17, the Markdown code fences around the blocks that start with "Installing Nixtla Skills..." and "Updating Nixtla Skills..." are missing language identifiers; change the opening triple-backtick markers for those example output blocks to include the "text" language (i.e., replace ``` with ```text) for both blocks.

003-skills/.claude/skills/nixtla-skills-bootstrap/SKILL.md-73-76 (1)
73-76: ⚠️ Potential issue | 🟡 Minor

Step 4 command does not verify SKILL.md presence.

The text says "contain valid SKILL.md files," but the command only lists directories. Add an explicit file existence check so verification matches the instruction.

Proposed fix

````diff
 Confirm all skill directories are present and contain valid SKILL.md files:
 ```bash
 ls -1d .claude/skills/nixtla-* 2>/dev/null | sort
+for d in .claude/skills/nixtla-*; do [ -f "$d/SKILL.md" ] || echo "Missing SKILL.md in $d"; done
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-skills-bootstrap/SKILL.md` around lines 73 - 76, the Step 4 "ls -1d ..." directory listing only checks directories but not SKILL.md files; update the Step 4 command to iterate the matched nixtla-* directories and explicitly test for the presence of a SKILL.md file, printing which directory is missing it (i.e., add a loop that checks each directory for SKILL.md and emits a "Missing SKILL.md" message when absent); reference the existing listing command and the SKILL.md filename so the change is easy to locate and replace in the Step 4 block.

003-skills/.claude/skills/nixtla-skills-bootstrap/SKILL.md-8-8 (1)

8-8: ⚠️ Potential issue | 🟡 Minor

Add `compatible-with` to the canonical frontmatter schema or remove from codebase.

`compatible-with` is actively used in multiple SKILL.md files but is not documented in the official schema (6767-m-DR-STND-claude-skills-frontmatter-schema.md). Either formally include it in the schema with clear specifications or remove it from deployed skills to maintain consistency with the documented specification.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@003-skills/.claude/skills/nixtla-skills-bootstrap/SKILL.md` at line 8, the SKILL frontmatter uses the undocumented field compatible-with; either update the canonical frontmatter schema (6767-m-DR-STND-claude-skills-frontmatter-schema.md) to include a formally specified compatible-with entry (type, allowed values, and semantics) or remove all compatible-with occurrences from deployed SKILL.md files (including nixtla-skills-bootstrap/SKILL.md) to match the documented schema; ensure the chosen approach is applied consistently across the repo and update any validation/parsing code that references compatible-with (frontmatter parser/validator) to reflect the schema change.

003-skills/.claude/skills/nixtla-forecast-validator/references/examples.md-6-6 (1)

6-6: ⚠️ Potential issue | 🟡 Minor

Add language specifiers to CSV code blocks.

Markdownlint flagged six fenced code blocks missing language specifiers. Adding `csv` as the language identifier improves syntax highlighting and tooling support.

🎨 Proposed fix: Add csv language specifiers

````diff
 **Input** (historical_metrics.csv):
-```
+```csv
 model,MASE,sMAPE
 model_A,1.2,0.15
````

**Input** (current_metrics.csv):
````diff
-```
+```csv
 model,MASE,sMAPE
 model_A,1.8,0.18
````

Apply the same pattern to all six CSV code blocks (lines 6, 12, 23, 32, 38, 49).
Also applies to: 12-12, 23-23, 32-32, 38-38, 49-49
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-forecast-validator/references/examples.md` at line 6, update the six fenced CSV code blocks in the markdown so they include the language specifier "csv" after the opening backticks; locate the fenced blocks shown in the diff and replace openings like ``` with ```csv for each example (the blocks containing the model,MASE,sMAPE rows), ensuring all six occurrences are updated so tooling and syntax highlighting recognize them.

003-skills/.claude/skills/nixtla-mcp-server-builder/references/server-template.md-56-60 (1)
56-60: ⚠️ Potential issue | 🟡 Minor

Missing import for ValidationError.

The code references `ValidationError` on line 56 but doesn't import it. This would cause a `NameError` at runtime.

🔧 Proposed fix

Add the import at the top of the template:

```diff
 from mcp.server import Server
 from mcp.server.stdio import stdio_server
-from pydantic import BaseModel
+from pydantic import BaseModel, ValidationError
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-mcp-server-builder/references/server-template.md` around lines 56 - 60, The handler references ValidationError in the except block for calculate_roi but never imports it; add the appropriate import (e.g., from pydantic import ValidationError) at the top of the template so ValidationError is defined and caught correctly when calculate_roi raises validation errors.003-skills/.claude/skills/nixtla-contract-schema-mapper/references/examples.md-14-22 (1)
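The failure mode described here is easy to reproduce in isolation. A minimal sketch, assuming pydantic is installed — `RoiInput` and the payloads are hypothetical stand-ins, not the template's actual schema:

```python
from pydantic import BaseModel, ValidationError  # the import the template is missing


class RoiInput(BaseModel):
    """Hypothetical stand-in for the template's real input schema."""
    revenue: float
    cost: float


def calculate_roi(payload: dict) -> dict:
    """Validate the payload and return an error dict instead of crashing."""
    try:
        args = RoiInput(**payload)
    except ValidationError as exc:  # without the import above, this line raises NameError instead
        return {"error": f"{len(exc.errors())} validation error(s)"}
    return {"roi": (args.revenue - args.cost) / args.cost}


print(calculate_roi({"revenue": 150.0, "cost": 100.0}))
print(calculate_roi({"revenue": "n/a", "cost": 100.0}))
```

Deleting the `ValidationError` import and rerunning the bad-payload call is a quick way to confirm the `NameError` the review predicts.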
**003-skills/.claude/skills/nixtla-contract-schema-mapper/references/examples.md (1)**

`14-22`: _⚠️ Potential issue_ | _🟡 Minor_

**Add a language identifier to the output code fence.**

At line 14, the fenced block is missing a language, which triggers MD040.

Proposed fix:

````diff
-```
+```text
 Transformed data saved to: nixtla_data.csv

 Transformation Summary:
   Series count: 15
   Total rows: 4500
   Date range: 2024-01-01 to 2024-06-30
   Value range: 0.0100 to 0.9900
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-contract-schema-mapper/references/examples.md` around lines 14-22, the fenced code block (the block beginning "Transformed data saved to: nixtla_data.csv") lacks a language identifier, which triggers MD040; update that fence to include a language token (e.g., `text` or `console`) so the code fence specifies a language.

**003-skills/.claude/skills/nixtla-market-risk-analyzer/references/examples.md (1)**

`35-45`: _⚠️ Potential issue_ | _🟡 Minor_

**Clarify "conservative" position sizing calculation.**

The example labels the sizing as "Conservative position sizing" with `--risk-per-trade 0.01` (1% risk), but the output shows a $12,000 position representing 24% of the $50,000 account. With 1% risk-per-trade ($500 risk on $50k), the position size seems inconsistent. Additionally, a VaR of -8.5% at 95% confidence indicates substantial risk exposure.

Consider adding a brief explanation of how the 1% risk-per-trade parameter translates to the 24% position allocation, or adjust the expected output to match truly conservative sizing guidelines.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-market-risk-analyzer/references/examples.md` around lines 35-45, the "Conservative position sizing" example is inconsistent: the flags to position_sizing.py (especially --risk-per-trade, --account-size) imply 1% risk-per-trade but the expected output shows a $12,000 (24%) position. Either (a) add one line explaining how --risk-per-trade 0.01, asset volatility, and VaR combine to produce the 24% allocation (referencing the position_sizing.py calculation logic), or (b) change the expected output to a value consistent with 1% risk on a $50,000 account (adjusting VaR/volatility regime accordingly) so the sample matches the script's real calculation.
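The arithmetic behind this objection can be checked directly. A sketch of plain fixed-fractional sizing — an assumed formula for illustration, not necessarily what `position_sizing.py` implements — shows that a 1% risk budget only produces a ~$12,000 (24%) notional when the stop-loss sits about 4.2% from entry:

```python
def fixed_fractional_position(account: float, risk_per_trade: float, stop_loss_pct: float) -> float:
    """Notional position sized so that hitting the stop loses account * risk_per_trade."""
    risk_dollars = account * risk_per_trade  # $500 on a $50k account at 1%
    return risk_dollars / stop_loss_pct      # wider stop -> smaller position


account = 50_000.0
for stop in (0.0417, 0.10):
    pos = fixed_fractional_position(account, 0.01, stop)
    print(f"stop={stop:.2%}  position=${pos:,.0f}  ({pos / account:.0%} of account)")
```

Under this assumption, "conservative" refers to the dollar risk budget, not the notional; documenting whichever relationship the script actually uses would resolve the reviewer's confusion.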
**003-skills/.claude/skills/nixtla-mcp-server-builder/references/examples.md (2)**

`76-82`: _⚠️ Potential issue_ | _🟡 Minor_

**Add language specifier to fenced code block.**

The backup/update output block lacks a language specifier.

📝 Proposed fix:

````diff
 **Output**:
-```
+```text
 Existing server detected. Creating backup...
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-mcp-server-builder/references/examples.md` around lines 76-82, the fenced code block showing the backup/update output (the block starting with "Existing server detected. Creating backup...") is missing a language specifier; add the `text` language token to the opening fence so the block is rendered as plain text, leaving the block contents unchanged.

`41-50`: _⚠️ Potential issue_ | _🟡 Minor_

**Add language specifier to fenced code block.**

The dry-run output block lacks a language specifier.

📝 Proposed fix:

````diff
 **Output**:
-```
+```text
 [DRY RUN] Would generate:
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `003-skills/.claude/skills/nixtla-mcp-server-builder/references/examples.md` around lines 41-50, update the fenced code block that currently opens with a bare fence and lacks a language specifier; add the `text` language token to the opening fence, keeping the block contents (the lines listing generated files) unchanged so the dry-run output is rendered as plain text.

`14-27`: _⚠️ Potential issue_ | _🟡 Minor_

**Add language specifier to fenced code block.**

The output code block lacks a language specifier, which can affect syntax highlighting and accessibility. Consider adding `text` or `console` as the language identifier.

📝 Proposed fix:

````diff
 **Output**:
-```
+```text
 Parsed PRD: Found 4 MCP tools
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-mcp-server-builder/references/examples.md` around lines 14-27, the fenced code block (the block starting with "Parsed PRD: Found 4 MCP tools" and listing the generated mcp_server.py, schemas.py, test_mcp_server.py, plugin.json, supporting files, and the "Server ready! Start with: python mcp_server.py" line) is missing a language specifier; add the `text` or `console` language token to the opening fence to enable proper syntax highlighting and accessibility for the entire block.
🧹 Nitpick comments (17)
005-plugins/nixtla-forecast-explainer/skills/nixtla-explain-analyst/SKILL.md (1)
`67-67`: **Use a direct guide URL instead of docs homepage.**

Line 67 currently points to the docs root, which is ambiguous for operators. Prefer a stable deep link to the exact forecast explanation page.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `005-plugins/nixtla-forecast-explainer/skills/nixtla-explain-analyst/SKILL.md` at line 67, replace the ambiguous docs root link ("Nixtla forecast explanation guide: https://docs.nixtla.io/") with a stable deep link to the specific Nixtla forecast explanation page, so operators land directly on the relevant guide and the reference is unambiguous and stable.

**003-skills/.claude/skills/nixtla-skills-index/SKILL.md (1)**
`59-63`: **Use "Markdown" capitalization for formatting language references.**

Minor wording polish for consistency with style tooling.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-skills-index/SKILL.md` around lines 59-63, use the capitalization "Markdown" wherever the formatting language is named (e.g., change "Formatted markdown listing organized by category:" to "Formatted Markdown listing organized by category:") and smooth nearby wording to match the project's style (consistent capitalization and concise wording).

**003-skills/.claude/skills/nixtla-correlation-mapper/references/error-handling.md (1)**
`1-22`: **Consider standardizing error-handling format across skills.**

The error-handling content is thorough and actionable. However, this file uses a bullet-point format while `nixtla-event-impact-modeler/references/error-handling.md` uses a two-column table format. For consistency across the Nixtla skill set, consider adopting a uniform format for all `error-handling.md` files.

📋 Example — convert to table format for consistency:

```diff
 ## Error Handling

-**Error: Input file not found**
-- Verify file path with `ls -la`
-- Check current directory and use absolute paths
-
-**Error: Missing required columns**
-- Ensure CSV has `unique_id`, `ds`, `y` columns
-- Verify column names match exactly (case-sensitive)
-
-**Error: Insufficient data points**
-- Need at least 30 data points per contract for reliable correlations
-- Verify data has sufficient time-series history
-
-**Error: Invalid data format**
-- Check that `y` values are numeric (not strings)
-- Ensure dates are parseable (ISO format recommended)
-- Remove or handle missing values
-
-**Error: Insufficient contracts**
-- Need at least 2 contracts for correlation analysis
-- Verify `unique_id` column has multiple distinct values
+| Error | Solution |
+|-------|----------|
+| Input file not found | Verify file path with `ls -la`; check current directory and use absolute paths |
+| Missing required columns | Ensure CSV has `unique_id`, `ds`, `y` columns; verify column names match exactly (case-sensitive) |
+| Insufficient data points | Need at least 30 data points per contract for reliable correlations; verify data has sufficient time-series history |
+| Invalid data format | Check that `y` values are numeric (not strings); ensure dates are parseable (ISO format recommended); remove or handle missing values |
+| Insufficient contracts | Need at least 2 contracts for correlation analysis; verify `unique_id` column has multiple distinct values |
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-correlation-mapper/references/error-handling.md` around lines 1-22, the file uses bullet points for error entries but should match the two-column table format used by nixtla-event-impact-modeler; convert each error section into rows of a consistent Markdown table with "Error" and "Solution" columns, preserving the exact field names (`unique_id`, `ds`, `y`) and the actionable checks (ls -la, absolute paths, numeric `y`, parseable dates, minimum 30 points, at least 2 contracts).

**003-skills/.claude/skills/nixtla-anomaly-detector/references/examples.md (1)**
`25-42`: **Consider showing the full output CSV for consistency.**

Example 2 only describes the result ("Detects trend break at 2023-12-31") while Example 1 shows the complete `anomalies.csv` output. For consistency and completeness, consider including the full output format here as well.

📊 Suggested addition — show full output CSV:

````diff
 **Command**:
 ```bash
 python {baseDir}/scripts/detect_anomalies.py -i sales.csv -o sales_anomalies.csv
 ```

-**Output**: Detects trend break at 2023-12-31
+**Output** (sales_anomalies.csv):
+```csv
+unique_id,ds,y,anomaly_type
+store_1,2023-12-31,150,trend_break
+```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-anomaly-detector/references/examples.md` around lines 25-42, Example 2 currently only states "Detects trend break at 2023-12-31"; replace that single-line result with a full output CSV block to match Example 1, adding a code fence showing sales_anomalies.csv with headers and the detected row (e.g., "unique_id,ds,y,anomaly_type" and "store_1,2023-12-31,150,trend_break") so readers running detect_anomalies.py see the exact output format.

**003-skills/.claude/skills/nixtla-prod-pipeline-generator/references/examples.md (1)**

`13-18`: **Add language tag to the output code fence (MD040).**

Line 14 should specify a fenced-code language to keep markdownlint clean.

Patch suggestion:

````diff
 **Output**:
-```
+```text
 Generated: pipelines/forecast_dag.py
 Schedule: Daily at 6am
 Tasks: extract -> transform -> forecast -> load -> monitor
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `003-skills/.claude/skills/nixtla-prod-pipeline-generator/references/examples.md` around lines 13-18, the fenced code block under the "Output" section is missing a language tag (markdownlint MD040); update the opening fence for the block that contains "Generated: pipelines/forecast_dag.py" so it includes a language token (e.g., `text`) to satisfy linting and keep the "Output" code fence intact.

**003-skills/.claude/skills/nixtla-polymarket-analyst/references/examples.md (1)**

`12-41`: **Label expected-output fences with a language to satisfy MD040.**

Lines 13 and 34 should use a language tag (e.g., `text`) to avoid markdownlint failures.

Patch suggestion:

````diff
 **Expected Output**:
-```
+```text
 Contract: "Will candidate X win the 2024 election?"
 Current Price: $0.4500
 Forecast Price: $0.5200
 Expected Change: +15.56%
 Signal: BUY (BULLISH)
 95% CI: [0.4800, 0.5600]
 ```
@@
 **Expected Output**:
-```
+```text
 Contract: "Will ETH be above $3000 on Dec 31?"
 Current Price: $0.6800
 Forecast Price: $0.6500
 Expected Change: -4.41%
 Signal: HOLD (NEUTRAL)
 95% CI: [0.6000, 0.7000]
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `003-skills/.claude/skills/nixtla-polymarket-analyst/references/examples.md` around lines 12-41, the fenced "Expected Output" blocks are missing a language tag (causing MD040); update both code fences under the two "Expected Output" sections by adding the `text` language token to the opening fences (the first fenced block under the "Will candidate X win the 2024 election?" example and the second fenced block under the ETH example), leaving the closing fences unchanged.

**003-skills/.claude/skills/nixtla-timegpt-finetune-lab/references/examples.md (1)**
`15-35`: **Add language tags to output fenced blocks (markdownlint MD040).**

Line 16 and Line 30 use unlabeled fenced code blocks; this will keep lint warnings active.

Patch suggestion:

````diff
 **Output**:
-```
+```text
 Fine-tuning job submitted: job_abc123
 Model ID saved to: artifacts/finetune_model_id.txt
 ```
@@
 **Output**:
-```
+```text
 Model Comparison:
 TimeGPT Zero-Shot: SMAPE=12.3%
 TimeGPT Fine-Tuned: SMAPE=8.7%
 Improvement: 29.3%
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-timegpt-finetune-lab/references/examples.md` around lines 15-35, update the unlabeled fenced code blocks to include the `text` language tag to satisfy markdownlint MD040: the block containing "Fine-tuning job submitted: job_abc123 / Model ID saved to: artifacts/finetune_model_id.txt" (Example 1) and the block starting with "Model Comparison:" (Example 2).

**003-skills/.claude/skills/nixtla-batch-forecaster/references/examples.md (1)**
`14-18`: **Add language identifier to output block.**

The output code block should have a language identifier for proper rendering.

📝 Proposed fix:

````diff
 **Output**:
-```
+```text
 Batch Forecast Complete
 Series forecasted: 50/50
 Success rate: 100.0%
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-batch-forecaster/references/examples.md` around lines 14-18, the output code block lacks a language identifier; add the `text` language token to the opening fence of the block containing "Batch Forecast Complete", "Series forecasted: 50/50", and "Success rate: 100.0%" to ensure proper rendering.

**003-skills/.claude/skills/nixtla-mcp-server-builder/references/server-template.md (2)**
`148-152`: **Add language identifier to the requirements.txt code block.**

The fenced code block for `requirements.txt` should specify a language identifier for better rendering and linting compliance.

📝 Proposed fix:

````diff
 **requirements.txt**:
-```
+```txt
 mcp>=1.0.0
 pydantic>=2.0.0
 python-dotenv>=1.0.0
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-mcp-server-builder/references/server-template.md` around lines 148-152, the fenced requirements.txt block lacks a language identifier; add the `txt` language token to the opening fence so the block (the three lines listing mcp, pydantic, python-dotenv) is annotated for proper rendering and linting.

`161-170`: **Add language identifier to the directory structure block.**

The directory tree should have a language identifier for proper rendering.

📝 Proposed fix:

````diff
-```
+```text
 mcp_server/
 ├── mcp_server.py       # Main server
 ├── schemas.py          # Validation schemas
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-mcp-server-builder/references/server-template.md` around lines 161-170, update the fenced directory tree block to include the `text` language token on the opening fence so the tree (the mcp_server/ listing including mcp_server.py, schemas.py, test_mcp_server.py, plugin.json, requirements.txt, README.md, .env.example) is rendered as a text code block; change only the opening fence and leave the closing fence as-is.

**003-skills/.claude/skills/nixtla-test-generator/references/examples.md (1)**

`13-22`: **Add language identifier to output block.**

The output code block should have a language identifier for proper rendering.

📝 Proposed fix:

````diff
 **Output**:
-```
+```text
 Parsed PRD: Found 5 functional requirements, 4 MCP tools
 Generated tests/test_unit.py (15 test functions)
 Generated tests/test_integration.py (4 MCP tool tests)
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `003-skills/.claude/skills/nixtla-test-generator/references/examples.md` around lines 13-22, update the fenced code block that currently starts with "Parsed PRD: Found 5 functional requirements, 4 MCP tools" and the generated test file list to use a language identifier (e.g., add the `text` token to the opening fence) so the block renders correctly; locate the block by searching for the exact text lines shown and update the opening fence accordingly.

**003-skills/.claude/skills/nixtla-market-risk-analyzer/references/examples.md (1)**
`48-68`: **Add expected outputs for Example 3.**

Example 3 demonstrates parameter comparison but omits the "Expected output:" section that appears in Examples 1 and 2. Including the expected results for both conservative and aggressive strategies would help users understand the practical impact of different parameter choices.

📋 Suggested addition — add after line 68:

```diff
+
+# Expected outputs comparison:
+# Conservative: Position $40,000 (20% of account)
+# Aggressive: Position $120,000 (60% of account)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-market-risk-analyzer/references/examples.md` around lines 48-68, add an "Expected output:" block after Example 3 showing the estimated position sizing results for both runs of scripts/position_sizing.py (the conservative and aggressive parameter sets); include concise sample lines for account size, computed position size per trade, estimated dollar risk, and implied volatility/VaR loss for each run so users can compare outcomes (label them "Conservative" and "Aggressive" and reference the same flags used: --risk-per-trade, --stop-loss, --target-volatility, --max-var-loss).

**003-skills/.claude/skills/nixtla-schema-mapper/references/examples.md (1)**
`26-35`: **Consider adding generated code for Example 2.**

Example 1 includes the generated Python code to demonstrate the transformation output, but Example 2 omits it. For consistency and completeness, consider showing the generated code for the SQL source example as well, so users can see how SQL queries are translated into the Nixtla schema transformation.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-schema-mapper/references/examples.md` around lines 26-35, add the missing generated Python transformation code and its resulting Nixtla schema output for Example 2 (the SQL source invocation of generate_transform.py with flags --sql, --connection, --id_col, --date_col, --target_col) to match Example 1's format: show the generated script that maps the SQL query fields (store_id, sale_date, amount) into Nixtla's schema (id column, datetime column, target), include any inferred dtypes and sample transformed rows, and place it under "Example 2: SQL Source" so readers can see how the SQL query is translated.

**003-skills/.claude/skills/nixtla-event-impact-modeler/references/examples.md (1)**
`33-33`: **Inconsistent `--niter` parameter usage across examples.**

Example 1 (line 15) includes `--niter 2000`, but Example 2 omits this parameter. If `--niter` is optional with a default value, consider adding a brief note. If it's required, Example 2 should include it for consistency.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-event-impact-modeler/references/examples.md` at line 33, the examples show inconsistent usage of the --niter option for the CLI (python scripts/analyze_impact.py); either add --niter 2000 to the second example command (the one with --prices configured_prices.csv --events prepared_events.csv) to match the first, or add a short clarifying note stating the default value and that --niter is optional, so both examples are consistent and clear.

**003-skills/.claude/skills/nixtla-prd-to-code/references/examples.md (1)**
`44-54`: **Clarify the import path setup for Example 2.**

The example shows `from parse_prd import PRDParser`, but if `parse_prd.py` lives at `{baseDir}/scripts/parse_prd.py`, users will need to ensure that directory is in their Python path. Consider adding a brief note about path setup or showing the full import pattern.

📝 Suggested clarification:

````diff
 ## Example 2: Auto-Populate TodoWrite

 When used in conversation context, directly populate the Claude todo list.

 ```python
 # In Claude Code conversation
+import sys
+sys.path.insert(0, '{baseDir}/scripts')
 from parse_prd import PRDParser
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-prd-to-code/references/examples.md` around lines 44-54, the example imports parse_prd with `from parse_prd import PRDParser` but doesn't explain how Python will find parse_prd.py; update the example text around PRDParser and TodoWrite to show how to make the scripts directory importable (e.g., adding the scripts folder to sys.path or using a package-style import), and include a short one-line instruction callers can add before the import so parse_prd.py is discoverable.
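The path-setup note above can be made concrete with a small, self-contained demo. The module name and directory here are hypothetical — a temp directory stands in for `{baseDir}/scripts` purely to show the mechanism:

```python
import sys
import tempfile
from pathlib import Path

# Create a stand-in for {baseDir}/scripts containing a tiny module.
scripts_dir = Path(tempfile.mkdtemp()) / "scripts"
scripts_dir.mkdir()
(scripts_dir / "parse_prd_demo.py").write_text(
    "def parse(text):\n    return text.splitlines()\n"
)

# The pattern the review suggests: put the scripts dir on sys.path before importing.
sys.path.insert(0, str(scripts_dir))
from parse_prd_demo import parse  # resolvable only because of the line above

print(parse("req 1\nreq 2"))
```

The same two lines (`import sys; sys.path.insert(0, ...)`) are what the suggested diff adds before `from parse_prd import PRDParser`.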
**003-skills/.claude/skills/nixtla-usage-optimizer/SKILL.md (1)**

`81-114`: **Inconsistent example extraction pattern.**

This skill retains inline examples while other reviewed skills extract them to `references/examples.md`. This creates an inconsistent documentation pattern across the skill library.

Recommendation: Align with other skills by either extracting examples or documenting the decision to keep them inline.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `003-skills/.claude/skills/nixtla-usage-optimizer/SKILL.md` around lines 81-114, the "## Examples" section (containing "Example 1: Audit Existing Project" and "Example 2: No TimeGPT Yet") is inconsistent with other skills that extract examples to references/examples.md; either move these example blocks into a new references/examples.md (preserving the text and headings) and replace the inline section with a short pointer, or add a brief note explaining why examples remain inline, so the repository-wide pattern is consistent.

**003-skills/.claude/skills/nixtla-timegpt-lab/SKILL.md (1)**
`110-133`: **Inconsistent example extraction pattern.**

This skill retains inline examples (Example 1 and Example 2), while other reviewed skills extract examples to `references/examples.md`. The PR description states that long sections including Examples were extracted to ~45 new files under references/ to keep SKILL.md files under 150 lines.

This file is 143 lines, so it's under the 150-line target, but the inconsistent approach may create maintenance challenges.

Recommendation: For consistency, consider either:

- Extracting examples to `references/examples.md` like other skills
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@003-skills/.claude/skills/nixtla-timegpt-lab/SKILL.md` around lines 110 - 133, The Examples section currently keeps two inline examples ("Example 1: StatsForecast Baselines" and "Example 2: TimeGPT with Confidence Intervals") inside SKILL.md which is inconsistent with other skills that move examples to references/examples.md; either extract these code examples into references/examples.md (creating a brief examples file and replacing the inline blocks with a short pointer) or add a short note in SKILL.md explaining why these examples remain inline (e.g., "kept inline because they are brief"), and update the Examples heading accordingly to maintain consistency across skills.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: aff27596-efd9-4d1f-9f34-4adeba331ca8
📒 Files selected for processing (82)
- 003-skills/.claude/skills/nixtla-anomaly-detector/SKILL.md
- 003-skills/.claude/skills/nixtla-anomaly-detector/references/examples.md
- 003-skills/.claude/skills/nixtla-arbitrage-detector/SKILL.md
- 003-skills/.claude/skills/nixtla-arbitrage-detector/references/examples.md
- 003-skills/.claude/skills/nixtla-batch-forecaster/SKILL.md
- 003-skills/.claude/skills/nixtla-batch-forecaster/references/examples.md
- 003-skills/.claude/skills/nixtla-benchmark-reporter/SKILL.md
- 003-skills/.claude/skills/nixtla-benchmark-reporter/references/error-handling.md
- 003-skills/.claude/skills/nixtla-benchmark-reporter/references/examples.md
- 003-skills/.claude/skills/nixtla-benchmark-reporter/references/output-formats.md
- 003-skills/.claude/skills/nixtla-contract-schema-mapper/SKILL.md
- 003-skills/.claude/skills/nixtla-contract-schema-mapper/references/examples.md
- 003-skills/.claude/skills/nixtla-correlation-mapper/SKILL.md
- 003-skills/.claude/skills/nixtla-correlation-mapper/references/error-handling.md
- 003-skills/.claude/skills/nixtla-correlation-mapper/references/examples.md
- 003-skills/.claude/skills/nixtla-correlation-mapper/references/outputs.md
- 003-skills/.claude/skills/nixtla-cross-validator/SKILL.md
- 003-skills/.claude/skills/nixtla-cross-validator/references/examples.md
- 003-skills/.claude/skills/nixtla-demo-generator/SKILL.md
- 003-skills/.claude/skills/nixtla-demo-generator/references/examples.md
- 003-skills/.claude/skills/nixtla-event-impact-modeler/SKILL.md
- 003-skills/.claude/skills/nixtla-event-impact-modeler/references/error-handling.md
- 003-skills/.claude/skills/nixtla-event-impact-modeler/references/examples.md
- 003-skills/.claude/skills/nixtla-event-impact-modeler/references/outputs.md
- 003-skills/.claude/skills/nixtla-exogenous-integrator/SKILL.md
- 003-skills/.claude/skills/nixtla-exogenous-integrator/references/examples.md
- 003-skills/.claude/skills/nixtla-experiment-architect/SKILL.md
- 003-skills/.claude/skills/nixtla-experiment-architect/references/examples.md
- 003-skills/.claude/skills/nixtla-forecast-validator/SKILL.md
- 003-skills/.claude/skills/nixtla-forecast-validator/references/examples.md
- 003-skills/.claude/skills/nixtla-liquidity-forecaster/SKILL.md
- 003-skills/.claude/skills/nixtla-liquidity-forecaster/references/error-handling.md
- 003-skills/.claude/skills/nixtla-liquidity-forecaster/references/examples.md
- 003-skills/.claude/skills/nixtla-market-risk-analyzer/SKILL.md
- 003-skills/.claude/skills/nixtla-market-risk-analyzer/references/error-handling.md
- 003-skills/.claude/skills/nixtla-market-risk-analyzer/references/examples.md
- 003-skills/.claude/skills/nixtla-market-risk-analyzer/references/resources.md
- 003-skills/.claude/skills/nixtla-mcp-server-builder/SKILL.md
- 003-skills/.claude/skills/nixtla-mcp-server-builder/references/examples.md
- 003-skills/.claude/skills/nixtla-mcp-server-builder/references/server-template.md
- 003-skills/.claude/skills/nixtla-model-selector/SKILL.md
- 003-skills/.claude/skills/nixtla-model-selector/references/examples.md
- 003-skills/.claude/skills/nixtla-plugin-scaffolder/SKILL.md
- 003-skills/.claude/skills/nixtla-plugin-scaffolder/references/examples.md
- 003-skills/.claude/skills/nixtla-polymarket-analyst/SKILL.md
- 003-skills/.claude/skills/nixtla-polymarket-analyst/references/examples.md
- 003-skills/.claude/skills/nixtla-prd-to-code/SKILL.md
- 003-skills/.claude/skills/nixtla-prd-to-code/references/examples.md
- 003-skills/.claude/skills/nixtla-prod-pipeline-generator/SKILL.md
- 003-skills/.claude/skills/nixtla-prod-pipeline-generator/references/examples.md
- 003-skills/.claude/skills/nixtla-schema-mapper/SKILL.md
- 003-skills/.claude/skills/nixtla-schema-mapper/references/examples.md
- 003-skills/.claude/skills/nixtla-skills-bootstrap/SKILL.md
- 003-skills/.claude/skills/nixtla-skills-bootstrap/references/examples.md
- 003-skills/.claude/skills/nixtla-skills-index/SKILL.md
- 003-skills/.claude/skills/nixtla-skills-index/references/examples.md
- 003-skills/.claude/skills/nixtla-test-generator/SKILL.md
- 003-skills/.claude/skills/nixtla-test-generator/references/examples.md
- 003-skills/.claude/skills/nixtla-test-generator/references/test-patterns.md
- 003-skills/.claude/skills/nixtla-timegpt-finetune-lab/SKILL.md
- 003-skills/.claude/skills/nixtla-timegpt-finetune-lab/references/examples.md
- 003-skills/.claude/skills/nixtla-timegpt-lab/SKILL.md
- 003-skills/.claude/skills/nixtla-timegpt2-migrator/SKILL.md
- 003-skills/.claude/skills/nixtla-timegpt2-migrator/references/examples.md
- 003-skills/.claude/skills/nixtla-uncertainty-quantifier/SKILL.md
- 003-skills/.claude/skills/nixtla-uncertainty-quantifier/references/examples.md
- 003-skills/.claude/skills/nixtla-universal-validator/SKILL.md
- 003-skills/.claude/skills/nixtla-usage-optimizer/SKILL.md
- 005-plugins/nixtla-baseline-lab/skills/nixtla-baseline-review/SKILL.md
- 005-plugins/nixtla-baseline-lab/skills/nixtla-baseline-review/resources/examples.md
- 005-plugins/nixtla-forecast-explainer/skills/nixtla-explain-analyst/SKILL.md
- 005-plugins/nixtla-search-to-slack/skills/nixtla-model-benchmarker/SKILL.md
- 005-plugins/nixtla-search-to-slack/skills/nixtla-model-benchmarker/references/code-generation.md
- 005-plugins/nixtla-search-to-slack/skills/nixtla-model-benchmarker/references/models.md
- 005-plugins/nixtla-search-to-slack/skills/nixtla-model-benchmarker/references/scenarios.md
- 005-plugins/nixtla-search-to-slack/skills/nixtla-research-assistant/SKILL.md
- 005-plugins/nixtla-search-to-slack/skills/nixtla-research-assistant/references/examples.md
- 005-plugins/nixtla-search-to-slack/skills/nixtla-research-assistant/references/search-strategy.md
- 005-plugins/nixtla-search-to-slack/skills/nixtla-research-assistant/references/slack-integration.md
- 005-plugins/nixtla-search-to-slack/skills/timegpt-pipeline-builder/SKILL.md
- 005-plugins/nixtla-search-to-slack/skills/timegpt-pipeline-builder/references/code-snippets.md
- 005-plugins/nixtla-search-to-slack/skills/timegpt-pipeline-builder/references/requirements-gathering.md
```shell
python {baseDir}/scripts/generate_demo_notebook.py \
  --library timegpt \
  --dataset custom \
  --api-key $NIXTLA_API_KEY \
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Search for environment variable references in demo generator skill and scripts
rg -n 'NIXTLA.*API.*KEY' --type=md --type=py -g 'nixtla-demo-generator/**'
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 226

🏁 Script executed:

```shell
#!/bin/bash
# Find the nixtla-demo-generator directory and list its structure
find . -type d -name "*nixtla*" | head -20
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 1478

🏁 Script executed:

```shell
#!/bin/bash
# Look for the examples.md file mentioned in the review
find . -name "examples.md" -path "*nixtla*" -type f
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 2319

🏁 Script executed:

```shell
#!/bin/bash
# Search for NIXTLA_API_KEY and NIXTLA_TIMEGPT_API_KEY across the repo
rg 'NIXTLA_API_KEY|NIXTLA_TIMEGPT_API_KEY' -n --type md --type py --type txt
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 50390

🏁 Script executed:

```shell
#!/bin/bash
# Find nixtla-demo-generator SKILL.md
find . -path "*/nixtla-demo-generator/*" -name "SKILL.md" -o -path "*/nixtla-demo-generator/*" -name "*.md" | head -20
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 286

🏁 Script executed:

```shell
#!/bin/bash
# Check the examples.md file around line 70
sed -n '60,80p' ./003-skills/.claude/skills/nixtla-demo-generator/references/examples.md
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 642

🏁 Script executed:

```shell
#!/bin/bash
# Search for API key references specifically in nixtla-demo-generator
find ./003-skills/.claude/skills/nixtla-demo-generator -type f \( -name "*.md" -o -name "*.py" \) | xargs grep -l 'API_KEY\|api.key\|api.secret' 2>/dev/null | head -10
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 293

🏁 Script executed:

```shell
#!/bin/bash
# Read the SKILL.md for nixtla-demo-generator to see documented environment variables
cat ./003-skills/.claude/skills/nixtla-demo-generator/SKILL.md | head -150
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 4889

🏁 Script executed:

```shell
#!/bin/bash
# Check the script that actually uses the API key
grep -n 'NIXTLA.*API.*KEY\|api.key\|api_key' ./003-skills/.claude/skills/nixtla-demo-generator/scripts/generate_demo_notebook.py | head -20
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 225

🏁 Script executed:

```shell
#!/bin/bash
# Verify the exact context and usage in generate_demo_notebook.py
sed -n '310,320p' ./003-skills/.claude/skills/nixtla-demo-generator/scripts/generate_demo_notebook.py
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 545
Use NIXTLA_TIMEGPT_API_KEY to match the codebase standard.
Line 70 uses $NIXTLA_API_KEY, but TimeGPT across the entire codebase (CLAUDE.md, README.md, all skills, and plugins like baseline-lab) consistently uses NIXTLA_TIMEGPT_API_KEY. Update the example, SKILL.md documentation, and the generate_demo_notebook.py script to use the standard environment variable name for consistency.
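The fix is mechanical. A minimal sketch of the suggested wiring, assuming the script exposes an argparse `--api-key` flag (the parser below is illustrative, not the actual generate_demo_notebook.py code):

```python
import argparse
import os

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical wiring: default --api-key to the repo-standard
    # NIXTLA_TIMEGPT_API_KEY instead of the non-standard NIXTLA_API_KEY.
    parser = argparse.ArgumentParser(description="Generate demo notebook")
    parser.add_argument(
        "--api-key",
        default=os.environ.get("NIXTLA_TIMEGPT_API_KEY"),
        help="TimeGPT API key (defaults to $NIXTLA_TIMEGPT_API_KEY)",
    )
    return parser
```

An explicit `--api-key` argument still wins; the environment variable only supplies the default.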
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@003-skills/.claude/skills/nixtla-demo-generator/references/examples.md` at
line 70, Replace the non-standard env var usage $NIXTLA_API_KEY with
$NIXTLA_TIMEGPT_API_KEY everywhere for consistency: update the example command
line (--api-key $NIXTLA_API_KEY) in examples.md, the SKILL.md documentation
references, and the generate_demo_notebook.py script to read/use
NIXTLA_TIMEGPT_API_KEY (e.g., in any os.environ lookups or argparse defaults);
ensure the flag name (--api-key) remains the same but the environment variable
value and docs are updated to the new variable name.
```yaml
compatible-with: claude-code
tags: [nixtla, time-series, forecasting, causal-impact, event-analysis]
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if compatible-with is documented as a valid frontmatter field in the validator schema
# Search for validator schema documentation
fd -e md -e py -e json -e yaml validator schema | head -20
# Check validator script for accepted frontmatter fields
rg -n -A5 -B5 "compatible-with|frontmatter.*field" --type py
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 5224

🏁 Script executed:

```shell
#!/bin/bash
# Extract the field definitions from the validator script
echo "=== FIELD DEFINITIONS IN VALIDATOR ==="
sed -n '1,200p' 004-scripts/validate_skills_v2.py | grep -A20 "REQUIRED_FIELDS\|OPTIONAL_FIELDS\|DEPRECATED_FIELDS"
echo ""
echo "=== CHECK IF compatible-with IS DEFINED IN VALIDATOR ==="
rg -n "compatible-with" 004-scripts/validate_skills_v2.py
echo ""
echo "=== CHECK FULL FIELD DEFINITIONS ==="
sed -n '1,150p' 004-scripts/validate_skills_v2.py | head -100
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 4221

🏁 Script executed:

```shell
#!/bin/bash
# Check if compatible-with appears in the skill files mentioned in the review
echo "=== SEARCHING FOR compatible-with IN SKILL FILES ==="
rg -n "compatible-with" 003-skills/
echo ""
echo "=== CHECK NIXTLA SKILL FILES FOR FRONTMATTER ==="
for skill in "nixtla-event-impact-modeler" "nixtla-uncertainty-quantifier" "nixtla-timegpt2-migrator" "nixtla-experiment-architect" "nixtla-model-selector" "nixtla-plugin-scaffolder" "nixtla-polymarket-analyst" "nixtla-benchmark-reporter"; do
  if [ -f "003-skills/.claude/skills/$skill/SKILL.md" ]; then
    echo "--- $skill ---"
    head -15 "003-skills/.claude/skills/$skill/SKILL.md"
  fi
done
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 5863

🏁 Script executed:

```shell
#!/bin/bash
echo "=== CHECK GITHUB ACTIONS WORKFLOW ==="
fd -e yml -e yaml .github/workflows/ | xargs cat 2>/dev/null | head -100
echo ""
echo "=== CHECK IF VALIDATOR WARNINGS BLOCK THE BUILD ==="
grep -r "error\|fail\|exit" 004-scripts/validate_skills_v2.py | grep -i "warning\|threshold"
echo ""
echo "=== CHECK VALIDATOR EXIT CODE LOGIC ==="
tail -50 004-scripts/validate_skills_v2.py | grep -A10 "sys.exit\|return"
echo ""
echo "=== CHECK IF compatible-with IS MENTIONED IN ANY CONFIG/DOCS ==="
rg -n "compatible-with" --type md --type py --type yaml --type json -B2 -A2 | head -50
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 4668

🏁 Script executed:

```shell
#!/bin/bash
echo "=== FIND GITHUB ACTIONS WORKFLOW FILES ==="
find . -path './.git' -prune -o -name "*.yml" -o -name "*.yaml" | grep -i github | head -20
echo ""
echo "=== CHECK IF compatible-with WAS RECENTLY ADDED OR CONFIGURED ==="
git log --oneline --all -n 20 2>/dev/null | head -20
echo ""
echo "=== SEARCH FOR DISCUSSIONS ABOUT ADDING compatible-with FIELD ==="
rg -n "compatible-with" --type md README* CHANGELOG* 000-docs/ 2>/dev/null | head -30
echo ""
echo "=== CHECK VALIDATOR SCRIPT FOR FUTURE PLANS OR COMMENTS ABOUT compatible-with ==="
grep -i "claude-code\|compatible" 004-scripts/validate_skills_v2.py
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 2551

🏁 Script executed:

```shell
#!/bin/bash
echo "=== CHECK skills-validation.yml WORKFLOW ==="
cat .github/workflows/skills-validation.yml
echo ""
echo "=== CHECK plugin-validator.yml WORKFLOW ==="
cat .github/workflows/plugin-validator.yml
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 2638

🏁 Script executed:

```shell
#!/bin/bash
echo "=== CHECK RECENT COMMIT THAT ADDED compatible-with ==="
git show --stat 712f215 2>/dev/null | head -50
echo ""
echo "=== CHECK IF compatible-with CAUSES VALIDATION FAILURE WITH --fail-on-warn ==="
echo "From the validator logic:"
echo "- Line 409: warnings.append(f\"[frontmatter] Non-standard field: '{field}'\")"
echo "- Line after grep: elif total_warnings > 0 and args.fail_on_warn:"
echo "    print(f\"❌ Validation FAILED due to {total_warnings} warning(s)\")"
echo "    return 1"
echo ""
echo "CI uses: python 004-scripts/validate_skills_v2.py --fail-on-warn"
echo "Therefore: compatible-with warnings WILL cause CI to fail"
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 3290
Add compatible-with to OPTIONAL_FIELDS in validator schema to unblock build.
The compatible-with: claude-code field intentionally added in commit 712f215 triggers non-standard field warnings that fail the build when CI runs with --fail-on-warn. The validator treats it as unknown because it's not defined in OPTIONAL_FIELDS (line 409 in 004-scripts/validate_skills_v2.py).
Update the validator to include compatible-with in the optional fields set:
```python
OPTIONAL_FIELDS = {"model", "disable-model-invocation", "mode", "tags", "metadata", "compatible-with"}
```

The field is intentional per the commit message ("feat: upgrade all 35 skills to A grade (90+) on enterprise validator") and correctly appears across all 8 skill files for Claude Code compatibility metadata. The validator schema needs to recognize it as valid.
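As a sanity check, the warning-to-failure path can be modeled in a few lines (a simplified stand-in for validate_skills_v2.py, not its actual code):

```python
# Simplified model of the validator: unknown frontmatter keys produce
# warnings, and any warning fails the run when fail_on_warn is set.
REQUIRED_FIELDS = {"name", "description"}
OPTIONAL_FIELDS = {"model", "disable-model-invocation", "mode", "tags",
                   "metadata", "compatible-with"}  # proposed addition

def check_frontmatter(frontmatter, fail_on_warn=False):
    warnings = [
        f"[frontmatter] Non-standard field: '{field}'"
        for field in frontmatter
        if field not in REQUIRED_FIELDS | OPTIONAL_FIELDS
    ]
    passed = not (warnings and fail_on_warn)
    return passed, warnings
```

With `compatible-with` in the allowlist the field produces no warning, so `--fail-on-warn` runs pass; any truly unknown key still fails.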
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@003-skills/.claude/skills/nixtla-event-impact-modeler/SKILL.md` around lines
7 - 8, The validator is rejecting the intentionally added "compatible-with:
claude-code" field because OPTIONAL_FIELDS in validate_skills_v2.py doesn't
include it; update the OPTIONAL_FIELDS set (symbol: OPTIONAL_FIELDS) to add
"compatible-with" (e.g. OPTIONAL_FIELDS = {"model", "disable-model-invocation",
"mode", "tags", "metadata", "compatible-with"}) so the validator treats this
field as optional and the CI build no longer fails on the non-standard-field
warning.
```text
# VaR (95%): -2.15%
# Max Drawdown: -35.2%
# Sharpe Ratio: 0.95
# Recommended Position: $45,000 (45% of account)
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check the resources.md file for position sizing guidelines
rg -n -C3 "risk-per-trade|position size|Best Practices" 003-skills/.claude/skills/nixtla-market-risk-analyzer/references/
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 4780

🏁 Script executed:

```shell
cat -n 003-skills/.claude/skills/nixtla-market-risk-analyzer/references/examples.md | head -50
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 1859

🏁 Script executed:

```shell
find 003-skills/.claude/skills/nixtla-market-risk-analyzer -name "position_sizing.py" -type f
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 158

🏁 Script executed:

```shell
cat -n 003-skills/.claude/skills/nixtla-market-risk-analyzer/scripts/position_sizing.py | head -100
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 3936

🏁 Script executed:

```shell
cat -n 003-skills/.claude/skills/nixtla-market-risk-analyzer/scripts/position_sizing.py | tail -150
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 6136
Remove or clarify the Example 1 expected output as it contradicts documented best practices and the behavior shown in Example 2.
The example recommends $45,000 (45% of account) for a single position, but this conflicts with the skill's own design and documentation:
- **Script design uses a conservative multi-method approach**: The position_sizing.py script calculates positions via fixed-fractional (2% risk default), volatility-adjusted, and VaR-based methods, then selects the minimum—ensuring conservative sizing.
- **Contradicts Example 2**: Example 2 with the same account size and HIGHER asset volatility (45% vs 28%) recommends only $12,000 (24%) because it explicitly sets `--risk-per-trade 0.01`. This shows that without explicit conservative parameters, Example 1's default 2% risk-per-trade should produce a much lower allocation than 45%.
- **Misaligned with documented best practices**: resources.md emphasizes conservative approaches like Half Kelly and combining multiple sizing methods to choose the most restrictive—yet Example 1's expected output appears to violate this principle.
The expected output is either outdated or incorrectly calculated. Either correct it to reflect the script's actual behavior with the given parameters, or add parameters (like --risk-per-trade 0.01) to make the output align with conservative risk management standards.
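For intuition, the min-of-methods behavior described above can be sketched as follows (formulas and parameter names are illustrative assumptions, not the actual position_sizing.py implementation):

```python
def position_size(account, risk_per_trade, annual_vol, var_95,
                  stop_loss_pct=0.10, target_vol=0.15, max_var_loss=0.02):
    """Return the most conservative of three illustrative sizing methods."""
    fixed_fractional = account * risk_per_trade / stop_loss_pct
    vol_adjusted = account * target_vol / annual_vol
    var_based = account * max_var_loss / abs(var_95)
    return min(fixed_fractional, vol_adjusted, var_based)

# Under these assumed formulas, Example 1's inputs (100k account, 28% vol,
# -2.15% VaR) with the default 2% risk-per-trade cap the position at
# $20,000 (20%), nowhere near the $45,000 (45%) in the expected output.
size = position_size(100_000, risk_per_trade=0.02,
                     annual_vol=0.28, var_95=-0.0215)
```

The exact dollar figure depends on the real script's formulas, but any min-of-methods scheme with a 2% risk cap lands well below 45% of the account.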
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@003-skills/.claude/skills/nixtla-market-risk-analyzer/references/examples.md`
at line 24, The Example 1 expected output is inconsistent with the position
sizing logic; update Example 1 in examples.md to either remove or correct the
"$45,000 (45% of account)" line so it matches the actual behavior of
position_sizing.py (which takes the min of fixed-fractional, volatility-adjusted
and VaR methods and defaults to 2% risk-per-trade), or explicitly add
conservative parameters (e.g., include --risk-per-trade 0.01 or other flags used
in Example 2) so the printed recommendation reflects the conservative sizing
described in resources.md and produced by the position_sizing.py calculations.
```yaml
version: "1.0.0"
author: "Jeremy Longshore <jeremy@intentsolutions.io>"
license: MIT
compatible-with: claude-code
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if the validator schema or documentation defines the correct field name
rg -n "compatible" --type=md --type=yaml --type=json -g '!SKILL.md' -C3
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 50393

🏁 Script executed:

```shell
# First, locate and read the specific SKILL.md file mentioned in the review
find . -path "*nixtla-market-risk-analyzer*" -name "SKILL.md" -type f
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 229

🏁 Script executed:

```shell
# Search for all SKILL.md files to compare their frontmatter structure
find . -name "SKILL.md" -type f | head -20
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 1678

🏁 Script executed:

```shell
# Search for validator schema or documentation that defines valid SKILL.md fields
rg -l "validator|schema|frontmatter" -g "*.md" -g "*.json" | grep -i "skill\|valid\|config" | head -10
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 1240

🏁 Script executed:

```shell
# Read the target SKILL.md file (lines 1-50 to see frontmatter)
head -50 "./003-skills/.claude/skills/nixtla-market-risk-analyzer/SKILL.md"
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 2146

🏁 Script executed:

```shell
# Read another SKILL.md file for comparison (e.g., nixtla-baseline-review)
head -50 "./.claude/skills/nixtla-baseline-review/SKILL.md"
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 1694

🏁 Script executed:

```shell
# Search for validator configuration or schema files
find . -type f \( -name "*validator*" -o -name "*schema*" \) -path "*/tests/*" -o -path "*/.claude/*" | grep -E "\.(json|yaml|md)$" | head -10
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 862

🏁 Script executed:

```shell
# Read the skill standard/schema file to check what fields are valid
cat "./.claude/skills/skills-expert/references/skill-standard.md"
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 4329

🏁 Script executed:

```shell
# Also check the .claude/settings.json for any schema definitions
cat "./.claude/settings.json" | head -100
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 608

🏁 Script executed:

```shell
# Search for "compatible-with" usage across all SKILL.md files
rg "compatible-with" --type=md
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 599

🏁 Script executed:

```shell
# Search for the master spec file mentioned in skill-standard.md
cat "000-docs/098-SPEC-MASTER-claude-skills-standard.md" | head -100
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 160

🏁 Script executed:

```shell
# Count total usage of compatible-with across all SKILL.md files
rg "compatible-with" --type=md -c | wc -l
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 79

🏁 Script executed:

```shell
# Check if there's any validation script that references this field
cat "scripts/validate_skills.py" | grep -A5 -B5 "compatible" || echo "No matches or file not found"
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 182

🏁 Script executed:

```shell
# Look for any validator configuration or schema in the docs
find . -path "*000-docs*" -name "*.md" | xargs grep -l "frontmatter\|schema\|validator" | head -10
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 865

🏁 Script executed:

```shell
# Check what other frontmatter fields are used in all SKILL.md files
rg "^[a-z-]+:" "./003-skills/.claude/skills/nixtla-market-risk-analyzer/SKILL.md" --no-heading | head -20
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 625

🏁 Script executed:

```shell
# Compare frontmatter across multiple SKILL.md files
for file in $(find . -name "SKILL.md" -type f | head -5); do
  echo "=== $file ==="
  head -15 "$file" | grep -E "^[a-z-]+:" || true
done
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 1389
Remove non-standard frontmatter field compatible-with.
Per the official skill schema (./.claude/skills/skills-expert/references/skill-standard.md), compatible-with is not a documented frontmatter field. The schema defines optional fields as: allowed-tools, model, version, license, mode, and disable-model-invocation. This field will likely cause validation failures when assessing against the A-grade criteria. Remove line 7 or replace it with a schema-compliant alternative if the intended metadata belongs elsewhere.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@003-skills/.claude/skills/nixtla-market-risk-analyzer/SKILL.md` at line 7,
Remove the non-standard frontmatter field "compatible-with" from SKILL.md (the
line currently showing compatible-with: claude-code) because it isn't part of
the official skill schema; either delete that line or replace it with a
schema-compliant field such as one of the allowed optional fields
(allowed-tools, model, version, license, mode, or disable-model-invocation) if
you intended to convey similar metadata.
```yaml
compatible-with: claude-code
tags: [nixtla, time-series, skills-management, discovery, documentation]
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Read-only verification: find schema/validator rules and all usages of `compatible-with`
# Expected: either schema explicitly allows it, or we identify the canonical replacement key.
set -euo pipefail
echo "== Locate validator/schema docs in repo =="
fd -H "schema|validator|skills" . | sed -n '1,120p'
echo
echo "== Search for frontmatter key usage =="
rg -n --hidden -C2 '^compatible-with:' .
echo
echo "== Search for allowed frontmatter keys in docs/config =="
rg -n --hidden -C3 'frontmatter|allowed.*field|schema|compatible-with|tags' .
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 50394

🏁 Script executed:

```shell
cat -n "./000-docs/6767-m-DR-STND-claude-skills-frontmatter-schema.md" | head -200
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 7867
Remove compatible-with or replace with officially supported metadata.
The canonical frontmatter schema (6767-m-DR-STND-claude-skills-frontmatter-schema.md) does not include compatible-with as an official field. This field is used in 30+ skill files across the codebase but conflicts with the enforced schema specification. Either remove this field or add it to the official schema if it represents required metadata.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@003-skills/.claude/skills/nixtla-skills-index/SKILL.md` around lines 8 - 9,
Remove the unsupported frontmatter key "compatible-with" from the SKILL.md
frontmatter (or replace it with an officially supported field from the canonical
frontmatter schema); specifically, edit the frontmatter block in the SKILL.md
for the nixtla-skills-index and delete the line "compatible-with: claude-code"
(or map its semantics to a supported tag/field in the canonical schema) so the
file conforms to the enforced frontmatter standard.
```yaml
version: "1.0.1"
author: "Jeremy Longshore <jeremy@intentsolutions.io>"
license: MIT
compatible-with: claude-code
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Read-only verification: locate frontmatter schema/allowlist and warning logic.
set -euo pipefail
echo "== Find validator files =="
fd -H "validate*skills*.py|validator|frontmatter|schema"
echo
echo "== Find where 'compatible-with' and allowed frontmatter keys are handled =="
rg -n -C3 "compatible-with|frontmatter|allowed.*field|schema|unknown field|non-standard"
echo
echo "== Find strict fail-on-warn behavior =="
rg -n -C3 "fail-on-warn|warn|warning|strict"
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 50394

🏁 Script executed:

```shell
cat -n 003-skills/.claude/skills/nixtla-universal-validator/SKILL.md | head -20
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 907

🏁 Script executed:

```shell
# Search for validator code that handles non-standard fields or fail-on-warn
rg -n "unknown.*field|non.standard|compatible.with" --type py -A 3 -B 3
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 669

🏁 Script executed:

```shell
# Check if there's a validation schema or allowlist
rg -n "ALLOWED|allowed.*field|required.*field|frontmatter.*schema" --type py -A 2 -B 2
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 17895

🏁 Script executed:

```shell
# Check for fail-on-warn behavior and how warnings are handled in the validator
rg -n "fail.*warn|strict|exit.*warn|warnings.*fail" --type py 004-scripts/ -A 3 -B 3
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 6397

🏁 Script executed:

```shell
# Check CI/CD workflows for fail-on-warn or strict validation gates
rg -n "fail.*warn|strict|--strict" .github/workflows/ -A 2 -B 2
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 646

🏁 Script executed:

```shell
# Check if the validator has directory patterns or if all skills in 003-skills are validated
rg -n "003-skills|nixtla-universal-validator" .github/workflows/ -A 2 -B 2
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 344

🏁 Script executed:

```shell
# Check validate_skills_v2.py for which directories it validates
rg -n "003-skills|\.claude/skills" 004-scripts/validate_skills_v2.py -A 2 -B 2
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 393

🏁 Script executed:

```shell
# Confirm the skill directory structure and that this is indeed in the validation path
fd "nixtla-universal-validator" 003-skills/
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 60

🏁 Script executed:

```shell
# Find the exact location and confirm it exists
find 003-skills -name "*nixtla-universal-validator*" -type d
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 130

🏁 Script executed:

```shell
# If it exists, check if the SKILL.md will be validated by the CI script logic
cat -n 004-scripts/validate_skills_v2.py | sed -n '130,160p'
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 1650
Remove or register the compatible-with field.
The compatible-with: claude-code frontmatter key at line 8 is flagged as non-standard by the validator (not in the Anthropic or enterprise schema). When CI runs with --fail-on-warn, this warning becomes a validation failure and blocks the build. Either add this field to the validator's allowlist or remove it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@003-skills/.claude/skills/nixtla-universal-validator/SKILL.md` at line 8,
Remove or register the non-standard frontmatter key "compatible-with:
claude-code" in SKILL.md; specifically either delete the "compatible-with" line
from the SKILL.md frontmatter or add "compatible-with" (with allowed values such
as "claude-code") to the validator's allowlist/config so the schema recognizes
it and CI no longer fails on warnings.
```yaml
compatible-with: claude-code
tags: [nixtla, forecasting, benchmarking, time-series, statsforecast]
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Inspect all skill frontmatter occurrences and validator docs/config references.
# Expected result: either documented support for `compatible-with` or a canonical replacement key.
rg -n -C2 '^compatible-with:' --type=md
rg -n -C3 'compatible-with|frontmatter|schema|validator' --type=md
fd -HI 'validator|schema|skills' . | sed -n '1,120p'
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 50394

🏁 Script executed:

```shell
# Find and read the schema document referenced in output
find . -type f -name "*6767*schema*" -o -name "*skills*schema*" | head -20
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 169

🏁 Script executed:

```shell
# Look for the schema file directly
fd -t f "6767.*schema" . --max-one-result
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 311

🏁 Script executed:

```shell
# Search for documentation about compatible-with in skill docs
rg -i "compatible-with" --type md -B2 -A2 | head -100
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 2828

🏁 Script executed:

```shell
# Read the schema file to check supported fields
cat -n "000-docs/6767-m-DR-STND-claude-skills-frontmatter-schema.md" | head -100
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 4107

🏁 Script executed:

```shell
# Continue reading the schema file to find compatible-with
cat -n "000-docs/6767-m-DR-STND-claude-skills-frontmatter-schema.md" | head -200 | tail -100
```

Repository: intent-solutions-io/plugins-nixtla

Length of output: 3839
Remove compatible-with field—it is not in the official Claude Skills schema.
The canonical schema (000-docs/6767-m-DR-STND-claude-skills-frontmatter-schema.md) lists all supported frontmatter fields, and compatible-with is not among them. While tests pass (validators are lenient with unknown fields), this field should be removed to maintain compliance with the official standard. The supported optional fields are: allowed-tools, model, version, license, mode, and disable-model-invocation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@005-plugins/nixtla-baseline-lab/skills/nixtla-baseline-review/SKILL.md`
around lines 11 - 12, Remove the unsupported frontmatter field `compatible-with`
from SKILL.md (the top YAML block) so the file conforms to the canonical Claude
Skills schema; keep only supported frontmatter keys (e.g., allowed-tools, model,
version, license, mode, disable-model-invocation) and validate the remaining
YAML is well-formed while preserving other tags like `tags: [nixtla,
forecasting, benchmarking, time-series, statsforecast]`.
```yaml
version: 1.1.0
author: Jeremy Longshore <jeremy@intentsolutions.io>
license: MIT
compatible-with: claude-code
```
Resolve non-standard frontmatter key before merge.
Line 16 (compatible-with) is triggering the strict validator warning. Please either add this key to the validator schema or move compatibility signaling to a supported field to avoid downstream parsing inconsistencies.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@005-plugins/nixtla-forecast-explainer/skills/nixtla-explain-analyst/SKILL.md`
at line 16, The frontmatter contains a non-standard key "compatible-with"
causing validator warnings; fix by removing or relocating that key into a
supported field (e.g., add to an existing standard frontmatter property) or
update the validator schema to accept "compatible-with"; ensure changes
reference the SKILL.md frontmatter and the "compatible-with" key so
parsers/validators no longer emit errors.
```markdown
## 1. TimeGPT API v2.0 Released (Nov 20, 2025)
**Source**: GitHub Release | **Relevance**: High

### Summary
TimeGPT API v2.0 introduces multivariate forecasting support, improved accuracy on
irregular time series, and a new Python SDK with async/await capabilities.

### Key Points
- Multivariate support: Handle up to 100 related time series simultaneously
- 15% accuracy improvement on M5 competition benchmarks
- New async Python client for better performance
- Breaking change: Old SDK deprecated, migration guide available

### Why This Matters
Enables enterprise use cases requiring multi-series forecasting without building
custom solutions. The async client significantly speeds up batch predictions.

[View Release Notes](https://github.com/Nixtla/nixtla/releases/tag/v2.0.0)
```
Avoid presenting unverified product claims as canonical example output.
Lines 14–31 include specific release date/performance claims that may age or be inaccurate, but the example reads as authoritative. Prefer placeholders (or clearly mark as fictional sample data) to prevent documentation drift and misinformation.
Suggested doc-safe rewrite

```diff
-## 1. TimeGPT API v2.0 Released (Nov 20, 2025)
-**Source**: GitHub Release | **Relevance**: High
+## 1. [Example] TimeGPT API update
+**Source**: [Official release link] | **Relevance**: High

 ### Summary
-TimeGPT API v2.0 introduces multivariate forecasting support, improved accuracy on
-irregular time series, and a new Python SDK with async/await capabilities.
+[2-3 sentence factual summary based on the retrieved source]
...
-[View Release Notes](https://github.com/Nixtla/nixtla/releases/tag/v2.0.0)
+[View Source](https://github.com/Nixtla/nixtla/releases)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@005-plugins/nixtla-search-to-slack/skills/nixtla-research-assistant/references/examples.md`
around lines 14 - 31, The example entry titled "1. TimeGPT API v2.0 Released
(Nov 20, 2025)" contains specific dates and performance claims presented as
factual; replace those concrete claims with neutral placeholders or an explicit
"fictional/example" disclaimer and remove any specific metrics (e.g., "15%
accuracy improvement", "Handle up to 100 related time series") so the example
reads as a template rather than an authoritative claim; update the heading and
Summary/Key Points to use non-assertive language like "Example: TimeGPT API v2.0
(date/metrics illustrative only)" to prevent documentation drift.
Summary
Verification
All 35 skills pass enterprise validation at A grade (90+):
Test plan

- `python3 scripts/validate-skills-schema.py --enterprise --verbose` on all skills

🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
Documentation
- Added new reference documentation files (examples.md, error-handling.md, outputs.md) across all Nixtla skills with detailed usage walkthroughs and troubleshooting steps.

Enhancements