
Add llm-claude-code plugin for Claude Code CLI#1

Merged
btucker merged 11 commits into main from claude/llm-plugin-claude-cli-RVNoQ
Jan 2, 2026
Conversation

Owner

@btucker btucker commented Jan 2, 2026

Implement an LLM model plugin that uses the Claude Code CLI (claude)
instead of the Anthropic API SDK. This allows users to leverage their
existing Claude Code subscription without requiring a separate API key.

Features:

  • Support for multiple Claude models (opus, sonnet, haiku)
  • Streaming via --output-format stream-json
  • Non-streaming via --output-format json
  • Conversation context handling
  • Configurable options (max_tokens, timeout, system_prompt, etc.)
  • Model aliases for convenience (cc, opus, sonnet, haiku)
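The streaming/non-streaming split above can be sketched as a small argv builder. This is a hypothetical helper, not the plugin's actual code: the `--output-format` values (`stream-json`, `json`) come from this PR, while the function name, the `-p` print flag, and the defaults are illustrative assumptions.

```python
def build_claude_args(prompt: str, model: str = "sonnet", stream: bool = False) -> list[str]:
    """Assemble argv for one headless Claude Code call.

    Only the --output-format values are taken from the plugin
    description; the rest of the flag layout is an assumption.
    """
    args = ["claude", "-p", prompt, "--model", model]
    args += ["--output-format", "stream-json" if stream else "json"]
    return args
```

Keeping command construction in one pure function like this also makes it easy to unit-test without spawning the CLI.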

claude added 11 commits January 1, 2026 23:35
Implement an LLM model plugin that uses the Claude Code CLI (`claude`)
instead of the Anthropic API SDK. This allows users to leverage their
existing Claude Code subscription without requiring a separate API key.

Features:
- Support for multiple Claude models (opus, sonnet, haiku)
- Streaming via --output-format stream-json
- Non-streaming via --output-format json
- Conversation context handling
- Configurable options (max_tokens, timeout, system_prompt, etc.)
- Model aliases for convenience (cc, opus, sonnet, haiku)

- Add supports_schema = True to enable LLM's schema feature
- Implement _execute_with_schema method for schema-based responses
- Pass schema via --json-schema CLI flag
- Extract structured output from various response formats
- Update README with schema usage examples

- Use --append-system-prompt by default (preserves agentic behavior)
- Add replace_system_prompt option to use --system-prompt instead
- Remove inline "System:" prefix approach
- Update README with system prompt documentation
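"Extract structured output from various response formats" could look like the sketch below. `extract_structured` is a hypothetical name, and the two payload shapes it checks (a dict under `"result"`, or `"result"` as JSON-encoded text) are assumptions about what the CLI might return.

```python
import json
from typing import Any

def extract_structured(payload: str) -> Any:
    """Best-effort extraction of schema-conforming output.

    Checks two assumed shapes: a dict under "result", or "result" as a
    JSON-encoded string; otherwise returns the parsed payload itself.
    """
    data = json.loads(payload)
    result = data.get("result") if isinstance(data, dict) else None
    if isinstance(result, dict):
        return result
    if isinstance(result, str):
        try:
            return json.loads(result)
        except json.JSONDecodeError:
            pass
    return data
```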

New options:
- add_dir: Additional directories for Claude to access
- permission_mode: Control permission prompts (default/acceptEdits/bypassPermissions)
- continue_conversation: Continue most recent conversation
- resume: Resume a specific session by ID
- cwd: Set working directory for execution
- mcp_config: Path to MCP configuration file
- verbose: Enable verbose logging

These options map to Claude Code CLI flags for headless mode automation.
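One way such an option-to-flag mapping might be wired is shown below. This is a generic sketch, not the plugin's implementation: the option names come from the list above, while the kebab-case flag spellings and the helper name are assumptions.

```python
def options_to_cli_flags(opts: dict) -> list[str]:
    """Translate plugin options into Claude Code CLI flags.

    Option names mirror the commit message above; the exact flag
    spellings are illustrative assumptions.
    """
    valued = {
        "add_dir": "--add-dir",
        "permission_mode": "--permission-mode",
        "resume": "--resume",
        "mcp_config": "--mcp-config",
    }
    args: list[str] = []
    for option, flag in valued.items():
        value = opts.get(option)
        if value is not None:
            args += [flag, str(value)]
    if opts.get("verbose"):
        args.append("--verbose")
    # cwd is not a flag here: it would be passed as the subprocess
    # working directory instead.
    return args
```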

- Capture session_id from Claude Code CLI responses
- Store session_id on response object for conversation continuity
- Auto-resume Claude Code sessions when continuing LLM conversations
- Remove confusing continue_conversation option (used wrong session)
- Keep resume option for manual session management
- Update documentation to reflect new behavior

When using model.conversation(), the plugin now:
1. Captures session_id from first response
2. Uses --resume on subsequent turns (instead of embedding text context)
3. Falls back to text context if session_id unavailable
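The capture step of this flow can be sketched as a parser over stream-json lines. Only `session_id` and the stream-json format are named in the PR; the per-event field names (`"text"`) and the one-JSON-object-per-line framing are assumptions.

```python
import json

def scan_stream(lines):
    """Walk assumed stream-json output, collecting text and session_id.

    Each line is treated as one JSON event; "session_id" is captured
    wherever it appears so a later turn can pass it to --resume.
    """
    session_id = None
    text_parts = []
    for raw in lines:
        raw = raw.strip()
        if not raw:
            continue
        event = json.loads(raw)
        session_id = event.get("session_id", session_id)
        if isinstance(event.get("text"), str):
            text_parts.append(event["text"])
    return "".join(text_parts), session_id
```

If no event carries a `session_id`, the second return value stays `None`, which is the cue to fall back to embedding text context.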

- Add pytest, pytest-cov, and coverage dependencies
- Configure pytest with coverage reporting (80% threshold)
- Add comprehensive test suite with 60 tests covering:
  - Model initialization and options
  - Streaming and non-streaming execution
  - Schema support for structured output
  - Conversation session management
  - Error handling
- Achieve 90% code coverage

- Add tests.yml for CI on Python 3.10-3.14
- Add publish.yml for PyPI releases

- Resolve pyproject.toml conflicts (keep comprehensive config)
- Remove src/ directory (use flat module structure)
- Remove simple test_plugin.py (replaced by comprehensive test suite)
@btucker btucker merged commit a309fe7 into main Jan 2, 2026
5 checks passed
@btucker btucker deleted the claude/llm-plugin-claude-cli-RVNoQ branch January 2, 2026 04:38