7 changes: 7 additions & 0 deletions README.md
@@ -98,6 +98,13 @@ Documentation is available at https://kiro.dev/docs/powers/

---

### mcp-agentic
**MCP Agentic** - Connect MCP clients to ACP-compatible agents through a local MCP bridge built on stdio Bus with an embedded runtime. Supports in-process agents, external worker processes, session management, and one-shot delegation.

**MCP Servers:** mcp-agentic

---

### postman
**API Testing with Postman** - Automate API testing and collection management with Postman - create workspaces, collections, environments, and run tests programmatically.

469 changes: 469 additions & 0 deletions mcp-agentic/POWER.md

Large diffs are not rendered by default.

22 changes: 22 additions & 0 deletions mcp-agentic/mcp.json
@@ -0,0 +1,22 @@
{
"mcpServers": {
"mcp-agentic": {
"command": "npx",
"args": [
"-y",
"@stdiobus/mcp-agentic"
],
"env": {
"NODE_ENV": "production",
"LOG_LEVEL": "info"
},
"disabled": false,
"autoApprove": [
"bridge_health",
"agents_discover",
"sessions_status"
],
"disabledTools": []
}
}
}
85 changes: 85 additions & 0 deletions mcp-agentic/steering/activation-and-scope.md
@@ -0,0 +1,85 @@
# Activation and Scope

## When to use this power

Use the MCP Agentic power **only** when external agent delegation is actually required.

### Activate for:

- **External agent execution** — tasks that require specialized agents (in-process or worker-based)
- **Agent discovery** — finding available agents and their capabilities via `agents_discover`
- **Provider selection** — choosing the right AI provider (OpenAI, Anthropic, Gemini) for a task based on model availability or capabilities
- **Runtime parameter tuning** — dynamically adjusting temperature, model, systemPrompt, maxTokens, and other generation parameters per request via `runtimeParams`
- **Session-based delegation** — multi-step work that needs session continuity via `sessions_*` tools
- **One-shot delegation** — single tasks via `tasks_delegate`
- **Structured result collection** — retrieving formatted outputs from agents
- **Health checks** — verifying bridge readiness via `bridge_health`

### Do NOT activate for:

- **Purely local editing** — file modifications that the MCP client can handle directly
- **Direct reasoning** — analysis or planning that the MCP client can complete itself
- **Speculative orchestration** — multi-agent setups without a concrete task
- **Simple queries** — questions that don't require external agent capabilities
- **Configuration changes** — modifications to local settings or files

## Session continuity check

Before opening a new session, **always check** whether the user is continuing an existing delegated task:

1. **Review conversation history** — look for previous delegation operations
2. **Check for session references** — identify any active `sessionId` values
3. **Assess user intent** — determine if this is follow-up work
4. **Use existing session** — call `sessions_prompt` with the existing `sessionId` when continuity is intended
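The steps above can be sketched as a small helper. The types and names below are hypothetical illustrations; in practice the MCP client applies this logic when choosing between `sessions_create` and `sessions_prompt`:

```typescript
// Hypothetical sketch: decide whether to reuse an existing delegated session.
interface DelegationContext {
  activeSessionId?: string;    // sessionId seen earlier in the conversation
  userMessage: string;         // latest user request
  sameAgentRequested: boolean; // task targets the agent used before
}

const CONTINUITY_CUES = ['continue', 'also', 'now', 'next', 'then'];

function chooseSessionTool(ctx: DelegationContext): 'sessions_prompt' | 'sessions_create' {
  const cued = CONTINUITY_CUES.some((w) =>
    ctx.userMessage.toLowerCase().split(/\W+/).includes(w),
  );
  // Reuse the session only when one exists AND the request looks like follow-up work.
  if (ctx.activeSessionId && (cued || ctx.sameAgentRequested)) {
    return 'sessions_prompt';
  }
  return 'sessions_create';
}
```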

### Indicators of session continuity:

- User says "continue", "also", "now", "next", "then"
- Task is clearly related to previous delegation
- Same agent is referenced
- User expects context from previous interaction

### When to open a new session:

- User explicitly requests a new task
- Different agent or capability is needed
- Previous session was explicitly closed
- Task is unrelated to previous work
- Previous session has expired (TTL or idle timeout)

## Scope boundaries

This power handles **agent delegation and session management**. Specifically, it:

- Validates tool inputs via Zod schemas (prompt size, metadata size, required fields)
- Manages session lifecycle (create, prompt, status, close, cancel)
- Routes requests to the correct executor (in-process or worker)
- Enforces backpressure and input size limits
- Maps errors to MCP-compatible responses

It does NOT:

- Interpret prompt content or make decisions based on what the user asked
- Run inference, heuristics, or AI logic (that's the agent's job)
- Transform agent responses — results are passed through unchanged
- Implement ACP protocol logic — workers handle their own protocol

## Validation before activation

Before using this power, verify:

1. **Task requires external delegation** — cannot be completed by the MCP client alone
2. **Agent exists** — target capability is available (use `agents_discover`)
3. **Provider is available** — if a specific AI provider is needed, check the `providers` field in the `agents_discover` response to confirm the provider and desired model are registered
4. **Bridge is healthy** — use `bridge_health` if uncertain
5. **User intent is clear** — task requirements are well-defined
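The checklist can be turned into a caller-side pre-flight sketch. The `callTool` helper and the response shapes below are assumptions for illustration, not the bridge's actual API:

```typescript
// Hypothetical: a generic MCP tool-call function provided by the client.
type CallTool = (name: string, args?: Record<string, unknown>) => Promise<any>;

// Assumed (simplified) response shapes for bridge_health and agents_discover.
interface Health { status: string }
interface Discovery { agents: { id: string; capabilities?: string[] }[] }

async function preflight(callTool: CallTool, capability: string): Promise<string | null> {
  const health = (await callTool('bridge_health')) as Health;
  if (health.status !== 'ok') return null; // bridge not ready
  const discovery = (await callTool('agents_discover')) as Discovery;
  const agent = discovery.agents.find((a) => a.capabilities?.includes(capability));
  return agent ? agent.id : null; // agent id to delegate to, or null
}
```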

## Deactivation criteria

Stop using this power when:

- Task is complete and no follow-up is expected
- User explicitly requests to stop delegation
- All sessions have been closed
- Bridge becomes unavailable
- Task can be completed locally without delegation
257 changes: 257 additions & 0 deletions mcp-agentic/steering/configuration.md
@@ -0,0 +1,257 @@
# Configuration

## McpAgenticServerConfig

`McpAgenticServer` is configured via a `McpAgenticServerConfig` object passed to the constructor. There is no external config file or `AGENT_CONFIG_PATH` environment variable.

```typescript
interface McpAgenticServerConfig {
agents?: AgentHandler[]; // Pre-register in-process agents
defaultAgentId?: string; // Default agent when none specified
maxConcurrentRequests?: number; // Backpressure limit (default: 50)
maxPromptBytes?: number; // Max prompt size (default: 1 MiB)
maxMetadataBytes?: number; // Max metadata size (default: 64 KiB)
silent?: boolean; // Suppress executor stderr logging (default: false)
}
```

## Quick start with Factory API (recommended)

The Factory API is the recommended way to create providers and multi-provider agents:

```typescript
import {
McpAgenticServer,
openAI,
anthropic,
gemini,
createMultiProviderAgent,
} from '@stdiobus/mcp-agentic';

const agent = createMultiProviderAgent({
id: 'companion',
defaultProviderId: 'openai',
providers: [
openAI({ apiKey: process.env.OPENAI_API_KEY ?? '', models: ['gpt-4o'] }),
anthropic({ apiKey: process.env.ANTHROPIC_API_KEY ?? '', models: ['claude-sonnet-4-20250514'] }),
gemini({ apiKey: process.env.GOOGLE_AI_API_KEY ?? '', models: ['gemini-2.0-flash'] }),
],
capabilities: ['general'],
systemPrompt: 'You are a helpful assistant.',
defaults: { temperature: 0.7 },
});

const server = new McpAgenticServer({ defaultAgentId: 'companion' })
.register(agent);

await server.start();
```

### Factory options

Each factory accepts flat, typed options with Zod validation at call time:

| Factory | Options | Required | Optional |
|---------|---------|----------|----------|
| `openAI()` | `OpenAIOptions` | `apiKey: string`, `models: string[]` | `defaults?: RuntimeParams` |
| `anthropic()` | `AnthropicOptions` | `apiKey: string`, `models: string[]` | `defaults?: RuntimeParams` |
| `gemini()` | `GeminiOptions` | `apiKey: string`, `models: string[]` | `defaults?: RuntimeParams` |

Invalid options (empty `apiKey`, empty `models`) throw `BridgeError.config` immediately.

If the provider SDK is not installed, the factory throws `BridgeError.config` with an installation instruction.

### createMultiProviderAgent

`createMultiProviderAgent()` replaces manual `ProviderRegistry` + `MultiProviderCompanionAgent` wiring:

```typescript
interface CreateMultiProviderAgentConfig {
id: string; // Unique agent identifier
providers: AIProvider[]; // Array of provider instances
defaultProviderId: string; // Must match one provider's id
capabilities?: string[]; // Agent capabilities for discovery
systemPrompt?: string; // Default system prompt
defaults?: RuntimeParams; // Agent-level default parameters
}
```

Validation: empty `providers`, duplicate provider ids, or unknown `defaultProviderId` throw `BridgeError.config`.

### Custom providers with defineProvider

Use `defineProvider()` to create custom providers with Zod validation and discoverable metadata:

```typescript
import { defineProvider } from '@stdiobus/mcp-agentic';
import { z } from 'zod';

const myProvider = defineProvider({
id: 'my-llm',
kind: 'llm',
displayName: 'My LLM',
description: 'Custom LLM integration',
capabilities: { streaming: true, tools: false, vision: false, jsonMode: false },
schema: z.object({
apiKey: z.string().min(1),
models: z.array(z.string()).nonempty(),
}),
create: (options) => ({
id: 'my-llm',
models: options.models,
async complete(messages, params) {
// Your implementation here
return { text: '...', stopReason: 'end_turn' };
},
}),
});

// Use alongside built-in factories
const agent = createMultiProviderAgent({
id: 'companion',
defaultProviderId: 'openai',
providers: [
openAI({ apiKey: '...', models: ['gpt-4o'] }),
myProvider({ apiKey: '...', models: ['my-model'] }),
],
});
```

Static metadata (`factory.id`, `factory.kind`, `factory.schema`, `factory.capabilities`) is accessible without calling the factory. `agents_discover` automatically includes `displayName`, `description`, and `capabilities` for custom providers.
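The static-metadata pattern can be illustrated in plain TypeScript. This is a sketch of the idea, not the library's implementation: the factory is a callable function with metadata attached as properties via `Object.assign`.

```typescript
// Sketch: attach discoverable metadata to a factory function.
interface FactoryMeta {
  id: string;
  kind: string;
  capabilities: Record<string, boolean>;
}

function makeFactory<O, P>(meta: FactoryMeta, create: (options: O) => P) {
  // The returned value is callable AND carries meta as static properties.
  return Object.assign((options: O) => create(options), meta);
}

const demo = makeFactory(
  { id: 'my-llm', kind: 'llm', capabilities: { streaming: true } },
  (options: { models: string[] }) => ({ id: 'my-llm', models: options.models }),
);
```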

## In-process agents

Register agents programmatically using the fluent API:

```typescript
import { McpAgenticServer } from '@stdiobus/mcp-agentic';

const server = new McpAgenticServer({ defaultAgentId: 'my-agent' })
.register({
id: 'my-agent',
capabilities: ['code-analysis', 'debugging'],
async prompt(sessionId, input) {
return { text: `Response: ${input}`, stopReason: 'end_turn' };
},
});
```

### AgentHandler interface

Agents implement the `AgentHandler` interface:

```typescript
interface AgentHandler {
readonly id: string;
readonly capabilities?: string[];
prompt?(sessionId: string, input: string, opts?: PromptOpts): Promise<AgentResult>;
stream?(sessionId: string, input: string, opts?: StreamOpts): AsyncIterable<AgentEvent>;
onSessionCreate?(sessionId: string, metadata?: Record<string, unknown>): Promise<void>;
onSessionClose?(sessionId: string, reason?: string): Promise<void>;
cancel?(sessionId: string, requestId?: string): Promise<void>;
}
```

At minimum, implement `id` and either `prompt` or `stream`.
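For a streaming agent, implement `stream` as an async generator. The event shape below (`{ type, text }`) is an assumption for illustration, and the local type declaration stands in for the package's exports:

```typescript
// Simplified local stand-in for the package's event type (assumed shape).
interface AgentEvent { type: 'text' | 'done'; text?: string }

const streamingAgent = {
  id: 'streamer',
  capabilities: ['chunked-output'],
  // Async generator: yield incremental events, then signal completion.
  async *stream(sessionId: string, input: string): AsyncIterable<AgentEvent> {
    for (const word of input.split(' ')) {
      yield { type: 'text', text: word };
    }
    yield { type: 'done' };
  },
};
```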

## Worker configuration

Register external worker processes via `registerWorker()`:

```typescript
server.registerWorker({
id: 'py-agent',
command: 'python',
args: ['agent.py'],
env: { OPENAI_API_KEY: process.env.OPENAI_API_KEY ?? '' },
capabilities: ['data-analysis'],
});
```

### WorkerConfig

```typescript
interface WorkerConfig {
id: string; // Unique worker/agent ID
command: string; // Executable to spawn
args: string[]; // Command-line arguments
env?: Record<string, string>; // Additional environment variables
capabilities?: string[]; // Advertised capabilities
}
```

## Backpressure

`maxConcurrentRequests` (default: 50) limits the number of in-flight tool handler calls. When the limit is reached, new requests are rejected with a retryable `BridgeError.transport('Server overloaded')`.
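Callers can treat the overload rejection as transient. A minimal backoff sketch, assuming the thrown error surfaces a `retryable` flag (an assumption about the error shape, not a documented property):

```typescript
// Hypothetical caller-side retry for retryable transport errors.
async function withRetry<T>(op: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let delayMs = 100;
  for (let attempt = 1; ; attempt++) {
    try {
      return await op();
    } catch (err: any) {
      // Assumed error shape: { retryable: boolean } on the transport error.
      if (!err?.retryable || attempt >= maxAttempts) throw err;
      await new Promise((r) => setTimeout(r, delayMs));
      delayMs *= 2; // exponential backoff
    }
  }
}
```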

## Input size validation

- `maxPromptBytes` (default: 1,048,576 / 1 MiB) — maximum prompt size in bytes
- `maxMetadataBytes` (default: 65,536 / 64 KiB) — maximum metadata size in bytes (JSON-serialized)
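Limits are measured in UTF-8 bytes, not characters, so multi-byte text counts for more than its string length. A caller-side pre-check sketch using the default limits:

```typescript
// Pre-validate payload sizes before calling the bridge (defaults from above).
const MAX_PROMPT_BYTES = 1_048_576; // 1 MiB
const MAX_METADATA_BYTES = 65_536;  // 64 KiB

function fitsLimits(prompt: string, metadata?: Record<string, unknown>): boolean {
  const enc = new TextEncoder();
  if (enc.encode(prompt).byteLength > MAX_PROMPT_BYTES) return false;
  if (metadata && enc.encode(JSON.stringify(metadata)).byteLength > MAX_METADATA_BYTES) {
    return false;
  }
  return true;
}
```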

## Session limits

The `InProcessExecutor` enforces a configurable `maxSessions` limit (default: 100). Sessions also have:

- **Session TTL** (`sessionTtlMs`, default: 3,600,000 / 1 hour) — maximum session lifetime
- **Idle timeout** (`sessionIdleMs`, default: 600,000 / 10 minutes) — maximum idle time before expiry

## RuntimeParams

### Fields

| Field | Type | Range | Description |
|-------|------|-------|-------------|
| `model` | `string` | — | Model identifier |
| `temperature` | `number` | 0–2 | Sampling temperature |
| `maxTokens` | `number` | positive int | Max tokens to generate |
| `topP` | `number` | 0–1 | Nucleus sampling |
| `topK` | `number` | positive int | Top-K sampling |
| `stopSequences` | `string[]` | — | Stop sequences |
| `systemPrompt` | `string` | — | System prompt override |
| `providerSpecific` | `Record<string, unknown>` | — | Provider-native parameters |

### Merge priority

```
ProviderConfig.defaults < session metadata.runtimeParams < prompt-level runtimeParams
```

Only defined (non-`undefined`) fields from higher-priority layers override lower ones. `providerSpecific` is shallow-merged across all layers.
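The merge rules can be sketched as follows (an illustration of the documented behavior, not the library's internal code; the field subset is trimmed for brevity):

```typescript
interface RuntimeParams {
  model?: string;
  temperature?: number;
  maxTokens?: number;
  systemPrompt?: string;
  providerSpecific?: Record<string, unknown>;
}

// Merge lowest-priority layer first; later layers win, undefined never overrides.
function mergeRuntimeParams(...layers: (RuntimeParams | undefined)[]): RuntimeParams {
  const out: RuntimeParams = {};
  for (const layer of layers) {
    if (!layer) continue;
    for (const [key, value] of Object.entries(layer)) {
      if (value === undefined) continue;
      if (key === 'providerSpecific') {
        // Shallow-merge providerSpecific across all layers.
        out.providerSpecific = { ...out.providerSpecific, ...(value as object) };
      } else {
        (out as any)[key] = value;
      }
    }
  }
  return out;
}

const merged = mergeRuntimeParams(
  { temperature: 0.7, providerSpecific: { seed: 1 } },   // ProviderConfig.defaults
  { model: 'gpt-4o' },                                   // session metadata.runtimeParams
  { temperature: 0.2, providerSpecific: { user: 'a' } }, // prompt-level runtimeParams
);
```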

## Peer dependencies

Provider SDKs are peer/optional dependencies. Install only the SDKs you need:

```bash
npm install openai # OpenAI
npm install @anthropic-ai/sdk # Anthropic
npm install @google/generative-ai # Google Gemini
```

If a provider SDK is not installed, the factory throws `BridgeError.config` with an installation instruction.

## CLI entry point

The CLI (`src/cli/server.ts`) is a reference/diagnostics server that registers no agents; it starts the MCP server and emits a warning on stderr. Use `bridge_health` and `agents_discover` for diagnostics. For actual delegation, create your own entry point that calls `server.register()` before `server.start()`.

## Low-level API (class-based, deprecated)

The class-based API (`new OpenAIProvider(config)`, `new ProviderRegistry()`, manual wiring) still works but is deprecated. Use the Factory API above instead.

```typescript
// ⚠️ Deprecated — use openAI(), anthropic(), gemini() factories instead
import { OpenAIProvider, ProviderRegistry, MultiProviderCompanionAgent } from '@stdiobus/mcp-agentic';

const registry = new ProviderRegistry();
registry.register(new OpenAIProvider({
credentials: { apiKey: process.env.OPENAI_API_KEY ?? '' },
models: ['gpt-4o'],
}));

const agent = new MultiProviderCompanionAgent({
id: 'companion',
defaultProviderId: 'openai',
registry,
});
```