| - A provider account and API key exported in your shell: | ||
| - Fireworks: `FIREWORKS_API_KEY` | ||
| - Parasail: `PARASAIL_API_KEY` | ||
| - For Fireworks, the model must exist on your local disk (HuggingFace download or an Oumi training output). |
Not sure if this statement is 100% accurate given PR https://github.com/oumi-ai/oumi/pull/2360/changes.
On the other hand, we have not exposed that functionality through the CLI, so I think we are OK.
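For reference, the provider keys mentioned in the diff are plain environment variables; a minimal shell setup looks like this (the key values shown are placeholders):

```shell
# Export whichever provider key you use (placeholder values shown)
export FIREWORKS_API_KEY="fw-your-key-here"
export PARASAIL_API_KEY="ps-your-key-here"
```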
|
|
||
| | Tool | Purpose | | ||
| |----------------------|--------------------------------------------------------------| | ||
| | `search_configs` | Fuzzy-search the ~500 Oumi YAML configs by path + content | | ||
| | `get_config` | Fetch full details (path, model, dataset, raw YAML) for one config | | ||
| | `launch_job` | Launch an Oumi job locally or on a cloud provider | | ||
| | `poll_status` | Poll a running job's status | | ||
| | `stop_cluster` / `down_cluster` | Manage SkyPilot clusters | | ||
| | `cancel_job` | Cancel a running job | | ||
| | `fetch_logs` | Retrieve logs for a job | | ||
| | `list_running_jobs` / `list_completed_jobs` | Inventory of tracked jobs | |
Some of the tool names in this doc are wrong, and some tools are missing.

Wrong tool names:

| In the doc | Actual name in server.py |
|---|---|
| `launch_job` | `run_oumi_job` |
| `poll_status` | `get_job_status` |
| `fetch_logs` | `get_job_logs` |
| `list_running_jobs` / `list_completed_jobs` | `list_jobs` (these two are resources, not tools: `jobs://running`, `jobs://completed`) |

Missing tools:

- `get_started`: the onboarding tool the prose below the table already tells users to call first
- `pre_flight_check`: HF auth / hardware / local-path validation before launch
- `validate_config`: YAML schema validation
- `list_categories`: config categories, model families, API providers
- `get_docs`: searches indexed Oumi API docs
- `list_modules`: companion listing for `get_docs`
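To illustrate the kind of matching a fuzzy config search performs, here is a minimal sketch; the config paths, function signature, and scoring logic are all illustrative assumptions, not the server's actual implementation:

```python
from difflib import SequenceMatcher

# Hypothetical config paths; the real server indexes ~500 Oumi YAML configs
CONFIGS = [
    "configs/recipes/llama3_1/sft/8b_full_train.yaml",
    "configs/recipes/qwen2_5/evaluation/7b_eval.yaml",
    "configs/recipes/phi3/inference/infer.yaml",
]

def search_configs(query: str, configs=CONFIGS, limit=3) -> list[str]:
    """Rank config paths by rough string similarity to the query."""
    return sorted(
        configs,
        key=lambda c: SequenceMatcher(None, query.lower(), c.lower()).ratio(),
        reverse=True,
    )[:limit]
```

A query like `search_configs("llama sft")` ranks the Llama SFT recipe first because it shares the longest matching substrings with the query.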
| **Resources** (workflow guidance strings the assistant can read) | ||
|
|
||
| - `guidance://mle-workflow` — overall ML engineering workflow | ||
| - `guidance://mle-train`, `mle-synth`, `mle-analyze`, `mle-eval`, `mle-infer` — per-command guidance | ||
| - `guidance://cloud-launch` — cloud job anatomy and setup patterns | ||
| - `guidance://post-training` — post-training steps (download weights, eval, teardown) |
The server also exposes:

- `jobs://running`
- `jobs://completed`
- `jobs://{job_id}/logs`
|
|
||
| **Prompts** (pre-built prompt templates for common tasks). See the source under `oumi.mcp.prompts` for the current list. |
There are no MCP prompts registered; this section can be removed.
| ## Path Rules (Important) | ||
|
|
||
| Because the server executes real commands against the user's machine or cloud account, it's strict about paths: | ||
|
|
||
| - Every path-sensitive tool requires `client_cwd` — the user's project root. | ||
| - Config paths may be absolute or relative to `client_cwd`. | ||
| - **Local jobs**: the subprocess runs from `client_cwd`; paths inside the YAML resolve against that directory. | ||
| - **Cloud jobs**: `client_cwd` becomes `working_dir` on the remote VM. Use repo-relative paths in the YAML — never local-machine absolute paths. | ||
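The local path-resolution rule described above can be sketched as follows; the function name and signature are illustrative, not the server's actual code:

```python
from pathlib import Path

def resolve_config_path(config_path: str, client_cwd: str) -> Path:
    """Absolute config paths pass through unchanged; relative paths
    resolve against the user's project root (client_cwd)."""
    p = Path(config_path)
    return p if p.is_absolute() else Path(client_cwd) / p
```

For cloud jobs the same project root becomes `working_dir` on the remote VM, which is why the YAML itself should stick to repo-relative paths.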
|
|
||
| Assistants should always call the `get_started()` tool first (returned by the server) to retrieve the up-to-date tool catalog and workflow before doing anything else. |
This is not needed; this information is already present in `get_started()` and the MCP instructions.
| `oumi-mcp` speaks stdio MCP by default and logs to stderr. To inspect traffic, run it from a terminal and set `OUMI_LOG_LEVEL=DEBUG`: | ||
|
|
||
| ```bash | ||
| OUMI_LOG_LEVEL=DEBUG oumi-mcp | ||
| ``` | ||
|
|
||
| Then configure the client to spawn it with the same environment. |
Nothing currently reads `OUMI_LOG_LEVEL`; this can be removed or reworded as:

> `oumi-mcp` uses stdio transport: protocol on stdout, logs on stderr at INFO level. Most MCP clients swallow stderr; to see logs, launch the server from a terminal.
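The stdout/stderr split the suggested wording describes can be illustrated with a generic stdio-server stand-in (`serve` here is a placeholder shell function, not `oumi-mcp` itself):

```shell
# Stand-in for a stdio server: protocol frames go to stdout, logs to stderr
serve() {
  echo '{"jsonrpc":"2.0","id":1,"result":"ok"}'   # protocol channel
  echo "INFO server started" >&2                  # log channel
}

# Capture the protocol stream; redirect logs to a file so they remain visible
protocol=$(serve 2>server.log)
echo "$protocol"
```

The same redirection pattern applied to the real server keeps stdout clean for the MCP client while preserving the log stream.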
Reviewers
At least one review from a member of `oumi-ai/oumi-staff` is required.