ComfyUI custom nodes for DeepSeek, Qwen, GPT, and other OpenAI-compatible LLM APIs, with built-in tools for chat, translation, vision, and JSON workflows.
ComfyUI-LLMs-Toolkit is a ComfyUI custom node pack for connecting DeepSeek, Qwen, GPT, Moonshot, and other OpenAI-compatible LLM APIs directly inside ComfyUI workflows.
Use it to build LLM-powered ComfyUI workflows for prompt generation, translation, structured JSON output, image understanding with multimodal models, and other text or vision automation tasks without running a local GPU model.
- Built-in Provider Manager — Manage all your API providers visually from ComfyUI's menu bar. No config files to edit.
- 12 Providers Pre-configured — Qwen, DeepSeek, GLM, Doubao, Spark, Moonshot, Baichuan, MiniMax, StepFun, SenseChat, iFlow, ModelScope.
- Smart Model Dropdown — Select a provider, and the model list updates automatically.
- Vision Support — Send images to multimodal LLMs with the Image Preprocessor node.
- Won't Crash Your Workflow — If an API call fails, you get a readable error message instead of a broken workflow.
- Multi-turn Memory — Enable conversation memory for chat-style interactions.
- Open ComfyUI Manager
- Search for `ComfyUI-LLMs-Toolkit`
- Click Install → Restart ComfyUI
```bash
cd ComfyUI/custom_nodes/
git clone https://github.com/ComfyUI-Kelin/ComfyUI-LLMs-Toolkit.git
cd ComfyUI-LLMs-Toolkit
pip install -r requirements.txt
```

Then restart ComfyUI.
After installation, you'll see an LLMs_Manager button in ComfyUI's top menu bar. Click it to open the settings panel.
- Pick a provider from the left sidebar (e.g., DeepSeek)
- Enter your API Key (get one from the provider's website)
- Click Check API to make sure it works ✅
- Add or edit the models you want to use
- Click Save and toggle Enable in Nodes on
- Right-click in ComfyUI → Add Node → 🚦ComfyUI_LLMs_Toolkit
- Add an OpenAI Compatible Adapter node
- Select your provider and model from the dropdowns
- Type your prompt, connect the output, and hit Queue!
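Under the hood, hitting Queue sends a standard OpenAI-style chat-completions request to the selected provider. A minimal sketch of that request body, outside ComfyUI (the helper name and model string are illustrative, not part of the node pack):

```python
def build_chat_payload(model, prompt, system=None):
    """Assemble an OpenAI-compatible /chat/completions request body."""
    messages = []
    if system:
        # The system prompt steers the model before the user message.
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return {"model": model, "messages": messages}

payload = build_chat_payload(
    "deepseek-chat",
    "Describe a sunset.",
    system="You write Stable Diffusion prompts.",
)
```

The same shape works for every provider in the table below, which is why one adapter node can cover all of them.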
| Provider | Website | Free Credits |
|---|---|---|
| DeepSeek | platform.deepseek.com | ¥500 free |
| Qwen | dashscope.aliyun.com | 1M tokens/month |
| GLM | open.bigmodel.cn | 5M tokens/month |
| Moonshot | platform.moonshot.cn | Free trial |
| OpenAI | platform.openai.com | $5 trial |
| Node | What it does |
|---|---|
| OpenAI Compatible Adapter | The main node — send prompts to any OpenAI-compatible LLM and get text responses. Supports system prompts, multi-turn memory, and vision input. |
| LLMs Loader | Helper node for advanced config (outputs provider settings as a connection). |
| LLM Translator | Quick one-shot translation using any configured LLM. |
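Multi-turn memory, as supported by the adapter node, amounts to resending the accumulated message history with each request. A rough sketch of the idea (class name and details are illustrative, not the node's actual implementation):

```python
class ChatMemory:
    """Accumulate a conversation so each request carries full context."""

    def __init__(self, system=None):
        self.messages = [{"role": "system", "content": system}] if system else []

    def user(self, text):
        self.messages.append({"role": "user", "content": text})
        return self.messages  # this full list is what gets sent each turn

    def assistant(self, text):
        self.messages.append({"role": "assistant", "content": text})

mem = ChatMemory(system="Be concise.")
mem.user("Name a color.")
mem.assistant("Blue.")
mem.user("A darker one?")  # the model now sees all four messages
```

Because the whole history is resent every turn, long conversations cost more tokens per request.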
| Node | What it does |
|---|---|
| Image Preprocessor | Converts ComfyUI images to a format that vision LLMs can understand. Connect it to the adapter node's prep_img input. |
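Vision-capable OpenAI-compatible APIs generally accept images as base64 data URLs embedded in the message content; the preprocessing step boils down to something like this (a sketch of the general technique, not the node's exact code):

```python
import base64

def image_to_data_url(png_bytes):
    """Encode raw PNG bytes as a data URL a vision LLM can consume."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return f"data:image/png;base64,{b64}"

def vision_message(prompt, data_url):
    """OpenAI-style multimodal user message mixing text and an image."""
    return {"role": "user", "content": [
        {"type": "text", "text": prompt},
        {"type": "image_url", "image_url": {"url": data_url}},
    ]}
```

The Image Preprocessor node handles the ComfyUI-tensor-to-bytes step for you; this just shows the wire format the model ultimately receives.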
| Node | What it does |
|---|---|
| JSON Builder (Simple / Medium / Large) | Create structured JSON data with 1, 5, or 10 key-value pairs. |
| JSON Combine | Merge multiple JSON objects into one. |
| JSON Extractor | Pull specific values out of a JSON string. |
| JSON Fixer | Automatically repair broken JSON that LLMs sometimes output. |
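LLMs often wrap JSON in markdown fences or surround it with commentary. A common repair strategy (a simplified sketch; the JSON Fixer node may do more) is to strip fences, slice out the outermost object, and re-parse:

```python
import json

def fix_json(text):
    """Best-effort recovery of a JSON object from messy LLM output."""
    # Strip markdown code fences if present.
    text = text.replace("```json", "```").strip()
    if "```" in text:
        text = text.split("```")[1]
    # Slice from the first '{' to the last '}' to drop surrounding prose.
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found")
    return json.loads(text[start:end + 1])
```

This handles the two most frequent failure modes (fenced output and chatty preambles) but not truncated or structurally invalid JSON.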
| Node | What it does |
|---|---|
| String Template | Fill in template strings with variables, e.g., "Hello {name}!" → "Hello Alice!" |
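The node's behavior maps onto Python's own placeholder substitution; conceptually (a sketch, not the node's source):

```python
def fill_template(template, **variables):
    """Replace {name}-style placeholders with the supplied values."""
    return template.format(**variables)

greeting = fill_template("Hello {name}!", name="Alice")  # → "Hello Alice!"
```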
My workflow keeps saying "API Key is missing"
Make sure you've:
- Opened the LLMs_Manager panel
- Selected your provider and entered the API key
- Clicked Save
- Toggled Enable in Nodes to ON
Can I use local models like Ollama?
Yes! Add a Custom Provider in the LLMs_Manager, set the Base URL to your local endpoint (e.g., http://localhost:11434/v1), and add your model names. Any OpenAI-compatible API works.
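Since the requests are plain OpenAI-style HTTP calls, you can sanity-check a local endpoint outside ComfyUI first. A standard-library-only sketch (the model name is whatever you pulled in Ollama):

```python
import json
import urllib.request

def chat_endpoint(base_url):
    """Join a provider base URL with the chat-completions path."""
    return base_url.rstrip("/") + "/chat/completions"

def ask(base_url, model, prompt):
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        chat_endpoint(base_url), data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer ollama"},  # Ollama accepts any key
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# ask("http://localhost:11434/v1", "llama3", "Say hi")  # needs Ollama running
```

If this works from a terminal, the same Base URL and model name should work in the LLMs_Manager.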
The model dropdown shows wrong models
Make sure to refresh the browser (Cmd+R / Ctrl+R) after making changes in the Provider Manager. The model list updates dynamically based on the selected provider.
Where is the troubleshooting guide?
See docs/troubleshooting.md for dependency conflicts, provider/model mismatch, and local endpoint debugging.
Your API keys are stored locally on your machine in config/providers.json. This file is excluded from Git by default, so your keys won't be accidentally shared if you push your code. Just be careful not to share this file manually.
- Sync package version metadata to match current release line
- Add `docs/troubleshooting.md` for common dependency/provider setup issues
- Built-in Provider Manager UI with Cherry-Studio-style design
- Dynamic model filtering by provider
- Custom modal dialogs replacing native browser prompts
- Simplified node interface (removed redundant custom inputs)
- DeepSeek reasoning content extraction
- o1/o3 model system role compatibility
- Shared API client with smart retry
- Graceful error handling (no more workflow crashes)
- Fixed multi-turn memory and token display bugs
GNU Affero General Public License v3.0 (AGPL-3.0) — Free to use, modify, and share. If you modify or include this code in a service over a network, you must make the complete source code of your changes open and available under the same license.
