
ComfyUI-LLMs-Toolkit


ComfyUI custom nodes for DeepSeek, Qwen, GPT, and other OpenAI-compatible LLM APIs, with built-in tools for chat, translation, vision, and JSON workflows.


What is this?

ComfyUI-LLMs-Toolkit is a ComfyUI custom node pack for connecting DeepSeek, Qwen, GPT, Moonshot, and other OpenAI-compatible LLM APIs directly inside ComfyUI workflows.

Use it to build LLM-powered ComfyUI workflows for prompt generation, translation, structured JSON output, image understanding with multimodal models, and other text or vision automation tasks without running a local GPU model.

Key Features

  • Built-in Provider Manager — Manage all your API providers visually from ComfyUI's menu bar. No config files to edit.
  • 12 Providers Pre-configured — Qwen, DeepSeek, GLM, Doubao, Spark, Moonshot, Baichuan, MiniMax, StepFun, SenseChat, iFlow, ModelScope.
  • Smart Model Dropdown — Select a provider, and the model list updates automatically.
  • Vision Support — Send images to multimodal LLMs with the Image Preprocessor node.
  • Won't Crash Your Workflow — If an API call fails, you get a readable error message instead of a broken workflow.
  • Multi-turn Memory — Enable conversation memory for chat-style interactions.
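
The "won't crash your workflow" behavior can be sketched as follows. This is a minimal illustration of the graceful-error-handling pattern, not the toolkit's actual implementation; `call_llm` and `broken_client` are hypothetical stand-ins for the real API client:

```python
def safe_generate(call_llm, prompt):
    """Run an LLM call, returning a readable error string instead of raising."""
    try:
        return call_llm(prompt)
    except Exception as e:  # network errors, bad keys, rate limits, ...
        return f"[LLM Error] {type(e).__name__}: {e}"

# A failing client is turned into a message the workflow can display:
def broken_client(prompt):
    raise ConnectionError("API Key is missing")

result = safe_generate(broken_client, "hello")
```

The node outputs the error text as its normal string output, so downstream nodes still receive a value and the queue keeps running.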

Installation

Option A: ComfyUI Manager (Recommended)

  1. Open ComfyUI Manager
  2. Search for ComfyUI-LLMs-Toolkit
  3. Click Install, then restart ComfyUI

Option B: Manual Install

```shell
cd ComfyUI/custom_nodes/
git clone https://github.com/ComfyUI-Kelin/ComfyUI-LLMs-Toolkit.git
cd ComfyUI-LLMs-Toolkit
pip install -r requirements.txt
```

Then restart ComfyUI.


Getting Started

Step 1: Open the Provider Manager

After installation, you'll see an LLMs_Manager button in ComfyUI's top menu bar. Click it to open the settings panel.

Step 2: Set Up Your Provider

  1. Pick a provider from the left sidebar (e.g., DeepSeek)
  2. Enter your API Key (get one from the provider's website)
  3. Click Check API to make sure it works ✅
  4. Add or edit the models you want to use
  5. Click Save and toggle Enable in Nodes on

Step 3: Add Nodes to Your Workflow

  1. Right-click in ComfyUI → Add Node → 🚦ComfyUI_LLMs_Toolkit
  2. Add an OpenAI Compatible Adapter node
  3. Select your provider and model from the dropdowns
  4. Type your prompt, connect the output, and hit Queue!
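
Under the hood, the adapter node sends a standard OpenAI-compatible chat request. The sketch below shows what that request body looks like; field names follow the OpenAI chat-completions format that these providers share, but the helper function and the model name are illustrative, not the toolkit's actual code:

```python
def build_chat_payload(model, user_prompt, system_prompt=None, history=None):
    """Assemble an OpenAI-compatible /chat/completions request body."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history or [])  # prior turns, when multi-turn memory is on
    messages.append({"role": "user", "content": user_prompt})
    return {"model": model, "messages": messages}

payload = build_chat_payload(
    "deepseek-chat",  # example model name
    "Describe a cyberpunk street scene for an image prompt.",
    system_prompt="You are a prompt engineer for image generation.",
)
```

With multi-turn memory enabled, the node keeps appending each exchange to `history`, so the model sees the whole conversation on every call.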

Where to Get API Keys

| Provider | Website | Free Credits |
|---|---|---|
| DeepSeek | platform.deepseek.com | ¥500 free |
| Qwen | dashscope.aliyun.com | 1M tokens/month |
| GLM | open.bigmodel.cn | 5M tokens/month |
| Moonshot | platform.moonshot.cn | Free trial |
| OpenAI | platform.openai.com | $5 trial |

Available Nodes

LLM Nodes

| Node | What it does |
|---|---|
| OpenAI Compatible Adapter | The main node — send prompts to any OpenAI-compatible LLM and get text responses. Supports system prompts, multi-turn memory, and vision input. |
| LLMs Loader | Helper node for advanced config (outputs provider settings as a connection). |
| LLM Translator | Quick one-shot translation using any configured LLM. |

Vision

| Node | What it does |
|---|---|
| Image Preprocessor | Converts ComfyUI images to a format that vision LLMs can understand. Connect it to the adapter node's prep_img input. |
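
Vision-capable OpenAI-compatible APIs typically accept images as base64 data URLs inside an `image_url` content part. The sketch below shows that encoding step; it is an illustration of the format, not the Image Preprocessor's actual code, and the byte string is a stand-in (in ComfyUI, the IMAGE tensor would first be encoded to PNG):

```python
import base64

def to_image_content(png_bytes):
    """Wrap raw PNG bytes as an OpenAI-style image_url content part."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {"type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"}}

# Stand-in bytes; a real call would pass an actual encoded PNG.
part = to_image_content(b"\x89PNG\r\n\x1a\nfake-image-data")
```

The resulting content part is placed alongside the text prompt in the user message when the adapter node has a vision model selected.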

JSON Tools

| Node | What it does |
|---|---|
| JSON Builder (Simple / Medium / Large) | Create structured JSON data with 1, 5, or 10 key-value pairs. |
| JSON Combine | Merge multiple JSON objects into one. |
| JSON Extractor | Pull specific values out of a JSON string. |
| JSON Fixer | Automatically repair broken JSON that LLMs sometimes output. |
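
Two of the most common ways LLMs break JSON are wrapping it in markdown code fences and leaving trailing commas. The sketch below repairs both; it illustrates the kind of cleanup the JSON Fixer node performs, not its actual implementation:

```python
import json
import re

def fix_llm_json(text):
    """Repair common LLM JSON mistakes: markdown fences and trailing commas."""
    text = text.strip()
    # Strip ```json ... ``` fences the model may wrap around its answer.
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text)
    # Remove trailing commas before a closing brace or bracket.
    text = re.sub(r",\s*([}\]])", r"\1", text)
    return json.loads(text)

data = fix_llm_json('```json\n{"prompt": "a cat", "steps": 20,}\n```')
```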

Text Tools

| Node | What it does |
|---|---|
| String Template | Fill in template strings with variables, e.g., "Hello {name}!" → "Hello Alice!" |
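
The `{name}`-style placeholders map directly onto Python's built-in `str.format`. The one-liner below shows the substitution the node performs; the helper function itself is illustrative, not the node's actual code:

```python
def fill_template(template, **variables):
    """Substitute {name}-style placeholders in a template string."""
    return template.format(**variables)

greeting = fill_template("Hello {name}!", name="Alice")
```

This is handy for injecting a user's subject or style keywords into a fixed system prompt before it reaches the adapter node.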

FAQ

My workflow keeps saying "API Key is missing"

Make sure you've:

  1. Opened the LLMs_Manager panel
  2. Selected your provider and entered the API key
  3. Clicked Save
  4. Toggled Enable in Nodes to ON

Can I use local models like Ollama?

Yes! Add a Custom Provider in the LLMs_Manager, set the Base URL to your local endpoint (e.g., http://localhost:11434/v1), and add your model names. Any OpenAI-compatible API works.
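
An OpenAI-compatible client builds its request URL from the configured Base URL plus the standard chat-completions path. The sketch below shows that join (path assumed from the OpenAI-compatible convention; the helper is illustrative, not the toolkit's code):

```python
def chat_endpoint(base_url):
    """Join a provider base URL with the standard chat-completions path."""
    return base_url.rstrip("/") + "/chat/completions"

url = chat_endpoint("http://localhost:11434/v1")
```

This is why the Base URL should end at `/v1` (with or without a trailing slash) rather than including the full path.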

The model dropdown shows wrong models

Make sure to refresh the browser (Cmd+R / Ctrl+R) after making changes in the Provider Manager. The model list updates dynamically based on the selected provider.

Where is the troubleshooting guide?

See docs/troubleshooting.md for dependency conflicts, provider/model mismatch, and local endpoint debugging.


A Note on Security

Your API keys are stored locally on your machine in config/providers.json. This file is excluded from Git by default, so your keys won't be accidentally shared if you push your code. Just be careful not to share this file manually.


Changelog

v1.2.1 — 2026-03-07

  • Sync package version metadata to match current release line
  • Add docs/troubleshooting.md for common dependency/provider setup issues

v1.2.0 — 2026-03-02

  • Built-in Provider Manager UI with Cherry-Studio-style design
  • Dynamic model filtering by provider
  • Custom modal dialogs replacing native browser prompts
  • Simplified node interface (removed redundant custom inputs)

v1.1.0 — 2026-03-01

  • DeepSeek reasoning content extraction
  • o1/o3 model system role compatibility
  • Shared API client with smart retry
  • Graceful error handling (no more workflow crashes)
  • Fixed multi-turn memory and token display bugs

License

GNU Affero General Public License v3.0 (AGPL-3.0) — Free to use, modify, and share. If you run a modified version as a network service, you must make the complete source code of your modified version available to its users under the same license.


If this project helps you, please give it a Star!


Made with ❤️ for the ComfyUI community
