
feat: add MiniMax as first-class LLM provider#1850

Open
octo-patch wants to merge 3 commits into bytedance:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a first-class LLM provider in the Tarko LLM client, enabling users to seamlessly use MiniMax M2.5 and M2.5-highspeed models (204K context window) via the OpenAI-compatible API.

Changes

  • New handler (llm-client/src/handlers/minimax.ts): Dedicated MiniMax handler with temperature validation (values must lie in (0.0, 1.0]) and an automatic default of 1.0 when unset
  • Model registry (llm-client/src/models.ts): MiniMax-M2.5 and MiniMax-M2.5-highspeed with streaming, images, and tool call support
  • Handler registration (llm-client/src/handlers/utils.ts): Wire MiniMax handler into the provider factory
  • Type definitions (llm-client/src/chat/index.ts): MiniMaxModel type and provider map entry
  • Documentation: Updated README, English and Chinese model-provider guides with MiniMax configuration examples
  • Tests: Model resolver test, integration tests (resolve + create client), llm-client tests, totaling 6 new test cases
  • Example (agent/examples/model-providers/minimax.ts): Usage example following existing patterns
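The registry entries described above can be sketched as a minimal capability map. The `ModelInfo` field names here are illustrative assumptions, not the actual llm-client/src/models.ts schema:

```typescript
// Sketch of the two new MiniMax registry entries, mirroring the
// capabilities listed in this PR (streaming, images, tool calls, 204K context).
// Field names are hypothetical; the real schema lives in llm-client/src/models.ts.
interface ModelInfo {
  contextWindow: number;
  supportsStreaming: boolean;
  supportsImages: boolean;
  supportsToolCalls: boolean;
}

const minimaxModels: Record<string, ModelInfo> = {
  'MiniMax-M2.5': {
    contextWindow: 204_000,
    supportsStreaming: true,
    supportsImages: true,
    supportsToolCalls: true,
  },
  'MiniMax-M2.5-highspeed': {
    contextWindow: 204_000,
    supportsStreaming: true,
    supportsImages: true,
    supportsToolCalls: true,
  },
};
```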

Usage

const agent = new Agent({
  model: {
    provider: 'minimax',
    apiKey: process.env.MINIMAX_API_KEY,
    id: 'MiniMax-M2.5',
  },
});

MiniMax API Notes

  • Base URL: https://api.minimax.io/v1 (OpenAI-compatible)
  • Temperature must be in (0.0, 1.0] — zero is rejected
  • Both models support 204K context window
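The temperature constraint above can be sketched as a small validation helper. `validateTemperature` is a hypothetical name for illustration, not the actual handler code in llm-client/src/handlers/minimax.ts:

```typescript
// Sketch of the MiniMax temperature rule described in this PR:
// temperature must be in (0.0, 1.0]; zero is rejected, and an
// unset value defaults to 1.0. Helper name is illustrative.
function validateTemperature(temperature?: number): number {
  if (temperature === undefined) {
    return 1.0; // auto-default when the caller does not set a temperature
  }
  if (temperature <= 0.0 || temperature > 1.0) {
    throw new RangeError(
      `MiniMax temperature must be in (0.0, 1.0], got ${temperature}`,
    );
  }
  return temperature;
}
```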

Test plan

  • All 50 existing + new unit tests pass (pnpm test in model-provider)
  • Model resolver correctly identifies minimax as a native base provider
  • Integration test: resolveModel + createLLMClient works for MiniMax
  • LLM client test: verifies correct TokenJS configuration and model list extension
  • Manual verification with MiniMax API key (optional)

PR Bot added 2 commits March 15, 2026 22:22
- Add MiniMax handler with OpenAI-compatible API integration
- Register MiniMax-M2.5 and MiniMax-M2.5-highspeed models (204K context)
- Add temperature validation (MiniMax requires range (0.0, 1.0])
- Add MINIMAX_API_KEY environment variable support
- Add unit test for MiniMax model resolution
- Update model-provider docs (EN/ZH) with MiniMax configuration
Add integration tests, llm-client tests, and usage example for MiniMax.
@netlify

netlify bot commented Mar 17, 2026

Deploy Preview for tarko ready!

🔨 Latest commit: b098246
🔍 Latest deploy log: https://app.netlify.com/projects/tarko/deploys/69babd7fc881a90008f2691d
😎 Deploy Preview: https://deploy-preview-1850--tarko.netlify.app

@netlify

netlify bot commented Mar 17, 2026

Deploy Preview for agent-tars-docs ready!

🔨 Latest commit: b098246
🔍 Latest deploy log: https://app.netlify.com/projects/agent-tars-docs/deploys/69babd7fc39dfc00080c04bd
😎 Deploy Preview: https://deploy-preview-1850--agent-tars-docs.netlify.app

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


PR Bot does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.

- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
- Set MiniMax-M2.7 as default model
- Keep all previous models (M2.5, M2.5-highspeed) as alternatives
- Update example, docs (en/zh), and integration tests