26 changes: 26 additions & 0 deletions README.md
@@ -193,6 +193,32 @@ interpreter.llm.model = "gpt-3.5-turbo"

[Find the appropriate "model" string for your language model here.](https://docs.litellm.ai/docs/providers/)

### Using MiniMax

To use [MiniMax](https://platform.minimax.io/) models, set your API key and use the built-in profile:

```shell
export MINIMAX_API_KEY=your_api_key
interpreter --profile minimax.py
```

Or configure it in Python:

```python
from interpreter import interpreter

interpreter.llm.model = "openai/MiniMax-M2.7"
interpreter.llm.api_key = "your_api_key"
interpreter.llm.api_base = "https://api.minimax.io/v1"
interpreter.llm.context_window = 204800
interpreter.llm.max_tokens = 4096
interpreter.llm.temperature = 1.0

interpreter.chat()
```

Available models: `MiniMax-M2.7` (default), `MiniMax-M2.7-highspeed` (faster), `MiniMax-M2.5`, and `MiniMax-M2.5-highspeed`.
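
To switch between these models, only the model string needs to change; the rest of the configuration stays the same. A minimal sketch, assuming every variant is served from the same OpenAI-compatible endpoint:

```python
from interpreter import interpreter

# Assumption: all MiniMax variants share the same OpenAI-compatible
# endpoint, so swapping models only requires changing the model string.
# The "openai/" prefix routes the request through LiteLLM's
# OpenAI-compatible client.
interpreter.llm.model = "openai/MiniMax-M2.7-highspeed"
interpreter.llm.api_base = "https://api.minimax.io/v1"
```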

### Running Open Interpreter locally

#### Terminal
34 changes: 34 additions & 0 deletions interpreter/terminal_interface/profiles/defaults/minimax.py
@@ -0,0 +1,34 @@
"""
This is an Open Interpreter profile to use MiniMax.

Please set the MINIMAX_API_KEY environment variable.

MiniMax provides powerful language models with a 204,800 token context window.
Available models:
- MiniMax-M2.7 (default): Latest flagship model with enhanced reasoning and coding.
- MiniMax-M2.7-highspeed: High-speed version of M2.7 for low-latency scenarios.
- MiniMax-M2.5: Earlier-generation model balancing performance and cost.
- MiniMax-M2.5-highspeed: High-speed version of M2.5 with the same performance.

See https://platform.minimax.io/docs/api-reference/text-openai-api for more information.
"""

import os

from interpreter import interpreter

# LLM settings
interpreter.llm.model = "openai/MiniMax-M2.7"
interpreter.llm.api_key = os.environ.get("MINIMAX_API_KEY")
interpreter.llm.api_base = "https://api.minimax.io/v1"
interpreter.llm.supports_functions = True
interpreter.llm.supports_vision = False
interpreter.llm.max_tokens = 4096
interpreter.llm.context_window = 204800
interpreter.llm.temperature = 1.0

# Computer settings
interpreter.computer.import_computer_api = True

# Misc settings
interpreter.offline = False
interpreter.auto_run = False
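
Since the profile reads `MINIMAX_API_KEY` from the environment, it can be useful to confirm the key is exported before launching. A small sketch using standard shell built-ins (not part of the profile itself):

```shell
# Fail early with a clear message if the key is missing,
# rather than getting an authentication error mid-session.
if [ -z "$MINIMAX_API_KEY" ]; then
  echo "MINIMAX_API_KEY is not set" >&2
  exit 1
fi
interpreter --profile minimax.py
```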