feat: add MiniMax as a new LLM provider#11367

Open
octo-patch wants to merge 2 commits into continuedev:main from octo-patch:add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 12, 2026

Summary

  • Add MiniMax as a new LLM provider with OpenAI-compatible API support
  • Register MiniMax-M2.5 (204K context) and MiniMax-M2.5-highspeed models
  • Include GUI model selection, provider configuration, and documentation

Changes

New Files

  • core/llm/llms/MiniMax.ts — Provider class extending OpenAI with:
    • Temperature clamping to (0, 1] range (MiniMax rejects 0)
    • response_format removal (unsupported)
    • Default API base: https://api.minimax.io/v1/
  • packages/llm-info/src/providers/minimax.ts — Model metadata
  • docs/customize/model-providers/more/minimax.mdx — Provider documentation
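The two request adaptations listed above (temperature clamping and `response_format` removal) can be sketched as standalone helpers. This is an illustrative sketch only: the names `clampTemperature` and `adaptBodyForMiniMax` are hypothetical, and the PR's actual code lives inside a class extending Continue's `OpenAI` base class rather than in free functions.

```typescript
// Hypothetical sketch of the MiniMax request adaptations described above.
// The real provider applies these inside core/llm/llms/MiniMax.ts.

interface ChatBody {
  model: string;
  temperature?: number;
  response_format?: { type: string };
  [key: string]: unknown;
}

// MiniMax rejects temperature 0, so clamp into the half-open range (0, 1].
// The floor of 0.01 is an illustrative choice, not necessarily the PR's.
function clampTemperature(t: number | undefined): number | undefined {
  if (t === undefined) return undefined;
  if (t <= 0) return 0.01;
  return Math.min(t, 1);
}

function adaptBodyForMiniMax(body: ChatBody): ChatBody {
  // Drop response_format entirely: MiniMax does not support it.
  const { response_format: _drop, ...rest } = body;
  return { ...rest, temperature: clampTemperature(body.temperature) };
}
```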

Modified Files

  • core/llm/llms/index.ts — Register in LLMClasses
  • packages/openai-adapters/src/index.ts — Register OpenAI-compatible adapter
  • packages/config-types/src/index.ts — Add "minimax" to provider enum
  • packages/llm-info/src/index.ts — Add to allModelProviders
  • gui/src/pages/AddNewModel/configs/models.ts — Model entries
  • gui/src/pages/AddNewModel/configs/providers.ts — Provider config with API key setup
  • docs/customize/model-providers/overview.mdx — Add to hosted services table

Models

| Model | Context | Max Output | Description |
| --- | --- | --- | --- |
| MiniMax-M2.5 | 204,800 | 192,000 | Peak performance, complex reasoning |
| MiniMax-M2.5-highspeed | 204,800 | 192,000 | Same performance, lower latency |

Test Plan

  • Verified MiniMax API connectivity and response format
  • Verified temperature constraint handling
  • Code follows existing provider patterns (Groq, DeepSeek, Mistral)

Summary by cubic

Add MiniMax as an OpenAI-compatible provider with GUI and docs, including MiniMax-M2.5 and MiniMax-M2.5-highspeed (204K context). Adds a dedicated MiniMaxApi adapter to enforce MiniMax-specific request rules.

  • New Features

    • New minimax provider using https://api.minimax.io/v1/ via a dedicated MiniMaxApi (OpenAI-compatible) with temperature clamping and response_format removal.
    • Registers provider and models in LLMClasses, config types, and llm-info.
    • GUI: provider setup with API key and model entries for MiniMax-M2.5 and MiniMax-M2.5-highspeed.
    • Docs: provider guide and overview entry.
  • Migration

    • Configure provider: minimax with your API key, or set MINIMAX_API_KEY.
    • For China, set apiBase to https://api.minimaxi.com/v1/.
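The migration notes above can be illustrated with a config sketch. Field names here follow Continue's usual model-configuration shape and are assumptions, not copied from the PR's docs; check the provider documentation page added in this PR for the exact format.

```yaml
# Hypothetical config sketch; field names may differ from the final docs.
models:
  - name: MiniMax M2.5
    provider: minimax
    model: MiniMax-M2.5
    apiKey: <YOUR_MINIMAX_API_KEY>
    # For mainland China, point at the regional endpoint instead:
    # apiBase: https://api.minimaxi.com/v1/
```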

Written for commit 8104b5b. Summary will update on new commits.

Add MiniMax (https://platform.minimax.io) as a new LLM provider with
OpenAI-compatible API support.

Changes:
- Add MiniMax LLM provider class extending OpenAI with temperature
  clamping (must be in (0, 1]) and response_format removal
- Register provider in LLMClasses, openai-adapters, and config-types
- Add model info for MiniMax-M2.5 and MiniMax-M2.5-highspeed
  (204K context, 192K max output)
- Add GUI model selection entries and provider configuration
- Add provider documentation page
@octo-patch octo-patch requested a review from a team as a code owner March 12, 2026 23:18
@octo-patch octo-patch requested review from Patrick-Erichsen and removed request for a team March 12, 2026 23:18
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Mar 12, 2026

github-actions bot commented Mar 12, 2026


Thank you for your submission, we really appreciate it. Like many open-source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a pull request comment in the format below.


I have read the CLA Document and I hereby sign the CLA


1 out of 2 committers have signed the CLA.
✅ [octo-patch](https://github.com/octo-patch)
@pr Bot
PR Bot does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You can re-trigger this bot by commenting `recheck` in this pull request. Posted by the CLA Assistant Lite bot.


@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 10 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/openai-adapters/src/index.ts">

<violation number="1" location="packages/openai-adapters/src/index.ts:145">
P1: MiniMax is wired to generic `OpenAIApi`, which skips the repo’s MiniMax-specific request fixes (temperature clamping and `response_format` removal), creating a real incompatibility path in adapter-based runtime flows.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@octo-patch
Author

I have read the CLA Document and I hereby sign the CLA

The minimax provider was wired to the generic OpenAIApi via openAICompatible(), which skipped the MiniMax-specific request fixes. This commit adds a dedicated MiniMaxApi adapter class that overrides modifyChatBody to apply temperature clamping (MiniMax requires temperature in (0.0, 1.0]) and response_format removal, matching the adaptations already present in core/llm/llms/MiniMax.ts.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
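The override pattern described in the commit message above can be sketched as follows. This is a standalone illustration: `OpenAIApiStub` stands in for Continue's real `OpenAIApi` base class in `packages/openai-adapters`, and the hook name `modifyChatBody` is taken from the commit message; the stub's shape and the clamp floor of 0.01 are assumptions.

```typescript
// Illustrative sketch of a MiniMax adapter overriding the base adapter's
// request-body hook. The real base class lives in packages/openai-adapters.

interface ChatBody {
  model: string;
  temperature?: number;
  response_format?: { type: string };
  [key: string]: unknown;
}

class OpenAIApiStub {
  // The generic adapter passes the body through unchanged.
  modifyChatBody(body: ChatBody): ChatBody {
    return body;
  }
}

class MiniMaxApiSketch extends OpenAIApiStub {
  override modifyChatBody(body: ChatBody): ChatBody {
    const out = { ...super.modifyChatBody(body) };
    // MiniMax does not support response_format.
    delete out.response_format;
    // MiniMax requires temperature in (0.0, 1.0]; 0.01 is an illustrative floor.
    if (out.temperature !== undefined) {
      out.temperature = Math.min(Math.max(out.temperature, 0.01), 1);
    }
    return out;
  }
}
```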

Labels

size:L This PR changes 100-499 lines, ignoring generated files.

Projects

Status: Todo
