
feat(providers): upgrade MiniMax provider to support M2.7 models#1394

Open
octo-patch wants to merge 3 commits into ruvnet:main from octo-patch:feature/minimax-m27-upgrade

Conversation

@octo-patch

Summary

  • Add MiniMax-M2.7 and MiniMax-M2.7-highspeed models alongside existing M2.5 models
  • M2.7 features a 1M token context window (vs 204K for M2.5) and 131K max output tokens
  • Fix temperature clamping to allow temperature=0 (now accepted by MiniMax API)
  • Remove unnecessary default temperature=1.0 when not specified by caller
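The temperature changes described above can be sketched as follows. This is an illustrative reconstruction, not the PR's actual code: the function name `clampTemperature` is hypothetical, and the assumed behavior is that MiniMax now accepts `temperature=0`, caps temperature at 1.0, and that the provider no longer injects a default when the caller omits the parameter.

```typescript
// Hypothetical sketch of the revised temperature handling.
// Assumptions: MiniMax now accepts temperature=0 (previously rejected),
// the valid upper bound is 1.0, and no default is injected when the
// caller does not specify a temperature.
function clampTemperature(temperature?: number): number | undefined {
  if (temperature === undefined) {
    return undefined; // pass nothing through; no default of 1.0 anymore
  }
  if (temperature < 0) return 0; // temperature=0 is now a legal value
  if (temperature > 1) return 1; // clamp to the API's upper bound
  return temperature;
}
```

Under this sketch, `clampTemperature(0)` returns `0` instead of being bumped to a small positive value, and `clampTemperature(undefined)` stays `undefined`.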

Changes

minimax-provider.ts: Add M2.7/M2.7-highspeed to supported models; fix temperature clamping
types.ts: Add M2.7 model type definitions
index.ts: Update module documentation
README.md: Update MiniMax API key description
minimax-provider.test.ts: 24 new unit tests (capabilities, model info, temperature, errors, responses)
provider-integration.test.ts: Add M2.7 integration tests (completion, streaming, model listing)

Model Specifications

Model                   Context (tokens)   Max Output (tokens)   Speed
MiniMax-M2.7            1,048,576          131,072               Standard
MiniMax-M2.7-highspeed  1,048,576          131,072               Fast
MiniMax-M2.5            204,800            192,000               Standard
MiniMax-M2.5-highspeed  204,800            192,000               Fast
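A model-capability map matching the table above might look like the following minimal sketch. The interface and constant names (`MiniMaxModelInfo`, `MINIMAX_MODELS`) are assumptions for illustration; the PR's actual type definitions in `types.ts` may differ.

```typescript
// Illustrative capability map for the four MiniMax models in the table.
// Names are hypothetical; only the numbers come from the PR description.
interface MiniMaxModelInfo {
  contextWindow: number;   // maximum input context, in tokens
  maxOutputTokens: number; // maximum completion length, in tokens
}

const MINIMAX_MODELS: Record<string, MiniMaxModelInfo> = {
  "MiniMax-M2.7":           { contextWindow: 1_048_576, maxOutputTokens: 131_072 },
  "MiniMax-M2.7-highspeed": { contextWindow: 1_048_576, maxOutputTokens: 131_072 },
  "MiniMax-M2.5":           { contextWindow: 204_800,   maxOutputTokens: 192_000 },
  "MiniMax-M2.5-highspeed": { contextWindow: 204_800,   maxOutputTokens: 192_000 },
};
```

Keeping specs in a single lookup table like this lets the provider answer capability queries and validate `maxTokens` requests against one source of truth.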

Test Plan

  • 24 unit tests passing (capabilities, model info, temperature clamping, error handling, response transformation)
  • 4 integration tests passing with real MiniMax API (M2.7 completion, M2.5 completion, M2.7 streaming, model listing)
  • All existing tests unaffected

Test Results

✓ @claude-flow/providers/src/__tests__/minimax-provider.test.ts (24 tests) 294ms
✓ @claude-flow/providers/src/__tests__/provider-integration.test.ts (4 passed | 10 skipped) 11598ms
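The "4 passed | 10 skipped" result suggests the integration suite is gated on a real API credential. A minimal sketch of such a guard, assuming a hypothetical `MINIMAX_API_KEY` environment variable (the actual variable name used by the provider is not shown in this PR excerpt):

```typescript
// Hypothetical guard: run integration tests only when a real key is set.
// The env var name MINIMAX_API_KEY is an assumption, not confirmed by the PR.
function shouldRunIntegrationTests(
  env: Record<string, string | undefined>,
): boolean {
  const key = env.MINIMAX_API_KEY;
  return typeof key === "string" && key.length > 0;
}
```

A test runner can then conditionally skip the live-API cases (e.g. vitest's `describe.skipIf(!shouldRunIntegrationTests(process.env))`), which would account for the skipped tests in CI environments without credentials.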

PR Bot added 3 commits March 16, 2026 09:00
- Add MiniMaxProvider with OpenAI-compatible API integration
- Support MiniMax-M2.5 and MiniMax-M2.5-highspeed models (204K context)
- Handle temperature constraint (must be in (0.0, 1.0])
- Register provider in ProviderManager and export from index
- Add MiniMax models to LLMModel type union
- Add integration tests for completion, streaming, and model listing
- Update README with MiniMax in provider list and env variables
Add MiniMax-M2.7 and MiniMax-M2.7-highspeed models alongside existing
M2.5 models. M2.7 features a 1M token context window (vs 204K for M2.5)
and 131K max output tokens.

Changes:
- Add M2.7/M2.7-highspeed to supported models with correct specs
- Fix temperature clamping to allow temperature=0 (now accepted by API)
- Remove unnecessary default temperature=1.0 when not specified
- Add 24 unit tests for capabilities, model info, temp clamping, errors
- Add M2.7 integration tests (completion, streaming, model listing)
