Summary
MiniMax M2.5 does not support response_format (neither json_object nor json_schema) via the OpenAI-compatible chat completions endpoint. The native API documentation explicitly states this feature is "only supported by MiniMax-Text-01". This makes M2.5 unsuitable for production workloads that require structured JSON output.
Current behaviour
When sending response_format: { "type": "json_object" } to POST /v1/chat/completions with M2.5:
- The parameter is silently ignored. No error or warning is returned.
- The model generates free-form text that may contain JSON, but is not constrained to valid JSON.
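For reference, a minimal reproduction payload (the model id and endpoint path are assumptions; adjust for your deployment):

```python
import json

# Sketch of the request body sent to the OpenAI-compatible endpoint.
# The payload is accepted without error, but response_format has no
# effect on M2.5's output.
payload = {
    "model": "MiniMax-M2.5",  # assumption: actual model id may differ
    "messages": [
        {"role": "user", "content": "Return a JSON object with keys title and score."}
    ],
    "response_format": {"type": "json_object"},  # silently ignored by M2.5
}

# POST this body to /v1/chat/completions with an HTTP client of your choice:
body = json.dumps(payload)
```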
Additionally, when relying on prompt-based JSON instructions (the only available workaround), M2.5 frequently produces structurally malformed JSON. We observed a recurring defect across multiple production runs: the model drops the opening `{` brace of object elements inside arrays:
// Expected:
[{"title": "SEO Improvement", "score": 8}, {"title": "Content Gap", "score": 6}]
// Actual M2.5 output:
[{"title": "SEO Improvement", "score": 8}, "title": "Content Gap", "score": 6}]
This occurred consistently with schemas containing 10+ fields with nested objects and arrays. We had to implement a context-aware JSON repair function to work around this, which adds complexity and is inherently fragile.
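A rough, simplified sketch of the repair heuristic (hypothetical code, not our actual implementation; the function name and regex are illustrative):

```python
import json
import re

def repair_dropped_braces(text: str) -> str:
    """Re-insert '{' where an array element lost its opening brace.

    Hypothetical, simplified sketch. Heuristic: if parsing fails and
    '},' is followed directly by a quoted key ('"key":'), assume a new
    object element should have opened there and insert '{'. Inherently
    fragile: it can misfire on text that is invalid for other reasons.
    """
    try:
        json.loads(text)
        return text  # already valid, leave untouched
    except json.JSONDecodeError:
        pass
    # '},' followed by a quoted key -> re-open the object element
    return re.sub(r'},\s*(?="[^"]*"\s*:)', '}, {', text)
```

Applied to the malformed output above, this restores the missing brace so the array parses again; it illustrates why the approach adds complexity compared to a server-side `response_format` guarantee.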
Expected behaviour
response_format should be supported for M2.5 via the OpenAI-compatible API:
- `{ "type": "json_object" }` should constrain output to syntactically valid JSON.
- `{ "type": "json_schema", "json_schema": { "schema": ... } }` should constrain output to JSON matching the provided schema.
If full `json_schema` mode is not feasible for M2.5, `json_object` mode alone would be a significant improvement: it would guarantee valid JSON syntax even without schema conformance.
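For comparison, the `json_schema` request shape defined by OpenAI's structured-outputs API looks like this (the schema name and fields are illustrative):

```python
# Expected OpenAI-compatible response_format value for schema-constrained
# output; "insights" and the schema contents are illustrative examples.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "insights",
        "strict": True,  # enforce exact schema conformance
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "score": {"type": "integer"},
            },
            "required": ["title", "score"],
        },
    },
}
```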
Documentation gap
The OpenAI-compatible API documentation page does not mention response_format at all; it is absent from the supported parameters table and not listed among the "ignored" parameters either. The only documentation of response_format is in the native API spec, which states it is MiniMax-Text-01 only.
A clear compatibility matrix (per model, per API flavour) would help developers evaluate MiniMax models before committing to integration.
Impact
We evaluated M2.5 for a production AI pipeline processing 12 agents with structured output requirements. Despite M2.5's strong raw capabilities (204K context, competitive pricing), we abandoned it after 2 production runs due to the combination of:
- No `response_format` support (silently ignored)
- Consistent malformed JSON output from prompt-based workarounds
- Error 2013 ("invalid chat setting") when using multiple system messages, which blocked one workaround path for schema injection
We switched to OpenAI GPT-5-nano with native `response_format` support (`json_schema` mode with `strict: true`), which works reliably.
Questions
- Is there a roadmap for adding `response_format` to M2.5 via the OpenAI-compatible API?
- Is the malformed JSON issue (dropped opening braces) a known model-level problem?
- Would the MiniMax team consider adding `response_format` to the OpenAI-compatible endpoint for M2.5, even if only in `json_object` mode?