Feature request: response_format support for MiniMax M2.5 via OpenAI-compatible API #4

@joneslloyd

Description

Summary

MiniMax M2.5 does not support response_format (neither json_object nor json_schema) via the OpenAI-compatible chat completions endpoint. The native API documentation explicitly states this feature is "only supported by MiniMax-Text-01". This makes M2.5 unsuitable for production workloads that require structured JSON output.

Current behaviour

When sending response_format: { "type": "json_object" } to POST /v1/chat/completions with M2.5:

  • The parameter is silently ignored; no error or warning is returned.
  • The model generates free-form text that may contain JSON but is not constrained to valid JSON.

Additionally, when relying on prompt-based JSON instructions (the only available workaround), M2.5 frequently produces structurally malformed JSON. We observed a recurring defect across multiple production runs: the model drops the opening { brace for object elements inside arrays:
```
// Expected:
[{"title": "SEO Improvement", "score": 8}, {"title": "Content Gap", "score": 6}]
// Actual M2.5 output:
[{"title": "SEO Improvement", "score": 8}, "title": "Content Gap", "score": 6}]
```
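For concreteness, a minimal reproduction of the request described above. The base URL and model identifier here are assumptions; substitute the actual endpoint and credentials for your account:

```python
# Sketch of the request whose response_format parameter is silently ignored.
# Endpoint and model name are assumptions, not taken from MiniMax docs.
import json
import urllib.request

payload = {
    "model": "MiniMax-M2.5",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "List two SEO findings as JSON."}
    ],
    # Per OpenAI semantics this should constrain output to valid JSON;
    # on M2.5 it is ignored with no error or warning.
    "response_format": {"type": "json_object"},
}

req = urllib.request.Request(
    "https://api.minimax.io/v1/chat/completions",  # assumed base URL
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer $MINIMAX_API_KEY",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # returns 200; output is free-form text
```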

This occurred consistently with schemas containing 10+ fields with nested objects and arrays. We had to implement a context-aware JSON repair function to work around this, which adds complexity and is inherently fragile.
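The repair function itself is not reproduced in this issue; the following is a sketch of one way such a context-aware repair could work. It targets only the dropped-brace defect shown above (a bare "key": pair appearing directly inside an array) and is a heuristic, not a general-purpose JSON repairer:

```python
def repair_dropped_braces(text: str) -> str:
    """Re-insert '{' before bare "key": pairs that appear directly
    inside an array -- the specific M2.5 failure mode described above."""
    out = []
    stack = []  # tracks open '[' and '{' containers
    i, n = 0, len(text)
    while i < n:
        c = text[i]
        if c == '"':
            # Scan the whole string literal, honouring backslash escapes.
            j = i + 1
            while j < n:
                if text[j] == "\\":
                    j += 2
                elif text[j] == '"':
                    j += 1
                    break
                else:
                    j += 1
            # If the string is followed by ':', it is an object key.
            k = j
            while k < n and text[k].isspace():
                k += 1
            is_key = k < n and text[k] == ":"
            if is_key and stack and stack[-1] == "[":
                out.append("{")      # a key directly inside an array:
                stack.append("{")    # the opening brace was dropped
            out.append(text[i:j])
            i = j
            continue
        if c in "[{":
            stack.append(c)
        elif c == "]" and stack and stack[-1] == "[":
            stack.pop()
        elif c == "}" and stack and stack[-1] == "{":
            stack.pop()
        out.append(c)
        i += 1
    return "".join(out)

# The defective output from the example above:
broken = '[{"title": "SEO Improvement", "score": 8}, "title": "Content Gap", "score": 6}]'
fixed = repair_dropped_braces(broken)
```

Even with this in place, the repair is inherently fragile: it cannot distinguish a dropped brace from legitimately unusual output, which is why native response_format support matters.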

Expected behaviour

response_format should be supported for M2.5 via the OpenAI-compatible API:

  1. { "type": "json_object" } should constrain output to syntactically valid JSON
  2. { "type": "json_schema", "json_schema": { "schema": ... } } should constrain output to JSON matching the provided schema

If full json_schema mode is not feasible for M2.5, json_object mode alone would be a significant improvement: it would guarantee valid JSON syntax even without schema conformance.
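For reference, the two request shapes above in OpenAI's wire format. The schema contents are illustrative only, mirroring the array-of-findings example earlier in this issue:

```python
# Mode 1: syntactically valid JSON, no schema conformance.
json_object_format = {"type": "json_object"}

# Mode 2: JSON conforming to a caller-supplied schema
# (OpenAI's json_schema wire shape; names here are illustrative).
json_schema_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "seo_findings",  # hypothetical schema name
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "findings": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "title": {"type": "string"},
                            "score": {"type": "integer"},
                        },
                        "required": ["title", "score"],
                        "additionalProperties": False,
                    },
                }
            },
            "required": ["findings"],
            "additionalProperties": False,
        },
    },
}
```

Either payload is passed as the response_format field of the chat completions request body.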

Documentation gap

The OpenAI-compatible API documentation page does not mention response_format at all; it is absent from the supported parameters table and not listed among the "ignored" parameters either. The only documentation of response_format is in the native API spec, which states it is MiniMax-Text-01 only.
A clear compatibility matrix (per model, per API flavour) would help developers evaluate MiniMax models before committing to integration.

Impact

We evaluated M2.5 for a production AI pipeline with 12 agents that require structured output. Despite M2.5's strong raw capabilities (204K context window, competitive pricing), we abandoned it after two production runs due to the combination of:

  1. No response_format support (the parameter is silently ignored)
  2. Consistently malformed JSON output from prompt-based workarounds
  3. Error 2013 ("invalid chat setting") when using multiple system messages, which blocked one workaround path for schema injection

We switched to OpenAI GPT-5-nano with native response_format support (json_schema mode with strict: true), which works reliably.
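Regarding point 3: until the multiple-system-message restriction is lifted, one possible client-side mitigation (a sketch, not something the MiniMax docs prescribe) is to collapse all system messages into a single leading one before sending:

```python
def merge_system_messages(messages: list[dict]) -> list[dict]:
    """Collapse every system message into one leading system message,
    as a workaround for APIs that reject multiple system messages
    (e.g. the error 2013 behaviour described above)."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if not system_parts:
        return rest
    merged = {"role": "system", "content": "\n\n".join(system_parts)}
    return [merged] + rest
```

This preserves the instruction text but loses the positional interleaving of system messages, so it is not a full substitute for fixing the endpoint.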

Questions

  1. Is there a roadmap for adding response_format to M2.5 via the OpenAI-compatible API?
  2. Is the malformed JSON issue (dropped opening braces) a known model-level problem?
  3. Would the MiniMax team consider adding response_format to the OpenAI-compatible endpoint for M2.5, even if only json_object mode?
