Conversation

@annie444
Non-OpenAI APIs that implement a similar interface don't always include the output[].content[].annotations JSON path. Specifically, I ran into this issue with LM Studio, which doesn't include that attribute.

This pull request makes a minor change to the OutputTextContent struct in async-openai/src/types/responses/response.rs: the annotations field now defaults to an empty vector if it is missing during deserialization, which improves robustness when handling responses from non-OpenAI APIs.

…ude the `output[].content[].annotations` path
Copilot AI review requested due to automatic review settings November 21, 2025 23:20
Copilot finished reviewing on behalf of annie444 November 21, 2025 23:21
Copilot AI left a comment

Pull request overview

This PR improves compatibility with non-OpenAI APIs that implement OpenAI-compatible interfaces (such as LM Studio) by making the annotations field in OutputTextContent optional during deserialization. The change uses the #[serde(default)] attribute to ensure that if the field is missing in the API response, it will default to an empty vector instead of causing a deserialization error.

Key Changes

  • Added #[serde(default)] attribute to the annotations field in the OutputTextContent struct to allow it to be missing from API responses while defaulting to an empty Vec

@annie444 annie444 marked this pull request as draft November 21, 2025 23:42
@annie444 annie444 marked this pull request as ready for review November 25, 2025 18:11
@annie444 annie444 requested a review from Copilot November 25, 2025 18:11
Copilot finished reviewing on behalf of annie444 November 25, 2025 18:14
Copilot AI left a comment

Pull request overview

Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.


Comment on lines 2360 to 2381
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
pub struct OutputTokenDetails {
    /// The number of reasoning tokens.
    pub reasoning_tokens: u32,
}

/// Usage statistics for a response.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
#[serde(default)]
pub struct ResponseUsage {
    /// The number of input tokens.
    pub input_tokens: u32,
    /// A detailed breakdown of the input tokens.
    pub input_tokens_details: InputTokenDetails,
    /// The number of output tokens.
    pub output_tokens: u32,
    /// A detailed breakdown of the output tokens.
    pub output_tokens_details: OutputTokenDetails,
    /// The total number of tokens used.
    pub total_tokens: u32,
}

Copilot AI Nov 25, 2025

These structs (InputTokenDetails, OutputTokenDetails, and ResponseUsage) already exist in async-openai/src/types/shared/response_usage.rs and are re-exported by the responses module in mod.rs (lines 23-24, 29).

The duplicate definitions should be removed to avoid:

  1. Maintenance burden from keeping two versions of the same types in sync
  2. Potential type conflicts and compilation errors
  3. Confusion about which version to use

Instead, you should use the existing shared types that are already imported at the top of this file (line 5: ResponseUsage).

Suggested change
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
pub struct OutputTokenDetails {
    /// The number of reasoning tokens.
    pub reasoning_tokens: u32,
}

/// Usage statistics for a response.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
#[serde(default)]
pub struct ResponseUsage {
    /// The number of input tokens.
    pub input_tokens: u32,
    /// A detailed breakdown of the input tokens.
    pub input_tokens_details: InputTokenDetails,
    /// The number of output tokens.
    pub output_tokens: u32,
    /// A detailed breakdown of the output tokens.
    pub output_tokens_details: OutputTokenDetails,
    /// The total number of tokens used.
    pub total_tokens: u32,
}
