Handling non-OpenAI APIs #490
base: main
Conversation
…ude the `output[].content[].annotations` path
Pull request overview
This PR improves compatibility with non-OpenAI APIs that implement OpenAI-compatible interfaces (such as LM Studio) by making the `annotations` field in `OutputTextContent` optional during deserialization. The change uses the `#[serde(default)]` attribute so that if the field is missing from the API response, it defaults to an empty vector instead of causing a deserialization error.
Key Changes
- Added the `#[serde(default)]` attribute to the `annotations` field in the `OutputTextContent` struct, allowing it to be missing from API responses while defaulting to an empty `Vec`
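For illustration, a minimal sketch of what the changed struct looks like. Only the `annotations` field and the `#[serde(default)]` attribute are from this PR's diff; the other field and the `Annotation` type name are illustrative assumptions, not the crate's actual definitions.

```rust
use serde::{Deserialize, Serialize};

/// Stand-in for the crate's annotation type (hypothetical shape).
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
pub struct Annotation {
    pub kind: String,
}

/// Sketch of `OutputTextContent` after this PR.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
pub struct OutputTextContent {
    /// The text output itself (assumed field).
    pub text: String,
    /// With `#[serde(default)]`, a JSON object that omits the
    /// `annotations` key deserializes to an empty Vec instead of
    /// failing with a "missing field" error.
    #[serde(default)]
    pub annotations: Vec<Annotation>,
}
```

Without the attribute, `serde_json::from_str::<OutputTextContent>(r#"{"text":"hi"}"#)` would return a "missing field `annotations`" error; with it, the same input yields `annotations: vec![]`.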
Pull request overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.
```rust
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
pub struct OutputTokenDetails {
    /// The number of reasoning tokens.
    pub reasoning_tokens: u32,
}

/// Usage statistics for a response.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
#[serde(default)]
pub struct ResponseUsage {
    /// The number of input tokens.
    pub input_tokens: u32,
    /// A detailed breakdown of the input tokens.
    pub input_tokens_details: InputTokenDetails,
    /// The number of output tokens.
    pub output_tokens: u32,
    /// A detailed breakdown of the output tokens.
    pub output_tokens_details: OutputTokenDetails,
    /// The total number of tokens used.
    pub total_tokens: u32,
}
```
Copilot AI · Nov 25, 2025
These structs (`InputTokenDetails`, `OutputTokenDetails`, and `ResponseUsage`) already exist in `async-openai/src/types/shared/response_usage.rs` and are re-exported by the responses module in `mod.rs` (lines 23-24, 29).
The duplicate definitions should be removed to avoid:
- Maintenance issues with two versions of the same types
- Potential type conflicts and compilation errors
- Confusion about which version to use
Instead, you should use the existing shared types that are already imported at the top of this file (line 5: `ResponseUsage`).
Suggested change (delete the duplicated definitions):

```rust
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
pub struct OutputTokenDetails {
    /// The number of reasoning tokens.
    pub reasoning_tokens: u32,
}
/// Usage statistics for a response.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Default)]
#[serde(default)]
pub struct ResponseUsage {
    /// The number of input tokens.
    pub input_tokens: u32,
    /// A detailed breakdown of the input tokens.
    pub input_tokens_details: InputTokenDetails,
    /// The number of output tokens.
    pub output_tokens: u32,
    /// A detailed breakdown of the output tokens.
    pub output_tokens_details: OutputTokenDetails,
    /// The total number of tokens used.
    pub total_tokens: u32,
}
```
Non-OpenAI APIs that implement a similar interface don't always include the `output[].content[].annotations` JSON path. Specifically, I ran into this issue with LM Studio, which doesn't include that attribute.

This pull request makes a minor change to the `OutputTextContent` struct in `async-openai/src/types/responses/response.rs`. The change ensures that the `annotations` field now defaults to an empty vector if it is missing during deserialization, which improves robustness when handling API responses from non-OpenAI APIs.
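To make the failure mode concrete, here is a hypothetical LM Studio-style response body where the `output[].content[].annotations` path is absent. Only the `output[].content[].annotations` path comes from this PR; the surrounding field names (`type`, `text`) are assumptions about the payload shape, not verified LM Studio output.

```json
{
  "output": [
    {
      "content": [
        {
          "type": "output_text",
          "text": "Hello from a local model"
        }
      ]
    }
  ]
}
```

Before this change, deserializing such a payload would fail with a missing-field error on `annotations`; after it, the field simply defaults to an empty vector.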