chore: merge upstream gemini-cli v0.29.7 & v0.30.0 (#31)
Open

Conversation
Co-authored-by: Gal Zahavi <38544478+galz10@users.noreply.github.com>
…ly and thus crashed when undefined... Fixes google-gemini#18076 (google-gemini#18099)
…ing (google-gemini#125)
* feat(core,ui): support Gemini 3.1 Pro Preview and active model filtering
* fix(core,ui): use optional chaining for config methods to support mocks
* fix(core): clear stale authType in refreshAuth to avoid incorrect model resolution
* do not show Gemini 3.1 model when users do not have access to Gemini 3.1 in Stats
* feat(models): add support for Gemini 3.1 and custom tool models
* test(routing): fix classifier and numerical classifier strategy tests
* test(routing): add Gemini 3.1 tests for classifier strategy
* fix(models): correctly filter active Gemini 3.1 models
* fix(routing): ensure useCustomToolModel is only true when Gemini 3.1 is enabled
* fix(test-utils): prevent double newline in lastFrame() on Windows
* fix(test-utils): surgically fix double newline in lastFrame() on Windows
* use custom_tools_model string for api key only
* fix(ui): correct useCustomToolModel logic and update tests
* fix(ui): correct useCustomToolModel logic in StatsDisplay
* fix(routing): ensure test models are active and sync useCustomToolModel logic
* only use customToolModel for API-key users
* use displayString for showing model in About
… to patch version v0.30.0-preview.3 and create version 0.30.0-preview.4 (google-gemini#20040) Co-authored-by: Sehoon Shon <sshon@google.com>
…S] (google-gemini#20039) Co-authored-by: Sehoon Shon <sshon@google.com>
… to patch version v0.30.0-preview.4 and create version 0.30.0-preview.5 (google-gemini#20086) Co-authored-by: Sandy Tao <sandytao520@icloud.com>
…version v0.29.6 and create version 0.29.7 (google-gemini#20111) Co-authored-by: Bryan Morgan <bryanmorgan@google.com>
… to patch version v0.30.0-preview.5 and create version 0.30.0-preview.6 (google-gemini#20112) Co-authored-by: Bryan Morgan <bryanmorgan@google.com>
Merge upstream google-gemini/gemini-cli v0.29.7 into aioncli.

Key changes incorporated:
- PromptProvider class refactoring (extracted from inline prompts)
- HookSystem replacing messageBus-based hooks
- AcknowledgedAgentsService for agent management
- Updated dependencies across all packages

Protected aioncli modifications preserved:
- OpenAI adapter (openaiContentGenerator.ts)
- Multi-model token limits (tokenLimits.ts)
- Bedrock model definitions (models.ts)
- Custom package identity (@office-ai/aioncli-core)
- API key rotation logic in fallback handler
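The "multi-model token limits" preserved in tokenLimits.ts could be structured roughly as below. This is a hypothetical sketch: the model prefixes, limit values, and function name are illustrative assumptions, not the actual aioncli table.

```typescript
// Hypothetical sketch of a multi-model token-limit table; the real
// tokenLimits.ts may use different names, prefixes, and values.
const DEFAULT_TOKEN_LIMIT = 128_000;

// Illustrative model-prefix -> context-window pairs (assumed values).
const TOKEN_LIMITS: ReadonlyArray<[prefix: string, limit: number]> = [
  ["gemini-2.5", 1_048_576],
  ["claude-3-5", 200_000],
  ["gpt-4o", 128_000],
];

// First-prefix match, falling back to a conservative default so that
// unknown provider models still get a usable limit.
function tokenLimit(model: string): number {
  for (const [prefix, limit] of TOKEN_LIMITS) {
    if (model.startsWith(prefix)) return limit;
  }
  return DEFAULT_TOKEN_LIMIT;
}
```

A prefix table like this keeps one lookup path for Gemini, Anthropic, and OpenAI model IDs instead of per-provider branches.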
Merge upstream google-gemini/gemini-cli v0.30.0 into aioncli (141 commits).

Key changes from v0.30.0:
- LlmRole parameter added to generateContent/generateContentStream
- supportsModernFeatures() replaces isPreviewModel() for model detection
- isCustomModel() function for non-Gemini model identification
- GEMINI_CLI env var identification for MCP server transports
- getAuthTypeFromEnv() moved to core (kept our extended version in CLI)
- Improved model-not-found error messages (VALID_GEMINI_MODELS check)
- allowPlanMode replaces isPlanEnabled in approval mode cycling

Protected aioncli modifications preserved:
- OpenAI adapter (openaiContentGenerator.ts)
- Multi-model token limits (tokenLimits.ts)
- Bedrock model definitions (models.ts)
- Custom package identity (@office-ai/aioncli-core)
- Extended getAuthTypeFromEnv with Bedrock/OpenAI detection
- baseLlmClient.ts generateJsonForOpenAI with LlmRole support
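The model-detection helpers mentioned above could look roughly like this sketch. The heuristics (prefix check, version threshold) are assumptions for illustration; the upstream implementations may use explicit model lists instead.

```typescript
// Hedged sketch of the v0.30.0 model-detection helpers; the real
// gemini-cli code may use different heuristics or model tables.
function isCustomModel(model: string): boolean {
  // Assumption: any ID not named like a Gemini model is "custom"
  // (e.g. OpenAI, Anthropic, or Bedrock model IDs).
  return !model.startsWith("gemini-");
}

function supportsModernFeatures(model: string): boolean {
  // Assumption: custom models go through adapters and bypass
  // Gemini-specific feature gates; Gemini models gate on version.
  if (isCustomModel(model)) return false;
  const match = model.match(/^gemini-(\d+)\.(\d+)/);
  if (!match) return false;
  const major = Number(match[1]);
  const minor = Number(match[2]);
  return major > 2 || (major === 2 && minor >= 5);
}
```

Routing both checks through one predicate pair is what lets the rest of the CLI stay provider-agnostic.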
Upstream v0.30.0 added a `role: LlmRole` third parameter to the ContentGenerator interface's generateContent/generateContentStream methods. Our three custom adapters (OpenAI, Bedrock, Anthropic) were missing this parameter. While TypeScript allows fewer params in implementations, this fix ensures interface conformance and consistency. Also fixes OpenRouter header test to match aioncli branding values. Note: Pre-existing ESLint no-unsafe-type-assertion warnings in adapter files are deferred - they predate this change and require a separate cleanup effort.
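The conformance fix described above amounts to declaring the third parameter in each adapter. The sketch below uses simplified stand-in types: LlmRole's variants, the request/response shapes, and the class name are assumptions, not the actual aioncli-core definitions.

```typescript
// Sketch of conforming a custom adapter to the updated interface.
// LlmRole and the request/response shapes are simplified stand-ins.
type LlmRole = "main" | "subagent" | "utility"; // assumed variants

interface GenerateContentResponse {
  text: string;
}

interface ContentGenerator {
  generateContent(
    req: { prompt: string },
    userPromptId: string,
    role: LlmRole,
  ): Promise<GenerateContentResponse>;
}

// A custom adapter now accepts the third `role` parameter even if it
// only uses it for logging or model routing.
class OpenAIContentGeneratorSketch implements ContentGenerator {
  async generateContent(
    req: { prompt: string },
    userPromptId: string,
    role: LlmRole,
  ): Promise<GenerateContentResponse> {
    // TypeScript would also accept an implementation that omits `role`
    // (implementations may take fewer params), but declaring it keeps
    // the adapters visibly consistent with the interface.
    return { text: `[${role}] echo: ${req.prompt}` };
  }
}
```

This is why the fix is about consistency rather than a compile error: omitting the parameter type-checks, but silently drops information the callers now pass.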
…del auth

When OPENAI_API_KEY, ANTHROPIC_API_KEY, or AWS credentials are set via environment variables, they now take priority over the stored Google OAuth selectedType in non-interactive mode. This fixes the issue where users with cached Google OAuth credentials could not use alternate providers via env vars without first clearing their settings.

Also adds ANTHROPIC_API_KEY detection and validation in the non-interactive auth flow, completing the multi-model auth support.
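The env-var-first precedence could be resolved by a function along these lines. The AuthType names, the exact precedence order among providers, and the AWS credential check are assumptions for illustration.

```typescript
// Hedged sketch of env-var-first auth resolution for non-interactive
// mode; AuthType names and provider ordering are assumptions.
type AuthType =
  | "openai-api-key"
  | "anthropic-api-key"
  | "bedrock"
  | "oauth-personal";

function resolveAuthType(
  env: Record<string, string | undefined>,
  storedSelectedType: AuthType | undefined,
): AuthType | undefined {
  // Environment variables win over the cached Google OAuth selection,
  // so users can switch providers without clearing their settings.
  if (env.OPENAI_API_KEY) return "openai-api-key";
  if (env.ANTHROPIC_API_KEY) return "anthropic-api-key";
  if (env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY) return "bedrock";
  return storedSelectedType;
}
```

The key design point is the fallthrough: the stored selectedType is only consulted when no provider credentials are present in the environment.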
Update aioncli-analysis.md with detailed two-sided comparison tables for all 6 key conflict files resolved during the v0.29.7 and v0.30.0 upstream merges, plus post-merge fix documentation. Also fix unhandled 'exit' ShellOutputEvent in shell.ts.
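The shell.ts fix mentioned above is about covering every event variant. Below is a hedged sketch of handling an 'exit' event in an exhaustive switch; the ShellOutputEvent shape and variant names here are simplified assumptions, not the actual gemini-cli types.

```typescript
// Hedged sketch of exhaustively handling shell output events,
// including a previously unhandled "exit" case; event shapes are
// simplified assumptions.
type ShellOutputEvent =
  | { type: "data"; chunk: string }
  | { type: "binary_detected" }
  | { type: "exit"; code: number | null };

function describeShellEvent(ev: ShellOutputEvent): string {
  switch (ev.type) {
    case "data":
      return ev.chunk;
    case "binary_detected":
      return "[binary output]";
    case "exit":
      // The case that was previously missing: without it, exit events
      // fell through and the UI showed nothing for process completion.
      return `process exited with code ${ev.code ?? "unknown"}`;
  }
}
```

With a discriminated union like this, TypeScript's exhaustiveness checking flags any variant a switch forgets, which is how such gaps are usually caught.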
ab76144 to 2b8e763
halvee-tech approved these changes on Feb 27, 2026
Summary
- Added LlmRole parameter to all custom ContentGenerator adapters (OpenAI, Anthropic, Bedrock) to match the updated interface
- Environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) now take priority over stored Google OAuth settings
- Added ANTHROPIC_API_KEY detection and validation in the auth flow

Key upstream changes (v0.30.0)
- LlmRole parameter added to generateContent/generateContentStream interface
- supportsModernFeatures() replaces isPreviewModel() for model detection
- isCustomModel() function for non-Gemini model identification
- getAuthTypeFromEnv() moved to core (kept our extended version in CLI with Bedrock/OpenAI/Anthropic support)
- allowPlanMode replaces isPlanEnabled in approval mode cycling

Conflict resolution strategy
65 files had changes from both sides. Key resolutions:
- contentGenerator.ts: kept USE_OPENAI/ANTHROPIC/BEDROCK enums and extended ContentGeneratorConfig; adopted the LlmRole param and getAuthTypeFromEnv()
- baseLlmClient.ts: kept generateJsonForOpenAI() (~120 lines); adopted role: LlmRole threading
- validateNonInterActiveAuth.ts: kept our extended getAuthTypeFromEnv() detection
- models.ts: adopted isCustomModel() and supportsModernFeatures()
- package.json: kept @office-ai/aioncli-core name and the openai/@anthropic-ai/sdk/@aws-sdk deps

Test results
- … AnthropicContentGenerator correctly …

Test plan
- npm run build passes
- … OPENAI_API_KEY …
- … OPENAI_API_KEY … ANTHROPIC_API_KEY …
- file: link integration compiles (0 TS errors after adaptation)

Generated with Claude Code