feat: UI/UX enhancement for model call failures (#1213) #1219
eureka928 wants to merge 14 commits into eigent-ai:main
Conversation
@Wendong-Fan @Pakchoioioi @4pmtong @bytecii would you review my PR?
Force-pushed from 83fb9b1 to b0f8259
Hi @Douglasymlai would you review this PR?
Hi @eureka928, thanks for contributing this PR. Quick feedback on the UX for this issue: since an invalid model key blocks the user’s task, the error should be handled inside the chat flow (not just as a notification). It would be better to show an inline error message in the chat with an action, so when the model returns an “invalid key” error, the user can jump directly to Model Settings to fix it. Right now the PR presents the error information, but adding a clear CTA (for example, Open Model Settings) would make the flow much smoother.
Does this work for you? Looking forward to hearing from you.
Force-pushed from b702579 to 26168a1

@Douglasymlai @a7m-1st this is ready as well, please review 😉
Force-pushed from 26168a1 to 93a98ea

@Douglasymlai @4pmtong I think this is also good to merge?
Douglasymlai left a comment:
Everything looks good. @Wendong-Fan @a7m-1st can do the final check for the merging
I'd like to follow up on this.
Let me focus on this in a moment!
Force-pushed from 93a98ea to 92de1c6

I fixed the conflict, would you review and merge?
This redirect doesn’t meet the requirements; it should go to the model settings page. (attachment: bug.mov)
Force-pushed from 92de1c6 to e81b85e

@Pakchoioioi it's updated, that was the old version. (attachment: Screencast.from.2026-03-05.01-31-50.webm)
Also, it would be helpful to keep the original error message from the model printed out as well.
@Pakchoioioi
@a7m-1st is this good to merge now?
| "no-reply-received-task-continue": "No reply received, task continue", | ||
| "splitting-tasks": "Splitting Tasks", | ||
| "start-task": "Start Task", | ||
| "message-cannot-be-empty": "Message cannot be empty", | ||
| "remove-file": "Remove file", | ||
| "drop-files-to-attach": "Drop files to attach", | ||
| "expand-input": "Expand input (⌘P)", | ||
| "queued-tasks": "Queued Tasks", | ||
| "remove-queued-message": "Remove queued message", | ||
| "no-agents-added": "No agents added", | ||
| "open-in-ide": "Open in IDE", | ||
| "open-in-vscode": "Open in VS Code", | ||
| "open-in-cursor": "Open in Cursor", | ||
| "open-in-file-manager": "Open in File Manager", | ||
| "failed-to-open-folder": "Failed to open folder", | ||
| "task-completed-card-title": "Task completed", | ||
| "task-completed-card-subtitle": "Automate your task with a trigger" | ||
| "model-error-go-to-settings": "Model Settings", |
Hi @eureka928, could you help resolve the conflicts?
Use normalize_error_to_openai_format() to classify ModelProcessingError and generic Exception errors, including error_code in SSE payloads so the frontend can take targeted action (e.g. invalid_api_key, model_not_found, insufficient_quota).
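That commit is backend-side, but the contract it creates is what the frontend acts on. A minimal TypeScript sketch of the assumed SSE error payload and the targeted dispatch it enables (field and function names here are illustrative, not the PR's actual identifiers):

```typescript
// Hypothetical shape of the SSE error event carrying a typed error_code.
type ModelErrorCode = "invalid_api_key" | "model_not_found" | "insufficient_quota";

interface SseErrorEvent {
  step: "error";
  data: {
    message: string;
    error_code?: ModelErrorCode; // present only when the error could be classified
  };
}

// Frontend dispatch on the typed code, mirroring the "targeted action" idea:
// an invalid key should route the user to Model Settings, other known codes
// get a warning, and unclassified errors fall back to a generic message.
function actionFor(code: ModelErrorCode | undefined): string {
  switch (code) {
    case "invalid_api_key":
      return "open-model-settings";
    case "model_not_found":
    case "insufficient_quota":
      return "show-warning";
    default:
      return "show-generic-error";
  }
}
```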
Lightweight endpoint to mark a provider as invalid without requiring the full provider payload, used by the frontend when an API key error is detected during a model call.
Add PATCH method support to proxyFetchRequest and expose proxyFetchPatch. Add optional error_code field to AgentMessage.data for typed error handling in the frontend.
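A sketch of what adding PATCH support to a fetch wrapper looks like; the real proxyFetchRequest in src/api/http.ts carries proxy and auth logic not shown here, so only the request-building step is modeled:

```typescript
type HttpMethod = "GET" | "POST" | "PUT" | "PATCH" | "DELETE";

// Local stand-in for the browser's RequestInit, to keep the sketch self-contained.
interface RequestOptions {
  method: HttpMethod;
  headers: Record<string, string>;
  body?: string;
}

// Build the request options for a JSON call; a missing body stays undefined
// rather than serializing to the string "undefined".
function buildInit(method: HttpMethod, body?: unknown): RequestOptions {
  return {
    method,
    headers: { "Content-Type": "application/json" },
    body: body === undefined ? undefined : JSON.stringify(body),
  };
}

// Convenience wrapper in the spirit of proxyFetchPatch.
const patchInit = (body?: unknown): RequestOptions => buildInit("PATCH", body);
```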
Show contextual toast on model call failures with a link to Model Settings. Persistent (Infinity duration) for invalid_api_key errors, 8s for others.
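The duration rule described there reduces to a tiny helper; this is a sketch of that rule only, not the PR's actual toast code:

```typescript
// Persistent toast for invalid_api_key (the user must act before retrying),
// 8 seconds for other model errors. Infinity is the conventional "sticky"
// duration value in common toast libraries.
function toastDuration(errorCode?: string): number {
  return errorCode === "invalid_api_key" ? Infinity : 8000;
}
```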
Capture preferredProviderId during chat start. On SSE error events, show model error toast and call the invalidation endpoint when error_code is invalid_api_key.
Fix is_valid field mapping to read is_vaild (legacy typo) from the API response. Fix save payloads to send is_vaild: 2 (VaildStatus.is_valid). Update sidebar, BYOK detail, and default model dropdown to show red indicators for invalid providers and warning styling on the dropdown trigger. Show inline API key error when provider is invalidated.
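A sketch of the mapping fix described above: normalize the backend's legacy-typo wire field to a frontend boolean. The enum value 2 is taken from the commit description (VaildStatus.is_valid); the interface is illustrative, not the PR's actual type:

```typescript
// The API response preserves the legacy typo `is_vaild` on the wire.
interface ProviderApiResponse {
  id: string;
  is_vaild?: number;
}

const VALID_STATUS = 2; // VaildStatus.is_valid per the PR description

// Reading the correctly-spelled `is_valid` here was the original bug:
// the field never existed, so the status dot was always wrong.
function isProviderValid(p: ProviderApiResponse): boolean {
  return p.is_vaild === VALID_STATUS;
}
```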
Add chat keys for model error toast messages (invalid_api_key, model_not_found, insufficient_quota) and setting keys for API key invalid warning and default model unavailable labels across all 11 locales.
- Add Action.error and ActionErrorData to the queue system so workforce background tasks can push errors to the SSE stream
- Fix "simple answer" error handlers to emit SSE error events instead of silently swallowing exceptions
- Add a PUT fallback when the PATCH /invalidate endpoint is not deployed
- Store a preferredProvider reference for the invalidation fallback

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
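The PUT fallback mentioned there can be sketched as follows; the transport is injected so the logic is testable, and the status codes treated as "endpoint missing" are an assumption (older servers without the PATCH route typically answer 404 or 405):

```typescript
// Statuses indicating the PATCH /invalidate route is not deployed.
function shouldFallbackToPut(status: number): boolean {
  return status === 404 || status === 405;
}

// Try PATCH first; retry with PUT only when the route itself is missing.
// `doFetch` abstracts the actual HTTP call (proxyFetchPatch/proxyFetchPut
// in the real code, presumably).
async function invalidateProvider(
  id: string,
  doFetch: (method: "PATCH" | "PUT") => Promise<{ status: number }>
): Promise<void> {
  const res = await doFetch("PATCH");
  if (shouldFallbackToPut(res.status)) {
    await doFetch("PUT");
  }
}
```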
Show model errors (invalid API key, model not found, quota exceeded) as an inline warning card in the chat message list instead of only a toast. The card includes a contextual description and an "Open Model Settings" button that navigates to /setting/models.
The "Open Model Settings" button navigated to /setting/models which redirected to the settings tab. The models page is actually under the agents tab at /history?tab=agents.
Display the raw error from the model provider below the i18n description so users have the full context for debugging.
Extract message from SDK .body attribute or parse embedded dict, and add break-all to prevent long error strings from overflowing.
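A simplified sketch of that extraction step: pull a human-readable message out of a provider error body when it is JSON, otherwise show the raw string. The real commit also reads the SDK's .body attribute and parses Python-style dict strings, which this sketch does not attempt; the `error.message` path follows the OpenAI-style error envelope:

```typescript
// Best-effort extraction of a provider error message from a raw body string.
function extractProviderMessage(raw: string): string {
  try {
    const parsed = JSON.parse(raw);
    // OpenAI-style envelope: { "error": { "message": "..." } }
    if (parsed && typeof parsed.error?.message === "string") {
      return parsed.error.message;
    }
  } catch {
    // Not JSON; fall through and show the raw text.
  }
  return raw;
}
```

The `break-all` CSS mentioned in the commit is the companion fix: without it, long unbroken strings returned here would overflow the warning card.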
Force-pushed from 6afc7b4 to 1957859

@4pmtong this is done, if you don't mind, would you review and merge today?
Hi @eureka928 I noticed that quite a few configurations seem to be missing in the i18n files — it looks like some entries may have been lost during conflict resolution. Could you take another look at that part and make sure everything is intact? Thanks! I've attached a few screenshots above for reference, though they don't cover all the missing entries — there may be more affected areas worth checking.
Restores setting.json keys (gpt-5.4-name, help-improve-eigent, etc.) and chat.json keys (queued-tasks, splitting-tasks, etc.) across all 11 locales that were dropped when resolving merge conflicts.
Good catch @4pmtong — these were lost during rebase conflict resolution. I've restored all missing keys. Fixed in commit 6baf1b1.
Hi @Wendong-Fan @4pmtong @a7m-1st I addressed all comments.
@fengju0213 would you merge this PR? I addressed everything and got approval.








Related Issue
Closes #1213
Description
Implements UI/UX improvements for model call failures to guide users toward fixing configuration issues, as described in #1213.
Backend:
- Classify model call errors into an error_code (invalid_api_key, model_not_found, insufficient_quota) using the existing normalize_error_to_openai_format() classifier
- Add a PATCH /provider/{id}/invalidate endpoint to mark a provider as invalid without sending full provider data

Frontend:
- Show a contextual toast on model call failures with a link to Model Settings (persistent for invalid_api_key, 8s for others)
- On invalid_api_key errors, automatically call the invalidation endpoint to mark the provider invalid on the server
- Fix the is_valid field mapping — the backend returns is_vaild (legacy typo) but the frontend was reading is_valid, resulting in the green dot always being wrong
- Save payloads now send is_vaild: 2 (the correct field name and enum value)

i18n:
- Add chat keys for the model error toast messages and setting keys for the API key invalid warning and default model unavailable labels across all 11 locales
Changes Made
- backend/app/service/chat_service.py: include error_code in SSE error payloads
- server/app/controller/provider/provider_controller.py: add PATCH /provider/{id}/invalidate
- src/api/http.ts: proxyFetchPatch helper, PATCH method support
- src/types/chatbox.d.ts: add error_code to AgentMessage.data
- src/components/Toast/modelErrorToast.tsx: model error toast
- src/store/chatStore.ts: handle SSE error events
- src/pages/Setting/Models.tsx: fix is_valid mapping, update status dots, add warnings
- src/i18n/locales/*/chat.json, src/i18n/locales/*/setting.json: new i18n keys

Testing Evidence (REQUIRED)
What is the purpose of this pull request?
Contribution Guidelines Acknowledgement