Releases: BlockRunAI/ClawRouter
v0.10.18 — Re-pin session to fallback model after provider failure
Problem
v0.10.17 pinned the session to the routing decision (primary model), not the actual model used. When the primary was rate-limited and fell back to gemini-flash:
- Session pinned to kimi-k2.5 (the routing decision)
- Next request: session says kimi-k2.5 → rate-limited again → falls back to gemini-flash again
- Conversation stayed stuck in a retry-and-fallback loop
Fix
After the fallback loop resolves `actualModelUsed`, update the session pin to the actual responding model:
```
Request 1: routes → kimi-k2.5 (pinned) → rate-limited → falls back to gemini-flash
session updated: kimi-k2.5 → gemini-flash
Request 2: session says gemini-flash → used directly, no retry
Request 3: session says gemini-flash → stable
```
The conversation now stays on the working model for the full 30-minute session window.
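The re-pin step can be sketched as follows. This is a minimal illustration, not the actual ClawRouter internals: `SessionStore` here is a plain map, and `tryModel` stands in for the real request path that resolves `actualModelUsed`.

```typescript
type SessionStore = Map<string, string>; // sessionId → pinned model

function completeWithFallback(
  sessions: SessionStore,
  sessionId: string,
  routedModel: string,
  tryModel: (model: string) => string | null, // model that responded, or null on failure
  fallbacks: string[],
): string {
  // Try the pinned/routed model first, then walk the fallback chain.
  for (const model of [routedModel, ...fallbacks]) {
    const actualModelUsed = tryModel(model);
    if (actualModelUsed !== null) {
      // v0.10.18: pin the session to the model that actually responded,
      // not to the original routing decision.
      sessions.set(sessionId, actualModelUsed);
      return actualModelUsed;
    }
  }
  throw new Error("all models in the chain failed");
}
```

With a rate-limited primary, the first call pins the session to the fallback, so the next request uses it directly instead of re-entering the retry loop.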
Upgrade
```bash
~/.blockrun/scripts/update.sh
```
v0.10.17 — Session persistence fix (no more model jumping)
Problem
Users in group chats reported models constantly jumping back to `gemini-flash`, even after setting a primary model in the dashboard. Each message was being re-routed from scratch.
Root Cause
Two bugs working together:
- Session persistence was disabled by default — `SessionStore` had `enabled: false`, so model pinning never activated
- No session ID without the header — `getSessionId()` only read the `x-session-id` header. OpenClaw sends no such header, so every request got `undefined` → full re-routing on every turn
Fix
- `DEFAULT_SESSION_CONFIG.enabled`: `false` → `true`
- Added `deriveSessionId()`: stable 8-char hex derived from SHA-256 of the first user message — same opening message anchors the same session across all turns
- `proxy.ts`: `getSessionId(headers) ?? deriveSessionId(parsedMessages)`
Now model selection is pinned for 30 minutes per conversation without requiring any client-side session header.
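A sketch of the derivation described above, assuming no extra normalization or salting beyond what the release notes state (a stable 8-character hex ID from the SHA-256 of the first user message):

```typescript
import { createHash } from "node:crypto";

interface ChatMessage {
  role: string;
  content: string;
}

// Same opening user message → same session ID on every turn.
function deriveSessionId(messages: ChatMessage[]): string | undefined {
  const firstUser = messages.find((m) => m.role === "user");
  if (!firstUser) return undefined;
  return createHash("sha256").update(firstUser.content).digest("hex").slice(0, 8);
}
```

Because the ID depends only on the first user message, every later turn in the same conversation resolves to the same session, which is what lets the 30-minute pin survive without a client-side header.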
Upgrade
```bash
~/.blockrun/scripts/update.sh
```
v0.10.15 — Root-cause fix: toolCalling capability flag prevents routing tool requests to incompatible models
feat: toolCalling capability flag + routing filter
Why v0.10.14 was directionally correct but incomplete
v0.10.14 removed grok-code-fast-1 from tier configs. That stopped the immediate bleeding, but the root cause remained: ClawRouter had no mechanism to prevent incompatible models from entering the routing chain when a request includes tool schemas. Any future model added to tier configs with the same issue would cause the same bug.
What v0.10.15 adds
A toolCalling capability flag on every model definition and a routing filter that enforces it.
`models.ts` — new flag on `BlockRunModel`:
```ts
toolCalling?: boolean
// true → supports OpenAI-compatible structured function/tool calling
// false/omitted → outputs tool invocations as plain text (skipped when request has tools)
```
Models WITHOUT `toolCalling`:
- `xai/grok-code-fast-1` — outputs `{"command":"..."}` as plain text (the original bug)
- `nvidia/gpt-oss-120b` — free model, structured function call support unverified
`selector.ts` — new `filterByToolCalling()` function:
- When the request has `tools`, removes models without `toolCalling: true` from the fallback chain
- If no capable model remains, returns the full chain (API error beats silent failure)
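A sketch of that filter, using a simplified stand-in for the real `BlockRunModel` type (the actual signature in `selector.ts` may differ):

```typescript
interface BlockRunModel {
  id: string;
  toolCalling?: boolean;
}

function filterByToolCalling(chain: BlockRunModel[], hasTools: boolean): BlockRunModel[] {
  if (!hasTools) return chain; // no tool schemas → no restriction
  const capable = chain.filter((m) => m.toolCalling === true);
  // If nothing in the chain supports structured tool calls, keep the full
  // chain: a visible API error beats a silent wrong answer.
  return capable.length > 0 ? capable : chain;
}
```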
`proxy.ts` — applies the filter after context-window filtering:
```
[ClawRouter] Tool-calling filter: excluded xai/grok-code-fast-1 (no structured function call support)
```
Upgrade
```bash
openclaw plugins update clawrouter
# or
openclaw plugins install @blockrun/clawrouter@0.10.15
```
v0.10.14 — Fix 'talking to itself' bug (remove grok-code-fast-1 from tool-use paths)
Fix: Replace grok-code-fast-1 with kimi-k2.5 in MEDIUM routing tiers
Problem
Multiple users reported that ClawRouter was "talking to itself" — showing raw JSON like `{"command":"..."}` as visible chat messages instead of silently executing tool calls. A large error block would sometimes appear in the chat as a result.
Root cause: xai/grok-code-fast-1 (Grok Code Fast) does not properly handle OpenClaw's function call format. When given tool schemas, it outputs tool invocations as plain text JSON in the response body rather than as structured function calls. OpenClaw displays this raw JSON as a visible message, making it look like the AI is narrating its own actions.
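The difference is easiest to see in the two response shapes (OpenAI-compatible format; the `run_command` tool name is illustrative):

```typescript
// A well-behaved model returns a structured tool call, which the client
// executes silently:
const structured = {
  role: "assistant",
  content: null,
  tool_calls: [
    {
      id: "call_1",
      type: "function",
      function: { name: "run_command", arguments: '{"command":"ls"}' },
    },
  ],
};

// grok-code-fast-1 instead put the invocation in the visible text body,
// which OpenClaw rendered as a chat message:
const plainText = {
  role: "assistant",
  content: '{"command":"ls"}',
};
```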
Fix
Replaced grok-code-fast-1 with moonshot/kimi-k2.5 as the primary model for MEDIUM-complexity tasks in both the default and agentic tier configs. Kimi K2.5 has strong tool-use support and correctly uses OpenClaw's structured function call format.
| Config path | Before | After |
|---|---|---|
| `tiers.MEDIUM.primary` | `xai/grok-code-fast-1` | `moonshot/kimi-k2.5` |
| `agenticTiers.MEDIUM.primary` | `xai/grok-code-fast-1` | `moonshot/kimi-k2.5` |
| `premiumTiers.SIMPLE.fallback[2]` | `xai/grok-code-fast-1` | `deepseek/deepseek-chat` |
grok-code-fast-1 is no longer used in any routing path.
Upgrade
```bash
openclaw plugins update clawrouter
# or
openclaw plugins install @blockrun/clawrouter@0.10.14
```
v0.10.13 — Fix async plugin registration (issue #56)
Fix: make register() synchronous for OpenClaw compatibility
Reported by: @KevinkeVarson in #56 — thank you for the detailed reproduction steps and version bisect!
Problem
After upgrading from 0.10.10 to 0.10.12, OpenClaw logs:
```
[gateway] [plugins] plugin register returned a promise; async registration is ignored (plugin=clawrouter, source=~/.openclaw/extensions/clawrouter/dist/index.js)
```
Routing never initializes — blockrun/auto and all BlockRun models stop working silently.
Root Cause
In 0.10.12, a dynamic `await import('./partners/index.js')` inside `register()` turned the entire function async, causing it to return a Promise. OpenClaw's plugin loader does not await plugin registration — it discards the promise and skips all initialization that follows.
Fix
Converted the dynamic await import() to a static top-level import, removing the only await in register(). The function is now fully synchronous. All other async work (proxy startup, wallet initialization, command registration) was already fire-and-forget via .then() and is unaffected.
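The shape of the fix, sketched with illustrative stand-ins (`PluginApi` and `registerPartners` are not the real types):

```typescript
// Before (0.10.12): the dynamic import made register() async, so it
// returned a Promise that OpenClaw's loader discarded:
//
//   export async function register(api: PluginApi) {
//     const partners = await import("./partners/index.js"); // ← the culprit
//     ...
//   }

// After (0.10.13): partners comes in via a static top-level import, so
// register() is fully synchronous.
interface PluginApi {
  onRequest(fn: () => void): void;
}
const registerPartners = (_api: PluginApi) => {}; // stand-in for the static import

function register(api: PluginApi): void {
  registerPartners(api); // previously behind `await import(...)`
  // Remaining async work stays fire-and-forget, as before:
  Promise.resolve().then(() => {
    /* start proxy, init wallet, register commands, ... */
  });
}
```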
Files Changed
| File | Change |
|---|---|
| `src/index.ts` | Add static import for `partners/index.js`; remove `async` from `register()`; remove dynamic `await import()` |
| `package.json` | Version bump 0.10.12 → 0.10.13 |
Upgrade
```bash
openclaw plugins update clawrouter
# or
openclaw plugins install @blockrun/clawrouter@0.10.13
```
Fixes #56
v0.10.12 — Fix system prompt scoring (issue #50)
Fix: Score all keyword dimensions against user text only
Reported by: @Machiel692 in #50 — thank you for the exceptionally detailed bug report and follow-up testing!
Problem
When ClawRouter is used as an OpenClaw plugin, the system prompt (~6,000 tokens with 20+ tool definitions) contains keywords that match nearly every scoring dimension. This caused all requests to score identically (~0.47) regardless of user intent, making blockrun/auto routing completely non-functional.
Root Cause
13 of 15 scoring dimensions in classifyByRules() scored against the concatenated system prompt + user message. The user's actual query (<1% of scored text) had no measurable impact on the score.
Fix
Changed all keyword-based scoring dimensions to score against userText only (the user's message), matching the pattern already established for reasoningMarkers and scoreAgenticTask. The tokenCount dimension still uses total context size since that legitimately affects model selection.
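An illustrative sketch of the split. The keyword list, weights, and dimension count here are invented for demonstration; only the choice of *which text each dimension scores against* reflects the fix.

```typescript
const CODE_KEYWORDS = ["function", "refactor", "compile", "debug"];

function keywordScore(text: string, keywords: string[]): number {
  const t = text.toLowerCase();
  const hits = keywords.filter((k) => t.includes(k)).length;
  return Math.min(1, hits / keywords.length);
}

function classify(systemPrompt: string, userText: string, maxTokens = 8000): number {
  // v0.10.12: keyword dimensions look at the user's message only...
  const code = keywordScore(userText, CODE_KEYWORDS);
  // ...while tokenCount still uses total context size, since that
  // legitimately constrains model selection (~4 chars per token assumed).
  const tokens = Math.min(1, (systemPrompt.length + userText.length) / 4 / maxTokens);
  return 0.8 * code + 0.2 * tokens;
}
```

With a large tool-heavy system prompt, a trivial query now scores near zero while a coding query scores high, instead of both collapsing to the same value.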
Before vs After
| Query | Score Before | Score After |
|---|---|---|
| "What time is it?" | ~0.47 | 0.080 |
| "What's the weather?" | ~0.47 | 0.080 |
| Complex coding task | ~0.47 | 0.182 |
| Math proof (reasoning) | ~0.47 | 0.260 |
Scores now differentiate properly across query complexity levels.
Testing
- 214 unit tests pass
- 40 e2e tests pass (including new OpenClaw-scale system prompt scenario)
Files Changed
| File | Change |
|---|---|
| `src/router/rules.ts` | Score all keyword dimensions against `userText` only |
| `test/e2e.ts` | Add OpenClaw-scale system prompt e2e test |
| `package.json` | Version bump 0.10.11 → 0.10.12 |
Fixes #50
v0.10.11
What's New
Routing Debug Headers
Added x-clawrouter-* response headers that expose the full routing decision for every request — making it easy to see exactly which profile, tier, model, and scoring was used.
Debug mode is ON by default. To disable, send x-clawrouter-debug: false header.
Non-streaming responses
Returns these HTTP response headers:
| Header | Example | Description |
|---|---|---|
| `x-clawrouter-profile` | `eco`, `auto`, `premium` | Which routing profile was used |
| `x-clawrouter-tier` | `SIMPLE`, `MEDIUM`, `COMPLEX`, `REASONING` | Complexity tier assigned |
| `x-clawrouter-model` | `anthropic/claude-sonnet-4.6` | Actual model that served the request |
| `x-clawrouter-confidence` | `0.85` | Router confidence in tier assignment (0-1) |
| `x-clawrouter-reasoning` | `score=0.12 \| reasoning_keywords(2) \| agentic` | Full scoring breakdown |
| `x-clawrouter-agentic-score` | `0.60` | Agentic task detection score (0-1, triggers agentic tiers at ≥0.5) |
Streaming responses
Emits an SSE comment before data starts (invisible to standard SSE clients, visible in raw stream):
```
: x-clawrouter-debug profile=auto tier=REASONING model=xai/grok-4-1-fast-reasoning agentic=0.00 confidence=0.97 reasoning=score=0.10 | short (15 tokens), reasoning (prove, step by step)
```
Usage
```bash
# Debug ON (default) — headers included automatically
curl -v http://127.0.0.1:8402/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"blockrun/auto","messages":[{"role":"user","content":"hello"}]}'

# Debug OFF — no routing headers
curl http://127.0.0.1:8402/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-clawrouter-debug: false" \
  -d '{"model":"blockrun/auto","messages":[{"role":"user","content":"hello"}]}'
```
How routing works
ClawRouter routes requests through these steps:
- Profile selection — based on model name (`blockrun/eco`, `blockrun/auto`, `blockrun/premium`)
- Rule-based scoring — 14 weighted dimensions classify prompt complexity in <1ms
- Tier assignment — `SIMPLE` → `MEDIUM` → `COMPLEX` → `REASONING`
- Agentic detection — if agentic score ≥ 0.5, switches to agentic-optimized model tiers
- Model selection — picks cheapest capable model for the tier + profile
- Fallback chain — if primary model fails, tries fallbacks filtered by context window size
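The steps above can be sketched end to end. All thresholds, tier configs, and model IDs below are illustrative stand-ins, not the shipped configuration:

```typescript
type Tier = "SIMPLE" | "MEDIUM" | "COMPLEX" | "REASONING";

const TIERS: Record<Tier, { primary: string; fallbacks: string[] }> = {
  SIMPLE: { primary: "google/gemini-flash", fallbacks: ["deepseek/deepseek-chat"] },
  MEDIUM: { primary: "moonshot/kimi-k2.5", fallbacks: ["google/gemini-flash"] },
  COMPLEX: { primary: "anthropic/claude-sonnet-4.6", fallbacks: ["moonshot/kimi-k2.5"] },
  REASONING: { primary: "xai/grok-4-1-fast-reasoning", fallbacks: ["anthropic/claude-sonnet-4.6"] },
};

// Map a complexity score to a tier (cutoffs invented for illustration).
function assignTier(score: number): Tier {
  if (score < 0.1) return "SIMPLE";
  if (score < 0.2) return "MEDIUM";
  if (score < 0.25) return "COMPLEX";
  return "REASONING";
}

function route(score: number, agenticScore: number): { tier: Tier; chain: string[] } {
  const tier = assignTier(score);
  // Agentic detection: at agenticScore ≥ 0.5 the real router switches to
  // agentic-optimized tier configs; omitted here for brevity.
  const cfg = TIERS[tier];
  return { tier, chain: [cfg.primary, ...cfg.fallbacks] };
}
```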
Bug Fixes
- Fixed Gemini API "function call turn" ordering errors in upstream BlockRun API
- Fixed prettier formatting across all source files
- Removed unused imports in `doctor.ts`
Internal
- Added `agenticScore` to `RoutingDecision` type for full observability pipeline
- Fallback chain now correctly respects routing profile (eco/auto/premium) tier configs
v0.10.10 — Fix Auto Mode Routing to Sonnet
ClawRouter v0.10.9 — Fix Agentic Mode False Trigger (Auto Mode Routing to Sonnet)
Release Date: 2026-02-24
🐛 Bug Fix: blockrun/auto no longer routes all requests to Sonnet
Root cause: agenticScore was computed from systemPrompt + userPrompt combined text. Coding assistant system prompts (e.g., OpenClaw's) contain words like "edit files", "fix bugs", "check", "verify", "deploy", "make sure" — matching 3+ agentic keywords and triggering agentic mode (agenticScore ≥ 0.6) on every request, regardless of what the user actually asked.
In agentic mode, COMPLEX/REASONING tier routes to claude-sonnet-4.6, causing all queries to hit Sonnet.
Fix: agenticScore now only scores the user's prompt, not the system prompt. The system prompt describes how the assistant should behave — it should not influence whether the user is requesting a multi-step agentic task.
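A sketch of the corrected scoring, with an invented keyword list and weighting (the real `scoreAgenticTask` differs):

```typescript
const AGENTIC_KEYWORDS = ["deploy", "fix", "verify", "check", "make sure", "edit files"];

// v0.10.9: scores the user's text only. Previously the system prompt was
// concatenated in, so a coding assistant prompt alone contributed 3+
// keyword hits and tripped the agentic threshold on every request.
function scoreAgenticTask(userText: string): number {
  const t = userText.toLowerCase();
  const hits = AGENTIC_KEYWORDS.filter((k) => t.includes(k)).length;
  return Math.min(1, hits * 0.2);
}

const isAgentic = (userText: string) => scoreAgenticTask(userText) >= 0.6;
```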
Behavior change
| Scenario | Before | After |
|---|---|---|
| "What is React?" (coding system prompt) | agentic mode → Sonnet | standard routing → kimi/grok |
| "What does this function do?" (coding system prompt) | agentic mode → Sonnet | standard routing → kimi |
| "Fix the bug, deploy, make sure it works" | agentic mode ✓ | agentic mode ✓ (unchanged) |
| User explicitly requests multi-step task | agentic mode ✓ | agentic mode ✓ (unchanged) |
📋 Files Changed
| File | Change |
|---|---|
| `src/router/rules.ts` | `scoreAgenticTask` uses `userText` instead of combined text |
| `test/e2e.ts` | Add regression tests for coding system prompt agentic false trigger |
| `package.json` | Version bump 0.10.8 → 0.10.9 |
🔢 Stats
- Tests: 214 unit passed + 36 e2e passed, 0 failed
v0.10.9 — Fix x402 Payment Verification Failures
Release Date: 2026-02-24
🐛 Bug Fixes: x402 Payment Reliability
Three root causes behind intermittent "Payment verification failed" errors, all in ClawRouter's local pre-processing logic. The BlockRun server was working correctly throughout.
Fix 1: Payment failures no longer silently surface to users
ClawRouter checks errors against PROVIDER_ERROR_PATTERNS to decide whether to retry with a fallback model. "Payment verification failed", "model not allowed", and "unknown model" were missing from this list — ClawRouter was surfacing these errors directly instead of triggering the fallback chain.
Added patterns:
- `/payment.*verification.*failed/i`
- `/model.*not.*allowed/i`
- `/unknown.*model/i`
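The fallback decision reduces to a pattern scan over the upstream error message. A minimal sketch, showing only the three added patterns plus one invented pre-existing entry (the real `PROVIDER_ERROR_PATTERNS` list is longer):

```typescript
const PROVIDER_ERROR_PATTERNS: RegExp[] = [
  /rate.?limit/i, // illustrative pre-existing entry
  /payment.*verification.*failed/i,
  /model.*not.*allowed/i,
  /unknown.*model/i,
];

// Matching errors trigger the fallback chain instead of surfacing to the user.
const shouldFallback = (message: string) =>
  PROVIDER_ERROR_PATTERNS.some((p) => p.test(message));
```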
Fix 2: Pre-auth payment amount below CDP Facilitator minimum
ClawRouter pre-signs payment authorizations before sending requests (pre-auth optimization). The minimum signed amount was 100 micros ($0.0001), but CDP Facilitator enforces a minimum of 1000 micros ($0.001). Any request whose estimated cost fell below this threshold would be rejected by CDP with "signed_amount < required_amount", even with sufficient wallet balance.
Changed: `Math.max(100, ...)` → `Math.max(1000, ...)` in `estimateAmount()`
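In sketch form (the surrounding pricing math is not shown and the function body is illustrative):

```typescript
const CDP_MIN_MICROS = 1000; // $0.001 — the minimum CDP Facilitator enforces

function estimateAmount(estimatedCostMicros: number): number {
  // Was Math.max(100, ...): amounts between 100 and 999 micros were
  // rejected by CDP with "signed_amount < required_amount".
  return Math.max(CDP_MIN_MICROS, Math.ceil(estimatedCostMicros));
}
```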
Fix 3: blockrun/moonshot model alias missing
blockrun/moonshot would strip to bare "moonshot", which had no alias mapping, causing ClawRouter to forward the unresolved string to the server. The server correctly rejected an unrecognized model ID.
Added aliases: `moonshot` and `kimi-k2.5` → `moonshot/kimi-k2.5`
📋 Files Changed
| File | Change |
|---|---|
| `src/proxy.ts` | Add 3 missing patterns to `PROVIDER_ERROR_PATTERNS`; fix `estimateAmount` minimum |
| `src/models.ts` | Add `moonshot` and `kimi-k2.5` aliases |
| `package.json` | Version bump 0.10.8 → 0.10.9 |
🔢 Stats
- E2E Tests: 33 passed, 0 failed
- Live payment test: ✓ `moonshot/kimi-k2.5` via x402 on Base mainnet
v0.10.8 — Fix Partner Tool OpenClaw API Contract
Release Date: 2026-02-24
🐛 Bug Fix: Partner Tools Now Work Correctly in OpenClaw
Three API contract mismatches between ClawRouter and OpenClaw's tool execution interface, discovered and reported by a user.
Bug 1: `inputSchema` → `parameters`
OpenClaw expects the tool parameter schema under the key `parameters`, but ClawRouter was using `inputSchema`. The model had no way to know how to pass arguments.
Bug 2: `execute(args)` → `execute(toolCallId, params)`
OpenClaw calls `execute(toolCallId, params)` — the first argument is the tool call ID, the second is the actual parameters. ClawRouter's `execute(args)` was receiving `toolCallId` as the parameter object and sending it to the upstream API.
Bug 3: Return format
OpenClaw expects `{ content: [{ type: "text", text: "..." }], details: ... }`. ClawRouter was returning raw JSON, which OpenClaw couldn't display.
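All three fixes together, sketched as a single corrected tool definition. The `echo` tool and its schema are invented; only the contract shape is the point.

```typescript
interface ToolResult {
  content: { type: "text"; text: string }[];
  details?: unknown;
}

const echoTool = {
  name: "echo",
  // Bug 1: schema lives under `parameters`, not `inputSchema`
  parameters: {
    type: "object",
    properties: { message: { type: "string" } },
    required: ["message"],
  },
  // Bug 2: OpenClaw passes (toolCallId, params), not a single args object
  async execute(_toolCallId: string, params: { message: string }): Promise<ToolResult> {
    // Bug 3: return the content-array shape OpenClaw can display
    return { content: [{ type: "text", text: params.message }] };
  },
};
```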
📋 File Changed
| File | Change |
|---|---|
| `src/partners/tools.ts` | Fix all 3 OpenClaw tool API contract issues |
| `package.json` | Version bump 0.10.7 → 0.10.8 |
🔢 Stats
- Tests: 214 passed, 3 skipped, 0 failed