A TypeScript proxy server that implements the Claude API and forwards requests to configurable backend providers like Qwen, OpenAI, or any OpenAI-compatible API.
## Features

- Claude API Compatible: Implements the Anthropic Claude Messages API (`/v1/messages`)
- Multiple Providers: Supports Qwen (DashScope) and OpenAI out of the box
- Extensible: Easy to add new providers
- Streaming Support: Full SSE streaming with proper event transformation
- Tool Use: Supports function/tool calling
- Model Mapping: Automatically maps Claude model names to provider models
## Installation

```bash
npm install
```

Copy the example environment file and configure your providers:

```bash
cp .env.example .env
```

## Configuration

| Variable | Description | Default |
|---|---|---|
| `PORT` | Server port | `3000` |
| `HOST` | Server host | `0.0.0.0` |
| `DEFAULT_PROVIDER` | Default provider to use | `qwen` |
| `PROXY_API_KEY` | API key for authenticating requests (optional) | - |
| `QWEN_API_KEY` | Qwen/DashScope API key | - |
| `QWEN_BASE_URL` | Qwen API base URL | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
| `OPENAI_API_KEY` | OpenAI API key | - |
| `OPENAI_BASE_URL` | OpenAI API base URL | `https://api.openai.com/v1` |
| `MODEL_MAPPINGS` | JSON object mapping Claude models to provider models | See below |
| `LOG_LEVEL` | Logging level (`debug`, `info`, `warn`, `error`) | `info` |
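As a rough sketch of how these variables might be consumed (illustrative only — the names and defaults below mirror the table, but this is not the project's actual `src/utils/config.ts`):

```typescript
// Hypothetical config loader mirroring the table above (sketch, not the real src/utils/config.ts).
interface ProxyConfig {
  port: number;
  host: string;
  defaultProvider: string;
  modelMappings: Record<string, string>;
}

function loadConfig(env: Record<string, string | undefined>): ProxyConfig {
  return {
    port: Number(env.PORT ?? 3000),
    host: env.HOST ?? '0.0.0.0',
    defaultProvider: env.DEFAULT_PROVIDER ?? 'qwen',
    // MODEL_MAPPINGS is a JSON object, e.g. '{"claude-3-opus-20240229":"qwen-max"}'
    modelMappings: env.MODEL_MAPPINGS ? JSON.parse(env.MODEL_MAPPINGS) : {},
  };
}

// Defaults apply for any variable that is unset.
const config = loadConfig({ MODEL_MAPPINGS: '{"claude-3-opus-20240229":"qwen-max"}' });
```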
Default model mappings:

```json
{
  "claude-sonnet-4-5-20250514": "qwen-plus",
  "claude-sonnet-4-20250514": "qwen-plus",
  "claude-3-5-sonnet-20241022": "qwen-plus",
  "claude-3-5-haiku-20241022": "qwen-turbo",
  "claude-3-opus-20240229": "qwen-max",
  "claude-3-sonnet-20240229": "qwen-plus",
  "claude-3-haiku-20240307": "qwen-turbo"
}
```

## Running

Development:

```bash
npm run dev
```

Production:

```bash
npm run build
npm start
```

## Usage with Claude Code

Configure Claude Code to use this proxy:
```bash
export ANTHROPIC_BASE_URL=http://localhost:3000
export ANTHROPIC_API_KEY=your-proxy-api-key  # If PROXY_API_KEY is set
```

## API

### POST /v1/messages

Claude Messages API compatible endpoint.
Headers:
- `Content-Type: application/json`
- `x-api-key: <your-api-key>` (if `PROXY_API_KEY` is configured)
- `anthropic-version: 2023-06-01` (optional)
Request Body:
```json
{
  "model": "claude-3-5-sonnet-20241022",
  "max_tokens": 1024,
  "messages": [
    {"role": "user", "content": "Hello!"}
  ],
  "stream": true
}
```

### GET /health

Health check endpoint.
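To make the proxy's translation concrete, a Claude-style request like the `/v1/messages` body above maps onto an OpenAI-style chat completion roughly as follows. This is a simplified sketch, not the project's actual transformer (which lives in `src/transformers/request/` and also handles tool use and richer content blocks):

```typescript
// Simplified sketch of the Claude -> OpenAI request transformation (illustrative only).
interface ClaudeRequest {
  model: string;
  max_tokens: number;
  messages: { role: string; content: string }[];
  stream?: boolean;
  system?: string;
}

function toOpenAIRequest(req: ClaudeRequest, mappings: Record<string, string>) {
  // Claude carries the system prompt as a top-level field; OpenAI expects a system message.
  const messages = req.system
    ? [{ role: 'system', content: req.system }, ...req.messages]
    : req.messages;
  return {
    model: mappings[req.model] ?? req.model, // fall through if no mapping exists
    max_tokens: req.max_tokens,
    messages,
    stream: req.stream ?? false,
  };
}

const openaiReq = toOpenAIRequest(
  {
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'Hello!' }],
    stream: true,
  },
  { 'claude-3-5-sonnet-20241022': 'qwen-plus' },
);
```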
## Adding a New Provider

- Create a new provider class in `src/providers/`:

```typescript
import { BaseProvider } from './base.js';

export class MyProvider extends BaseProvider {
  readonly name = 'myprovider';

  async createCompletion(request) {
    // Implementation
  }

  async *createStreamingCompletion(request) {
    // Implementation
  }
}

export function createMyProvider(config) {
  return new MyProvider(config);
}
```

- Register the provider in `src/providers/index.ts`:
```typescript
import { createMyProvider } from './myprovider.js';

providerRegistry.register('myprovider', createMyProvider);
```

- Add configuration in `.env`:
```bash
MYPROVIDER_API_KEY=your-key
MYPROVIDER_BASE_URL=https://api.example.com/v1
```
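The registry used in the registration step can be pictured as a simple name-to-factory map. The sketch below is a minimal stand-in, not the actual `src/providers/registry.ts`:

```typescript
// Minimal provider-registry sketch (illustrative; see src/providers/registry.ts for the real one).
type ProviderFactory = (config: { apiKey: string; baseUrl?: string }) => { name: string };

class ProviderRegistry {
  private factories = new Map<string, ProviderFactory>();

  register(name: string, factory: ProviderFactory): void {
    this.factories.set(name, factory);
  }

  create(name: string, config: { apiKey: string; baseUrl?: string }) {
    const factory = this.factories.get(name);
    if (!factory) throw new Error(`Unknown provider: ${name}`);
    return factory(config);
  }
}

const registry = new ProviderRegistry();
registry.register('myprovider', (_config) => ({ name: 'myprovider' }));
const provider = registry.create('myprovider', { apiKey: 'test' });
```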
## Project Structure

```
src/
├── index.ts              # Entry point
├── server.ts             # Express server setup
├── types/                # TypeScript type definitions
│   ├── claude.ts         # Claude API types
│   ├── openai.ts         # OpenAI-compatible types
│   └── common.ts         # Shared types
├── providers/            # Provider implementations
│   ├── base.ts           # Base provider class
│   ├── registry.ts       # Provider registry
│   ├── qwen.ts           # Qwen provider
│   └── openai.ts         # OpenAI provider
├── transformers/         # Request/response transformers
│   ├── request/          # Claude → OpenAI
│   ├── response/         # OpenAI → Claude
│   └── streaming/        # Stream transformation
├── routes/               # Route handlers
│   └── messages.ts       # /v1/messages handler
├── middleware/           # Express middleware
│   ├── auth.ts           # Authentication
│   └── error-handler.ts  # Error handling
└── utils/                # Utilities
    ├── config.ts         # Configuration
    └── logger.ts         # Logging
```
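As an illustration of the stream transformation mentioned above, each OpenAI-style delta chunk has to become a Claude-style `content_block_delta` event. A minimal sketch (the real code in `src/transformers/streaming/` also emits message/content-block start and stop events and handles tool use):

```typescript
// Minimal sketch: turn OpenAI chat-completion chunks into Claude SSE delta events.
// (Illustrative only; see src/transformers/streaming/ for the full transformation.)
interface OpenAIChunk {
  choices: { delta: { content?: string } }[];
}

function* toClaudeDeltas(chunks: Iterable<OpenAIChunk>) {
  for (const chunk of chunks) {
    const text = chunk.choices[0]?.delta?.content;
    if (text) {
      yield {
        type: 'content_block_delta',
        index: 0,
        delta: { type: 'text_delta', text },
      };
    }
  }
}

// Chunks without content (e.g. the final finish_reason chunk) produce no event.
const events = [...toClaudeDeltas([
  { choices: [{ delta: { content: 'Hel' } }] },
  { choices: [{ delta: {} }] },
  { choices: [{ delta: { content: 'lo' } }] },
])];
```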
## License

MIT