
**justlikemaki/aiclient-2-api** — A powerful proxy that unifies the client-only APIs of various large-model services (Gemini CLI, Qwen Code Plus, Kiro Claude...) into standard, local OpenAI-compatible interfaces that any application can call. Built on Node.js, it supports intelligent conversion between three major protocols (OpenAI, Claude, Gemini), enabling tools like Cherry-Studio, NextChat, and Cline to freely use advanced models such as Claude Sonnet 4.5, Gemini 2.5 Flash, and Qwen3 Coder Plus at scale. The project adopts a modular architecture based on the strategy and adapter patterns, with built-in account pool management, intelligent polling, automatic failover, and health check mechanisms, ensuring 99.9% service availability.
> [!NOTE]
> 🎉 **Important Milestone**
> - Thanks to Ruan Yifeng for the recommendation in Weekly Issue 359
## 📅 Version Update Log
- 2025.11.30 - Added Antigravity protocol support, enabling access to Gemini 3 Pro, Claude Sonnet 4.5, and other models via Google internal interfaces
- 2025.11.16 - Added Ollama protocol support, unified interface to access all supported models (Claude, Gemini, Qwen, OpenAI, etc.)
- 2025.11.11 - Added Web UI management console, supporting real-time configuration management and health status monitoring
- 2025.11.06 - Added support for Gemini 3 Preview, enhanced model compatibility and performance optimization
- 2025.10.18 - Kiro open registration, new accounts get 500 credits, full support for Claude Sonnet 4.5
- 2025.09.01 - Integrated Qwen Code CLI, added `qwen3-coder-plus` model support
- 2025.08.29 - Released account pool management feature, supporting multi-account polling, intelligent failover, and automatic degradation strategies
  - Configuration: add the `PROVIDER_POOLS_FILE_PATH` parameter in config.json
  - Reference configuration: provider_pools.json
To use each model family, choose the matching provider:
- Claude models: `claude-custom` or `claude-kiro-oauth` providers
- Gemini models: `gemini-cli-oauth` provider
- Qwen models: `openai-custom` or `openai-qwen-oauth` providers

This project supports multiple model providers through different protocols. The following is an overview of their relationships:
- OpenAI protocol: compatible with the `openai-custom`, `gemini-cli-oauth`, `claude-custom`, `claude-kiro-oauth`, `openai-qwen-oauth`, and `openaiResponses-custom` model providers.
- Claude protocol: compatible with the `claude-custom`, `claude-kiro-oauth`, `gemini-cli-oauth`, `openai-custom`, `openai-qwen-oauth`, and `openaiResponses-custom` model providers.
- Gemini protocol: compatible with the `gemini-cli-oauth` model provider.

Detailed relationship diagram:
```mermaid
graph TD
    subgraph Core_Protocols["Core Protocols"]
        P_OPENAI[OpenAI Protocol]
        P_GEMINI[Gemini Protocol]
        P_CLAUDE[Claude Protocol]
    end
    subgraph Supported_Model_Providers["Supported Model Providers"]
        MP_OPENAI[openai-custom]
        MP_GEMINI[gemini-cli-oauth]
        MP_CLAUDE_C[claude-custom]
        MP_CLAUDE_K[claude-kiro-oauth]
        MP_QWEN[openai-qwen-oauth]
        MP_OPENAI_RESP[openaiResponses-custom]
    end
    P_OPENAI ---|Support| MP_OPENAI
    P_OPENAI ---|Support| MP_QWEN
    P_OPENAI ---|Support| MP_GEMINI
    P_OPENAI ---|Support| MP_CLAUDE_C
    P_OPENAI ---|Support| MP_CLAUDE_K
    P_OPENAI ---|Support| MP_OPENAI_RESP
    P_GEMINI ---|Support| MP_GEMINI
    P_CLAUDE ---|Support| MP_CLAUDE_C
    P_CLAUDE ---|Support| MP_CLAUDE_K
    P_CLAUDE ---|Support| MP_GEMINI
    P_CLAUDE ---|Support| MP_OPENAI
    P_CLAUDE ---|Support| MP_QWEN
    P_CLAUDE ---|Support| MP_OPENAI_RESP
    style P_OPENAI fill:#f9f,stroke:#333,stroke-width:2px
    style P_GEMINI fill:#ccf,stroke:#333,stroke-width:2px
    style P_CLAUDE fill:#cfc,stroke:#333,stroke-width:2px
```
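The protocol-to-provider compatibility above can also be expressed as a plain lookup table. This is an illustrative sketch (the data mirrors the diagram; the object and function names are not part of the project):

```javascript
// Protocol → provider compatibility, mirroring the relationship diagram above.
// Names here are illustrative, not the project's internal API.
const PROTOCOL_PROVIDERS = {
  openai: [
    'openai-custom', 'openai-qwen-oauth', 'gemini-cli-oauth',
    'claude-custom', 'claude-kiro-oauth', 'openaiResponses-custom',
  ],
  gemini: ['gemini-cli-oauth'],
  claude: [
    'claude-custom', 'claude-kiro-oauth', 'gemini-cli-oauth',
    'openai-custom', 'openai-qwen-oauth', 'openaiResponses-custom',
  ],
};

// Return the providers reachable through a given client-facing protocol.
function providersFor(protocol) {
  return PROTOCOL_PROVIDERS[protocol] ?? [];
}

console.log(providersFor('gemini')); // → [ 'gemini-cli-oauth' ]
```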
The easiest way to get started with AIClient-2-API is to use our automated installation and startup scripts. We provide both Linux/macOS and Windows versions:
```bash
# Make the script executable and run it
chmod +x install-and-run.sh
./install-and-run.sh
```
```cmd
:: Run the batch file
install-and-run.bat
```
The install-and-run script automatically:
- Checks whether Node.js is installed
- Installs dependencies if `node_modules` doesn't exist
- Starts the server on http://localhost:3000

Sample output:

```text
========================================
 AI Client 2 API Quick Install Script
========================================
[Check] Checking if Node.js is installed...
✅ Node.js is installed, version: v20.10.0
✅ Found package.json file
✅ node_modules directory already exists
✅ Project file check completed
========================================
 Starting AI Client 2 API Server...
========================================
🌐 Server will start on http://localhost:3000
📖 Visit http://localhost:3000 to view management interface
⏹️ Press Ctrl+C to stop server
```
💡 Tip: The script will automatically install dependencies and start the server. If you encounter any issues, the script provides clear error messages and suggested solutions.
*(Screenshot: Web UI)*
A comprehensive web-based management interface offering:
📊 Dashboard: System overview, interactive routing examples, and client configuration guides
⚙️ Configuration: Real-time parameter modification for all providers (Gemini, OpenAI, Claude, Kiro, Qwen) with advanced settings and file upload support
🔗 Provider Pools: Monitor active connections, provider health statistics, and enable/disable providers
📁 Config Files: Centralized OAuth credential management with search, filtering, and file operations
📜 Logs: Real-time system and request logs with management controls
🔐 Login: Authentication required (default password: `admin123`; change it via the `pwd` file)

Access: http://localhost:3000 → Login → Sidebar navigation → changes take effect immediately
This project is fully compatible with Model Context Protocol (MCP), enabling seamless integration with MCP-supporting clients for powerful functional extensions.
Supports various input types including images and documents, providing richer interactive experiences and more powerful application scenarios.
Seamlessly supports the following latest large models, simply configure the corresponding OpenAI or Claude compatible interface in config.json:
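A hedged sketch of what such a config.json might contain is shown below. Only `PROVIDER_POOLS_FILE_PATH` is confirmed by this document; the other field names are hypothetical, modeled on the CLI flags listed later, and may differ from the project's actual schema:

```json
{
  "MODEL_PROVIDER": "claude-custom",
  "CLAUDE_API_KEY": "sk-ant-xxx",
  "CLAUDE_BASE_URL": "https://api.anthropic.com",
  "PROVIDER_POOLS_FILE_PATH": "./provider_pools.json"
}
```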
- Gemini CLI OAuth: the `oauth_creds.json` file will be automatically generated and saved to the `~/.gemini` directory; specify your Google Cloud project via `--project-id`
- Qwen OAuth: the `oauth_creds.json` file will be automatically generated and saved to the `~/.qwen` directory
- Recommended sampling parameters:

```json
{
  "temperature": 0,
  "top_p": 1
}
```
- Kiro OAuth: requires the `kiro-auth-token.json` credential file
- Account pools: set `PROVIDER_POOLS_FILE_PATH` in config.json to point to the pool configuration file, or use the `--provider-pools-file <path>` parameter to specify its path

This project provides two flexible model switching methods to meet different usage scenarios.
Achieve instant switching by specifying the provider identifier in the API request path:
| Route Path | Description | Use Case |
|---|---|---|
| `/claude-custom` | Use Claude API from config file | Official Claude API calls |
| `/claude-kiro-oauth` | Access Claude via Kiro OAuth | Free use of Claude Sonnet 4.5 |
| `/openai-custom` | Use OpenAI provider to handle requests | Standard OpenAI API calls |
| `/gemini-cli-oauth` | Access via Gemini CLI OAuth | Break through Gemini free-tier limits |
| `/openai-qwen-oauth` | Access via Qwen OAuth | Use Qwen Code Plus |
| `/openaiResponses-custom` | OpenAI Responses API | Structured dialogue scenarios |
| `/ollama` | Ollama API protocol | Unified access to all supported models |
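The routing scheme above can be sketched as a small helper that composes a request URL from the base address, the provider route, and the API path. This is an illustrative sketch, not the project's code; the function name and route list are taken from the table:

```javascript
// Compose a request URL from a provider route prefix and an API path.
// BASE assumes the default host and port from this README.
const BASE = 'http://localhost:3000';

const PROVIDER_ROUTES = [
  'claude-custom', 'claude-kiro-oauth', 'openai-custom', 'gemini-cli-oauth',
  'openai-qwen-oauth', 'openaiResponses-custom', 'ollama',
];

function buildUrl(provider, apiPath) {
  if (!PROVIDER_ROUTES.includes(provider)) {
    throw new Error(`Unknown provider route: ${provider}`);
  }
  return `${BASE}/${provider}${apiPath}`;
}

console.log(buildUrl('gemini-cli-oauth', '/v1/chat/completions'));
// → http://localhost:3000/gemini-cli-oauth/v1/chat/completions
```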
Usage Examples:
```bash
# Configure in programming agents like Cline, Kilo
API_ENDPOINT=http://localhost:3000/claude-kiro-oauth

# Direct API call
curl http://localhost:3000/gemini-cli-oauth/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"gemini-2.0-flash-exp","messages":[...]}'
```
This project supports the Ollama protocol, allowing access to all supported models through a unified interface. The Ollama endpoint provides standard interfaces such as /api/tags, /api/chat, /api/generate, etc.
Ollama API Call Examples:
```bash
curl http://localhost:3000/ollama/api/tags
```
```bash
curl http://localhost:3000/ollama/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "[Claude] claude-sonnet-4.5",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
Model name prefixes indicate the backing service:
- `[Kiro]` - Access Claude models using Kiro API
- `[Claude]` - Use official Claude API
- `[Gemini CLI]` - Access via Gemini CLI OAuth
- `[OpenAI]` - Use official OpenAI API
- `[Qwen CLI]` - Access via Qwen OAuth

Default storage locations for the authorization credential files of each service:
| Service | Default Path | Description |
|---|---|---|
| Gemini | ~/.gemini/oauth_creds.json | OAuth authentication credentials |
| Kiro | ~/.aws/sso/cache/kiro-auth-token.json | Kiro authentication token |
| Qwen | ~/.qwen/oauth_creds.json | Qwen OAuth credentials |
| Antigravity | ~/.antigravity/oauth_creds.json | Antigravity OAuth credentials |
Note:
- `~` represents the user home directory (Windows: `C:\Users\username`, Linux/macOS: `/home/username` or `/Users/username`)
- Custom path: a custom storage location can be specified via the relevant parameters in the configuration file or environment variables
This project supports rich command-line configuration, allowing flexible adjustment of service behavior as needed. The following is a detailed explanation of all startup parameters, grouped by function:
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--host` | string | localhost | Server listening address |
| `--port` | number | 3000 | Server listening port |
| `--api-key` | string | *** | API key for authentication |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--model-provider` | string | gemini-cli-oauth | AI model provider, optional values: openai-custom, claude-custom, gemini-cli-oauth, claude-kiro-oauth, openai-qwen-oauth, openaiResponses-custom, gemini-antigravity |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--openai-api-key` | string | null | OpenAI API key (required when model-provider is openai-custom) |
| `--openai-base-url` | string | null | OpenAI API base URL (required when model-provider is openai-custom) |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--claude-api-key` | string | null | Claude API key (required when model-provider is claude-custom) |
| `--claude-base-url` | string | null | Claude API base URL (required when model-provider is claude-custom) |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--gemini-oauth-creds-base64` | string | null | Base64 string of Gemini OAuth credentials (optional when model-provider is gemini-cli-oauth; choose one of this and `--gemini-oauth-creds-file`) |
| `--gemini-oauth-creds-file` | string | null | Gemini OAuth credentials JSON file path (optional when model-provider is gemini-cli-oauth; choose one of this and `--gemini-oauth-creds-base64`) |
| `--project-id` | string | null | Google Cloud project ID (required when model-provider is gemini-cli-oauth) |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--kiro-oauth-creds-base64` | string | null | Base64 string of Kiro OAuth credentials (optional when model-provider is claude-kiro-oauth; choose one of this and `--kiro-oauth-creds-file`) |
| `--kiro-oauth-creds-file` | string | null | Kiro OAuth credentials JSON file path (optional when model-provider is claude-kiro-oauth; choose one of this and `--kiro-oauth-creds-base64`) |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--qwen-oauth-creds-file` | string | null | Qwen OAuth credentials JSON file path (required when model-provider is openai-qwen-oauth) |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--antigravity-oauth-creds-file` | string | null | Antigravity OAuth credentials JSON file path (optional when model-provider is gemini-antigravity) |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--model-provider` | string | openaiResponses-custom | Model provider; set to openaiResponses-custom when using the OpenAI Responses API |
| `--openai-api-key` | string | null | OpenAI API key (required when model-provider is openaiResponses-custom) |
| `--openai-base-url` | string | null | OpenAI API base URL (required when model-provider is openaiResponses-custom) |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--system-prompt-file` | string | input_system_prompt.txt | System prompt file path |
| `--system-prompt-mode` | string | overwrite | System prompt mode, optional values: overwrite (override), append (append) |
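The two `--system-prompt-mode` behaviors can be sketched as follows. This is an illustrative sketch only (the function name is hypothetical, not the project's API); it shows the difference between replacing the client's system prompt and extending it:

```javascript
// Sketch of the two system prompt modes described in the table above.
// "overwrite" replaces the client-supplied prompt with the file's content;
// "append" keeps the client prompt and adds the file's content after it.
function applySystemPrompt(clientPrompt, filePrompt, mode) {
  if (mode === 'overwrite') return filePrompt;
  if (mode === 'append') return `${clientPrompt}\n${filePrompt}`;
  throw new Error(`Unknown system prompt mode: ${mode}`);
}

console.log(applySystemPrompt('You are helpful.', 'Answer in English.', 'append'));
// → You are helpful.
//   Answer in English.
```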
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--log-prompts` | string | none | Prompt log mode, optional values: console (console), file (file), none (none) |
| `--prompt-log-base-name` | string | prompt_log | Prompt log file base name |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--request-max-retries` | number | 3 | Maximum number of automatic retries when API requests fail |
| `--request-base-delay` | number | 1000 | Base delay (milliseconds) between automatic retries; the delay increases after each retry |
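The retry schedule implied by `--request-max-retries` and `--request-base-delay` can be sketched as below. Note this is an assumption: the README only says the delay increases after each retry, and exponential doubling is used here purely as an illustration:

```javascript
// Sketch of an increasing retry-delay schedule. The doubling factor is an
// assumption for illustration; the project's actual growth curve may differ.
function retryDelays(maxRetries = 3, baseDelayMs = 1000) {
  const delays = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    delays.push(baseDelayMs * 2 ** attempt);
  }
  return delays;
}

console.log(retryDelays()); // → [ 1000, 2000, 4000 ]
```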
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--cron-near-minutes` | number | 15 | Interval (minutes) for the scheduled OAuth token refresh task |
| `--cron-refresh-token` | boolean | true | Whether to enable the automatic OAuth token refresh task |
| Parameter | Type | Default Value | Description |
|---|---|---|---|
| `--provider-pools-file` | string | null | Provider account pool configuration file path |
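For orientation, a hypothetical sketch of what a provider_pools.json might contain follows. The structure (provider names as keys, each mapping to a list of accounts) and every field name here are assumptions for illustration only; consult the project's reference provider_pools.json for the real schema:

```json
{
  "gemini-cli-oauth": [
    { "GEMINI_OAUTH_CREDS_FILE_PATH": "~/.gemini/oauth_creds.json", "PROJECT_ID": "my-gcp-project" }
  ],
  "claude-kiro-oauth": [
    { "KIRO_OAUTH_CREDS_FILE_PATH": "~/.aws/sso/cache/kiro-auth-token.json" }
  ]
}
```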
```bash
# Basic usage
node src/api-server.js

# Specify port and API key
node src/api-server.js --port 8080 --api-key my-secret-key

# Use OpenAI provider
node src/api-server.js --model-provider openai-custom --openai-api-key sk-xxx --openai-base-url [***]

# Use Claude provider
node src/api-server.js --model-provider claude-custom --claude-api-key sk-ant-xxx --claude-base-url [***]

# Use OpenAI Responses API provider
node src/api-server.js --model-provider openaiResponses-custom --openai-api-key sk-xxx --openai-base-url [***]

# Use Gemini provider (Base64 credentials)
node src/api-server.js --model-provider gemini-cli-oauth --gemini-oauth-creds-base64 eyJ0eXBlIjoi... --project-id your-project-id

# Use Gemini provider (credentials file)
node src/api-server.js --model-provider gemini-cli-oauth --gemini-oauth-creds-file /path/to/credentials.json --project-id your-project-id

# Configure system prompt
node src/api-server.js --system-prompt-file custom-prompt.txt --system-prompt-mode append

# Configure logging
node src/api-server.js --log-prompts console
node src/api-server.js --log-prompts file --prompt-log-base-name my-logs

# Configure account pool
node src/api-server.js --provider-pools-file ./provider_pools.json

# Complete example
node src/api-server.js \
  --host 0.0.0.0 \
  --port 3000 \
  --api-key my-secret-key \
  --model-provider gemini-cli-oauth \
  --project-id my-gcp-project \
  --gemini-oauth-creds-file ./credentials.json \
  --system-prompt-file ./custom-system-prompt.txt \
  --system-prompt-mode overwrite \
  --log-prompts file \
  --prompt-log-base-name api-logs \
  --provider-pools-file ./provider_pools.json
```
This project operates under the GNU General Public License v3 (GPLv3). For complete details, please refer to the LICENSE file located in the root directory.
The development of this project was significantly inspired by the official Google Gemini CLI and incorporated some code implementations from Cline 3.18.0's gemini-cli.ts. We extend our sincere gratitude to the official Google team and the Cline development team for their exceptional work!
Thanks to all the developers who contributed to the AIClient-2-API project:
![Contributors]([***])

![Star History Chart]([***])