
✅ Blazing fast with a tiny footprint (~45kb installed)
✅ Load balance across multiple models, providers, and keys
✅ Fallbacks make sure your app stays resilient
✅ Automatic retries with exponential backoff by default
✅ Plug-in middleware as needed
✅ Battle-tested over 100B tokens
Run using Docker directly:
docker run -d -p 8787:8787 portkeyai/gateway:latest
For more information on the Docker image, check here
Or run with Docker Compose, downloading the compose file first:
wget "[***]"
docker compose up -d
Let's try making a chat completions call to OpenAI through the AI gateway:
curl '127.0.0.1:8787/v1/chat/completions' \
  -H 'x-portkey-provider: openai' \
  -H "Authorization: Bearer $OPENAI_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"messages": [{"role": "user", "content": "Say this is a test."}], "max_tokens": 20, "model": "gpt-4"}'
Full list of supported SDKs
| Provider | Support | Stream | Supported Endpoints |
|---|---|---|---|
| OpenAI | ✅ | ✅ | /completions, /chat/completions, /embeddings, /assistants, /threads, /runs, /images/generations, /audio/* |
| Azure OpenAI | ✅ | ✅ | /completions, /chat/completions, /embeddings |
| Anyscale | ✅ | ✅ | /chat/completions |
| Google Gemini & PaLM | ✅ | ✅ | /generateMessage, /generateText, /embedText |
| Anthropic | ✅ | ✅ | /messages, /complete |
| Cohere | ✅ | ✅ | /generate, /embed, /rerank |
| Together AI | ✅ | ✅ | /chat/completions, /completions, /inference |
| Perplexity | ✅ | ✅ | /chat/completions |
| Mistral | ✅ | ✅ | /chat/completions, /embeddings |
| Nomic | ✅ | ✅ | /embeddings |
| AI21 | ✅ | ✅ | /complete, /chat, /embed |
| Stability AI | ✅ | ✅ | /generation/{engine_id}/text-to-image |
| DeepInfra | ✅ | ✅ | /inference |
| Ollama | ✅ | ✅ | /chat/completions |
View the complete list of 100+ supported models here
**Unified API Signature**: Connect with 100+ LLMs using OpenAI's API signature. The AI gateway handles the request, response, and error transformations so you don't have to make any changes to your code. You can use the OpenAI SDK itself to connect to any of the supported LLMs.
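To make the idea of request transformation concrete, here is an illustrative sketch (not Portkey's actual code) of mapping an OpenAI-style chat request onto an Anthropic `/messages`-style payload; the field names follow the two public APIs, but the function itself is a made-up example:

```python
# Illustrative only: translate an OpenAI-style chat request into an
# Anthropic /messages-shaped payload. A real gateway does this (and the
# reverse for responses) for every supported provider.
def to_anthropic(openai_req: dict) -> dict:
    # Anthropic takes the system prompt as a top-level field, not a message.
    system = [m["content"] for m in openai_req["messages"] if m["role"] == "system"]
    messages = [m for m in openai_req["messages"] if m["role"] != "system"]
    return {
        "model": openai_req["model"],
        "system": system[0] if system else None,
        "messages": messages,
        "max_tokens": openai_req.get("max_tokens", 1024),
    }

req = {"model": "claude-3-opus", "max_tokens": 20,
       "messages": [{"role": "system", "content": "Be brief."},
                    {"role": "user", "content": "Say this is a test."}]}
print(to_anthropic(req)["system"])  # -> Be brief.
```

Because the translation happens inside the gateway, your application only ever speaks the OpenAI dialect.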
**Fallback**: Don't let failures stop you. The fallback feature lets you specify a list of LLMs in priority order. If the primary LLM fails to respond or returns an error, Portkey automatically falls back to the next LLM in the list, keeping your application robust and reliable.
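The prioritized-list behavior can be sketched in a few lines; the provider names and `call` functions below are hypothetical stand-ins, not the gateway's API:

```python
# Minimal fallback sketch: try each provider in priority order and
# return the first successful response.
def with_fallback(providers, request):
    errors = []
    for name, call in providers:
        try:
            return name, call(request)
        except Exception as exc:  # a real gateway would filter for retryable errors
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

def flaky(req):
    raise TimeoutError("upstream timeout")

def healthy(req):
    return {"choices": [{"message": {"content": "ok"}}]}

name, resp = with_fallback([("openai", flaky), ("anthropic", healthy)], {})
print(name)  # -> anthropic
```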
**Automatic Retries**: Temporary issues shouldn't mean manual re-runs. The AI gateway can automatically retry failed requests up to 5 times, applying an exponential backoff strategy that spaces out retry attempts to prevent network overload.
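Exponential backoff means the wait doubles after each failure. The sketch below caps attempts at 5, as the gateway describes, but the base delay and jitter values are illustrative, not Portkey's exact parameters:

```python
import random
import time

# Retry with exponential backoff: wait base * 2^i seconds before attempt i+1,
# plus a little jitter so many clients don't retry in lockstep.
def retry(call, attempts=5, base=0.5):
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base * (2 ** i) + random.uniform(0, 0.1))

calls = {"n": 0}
def sometimes():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = retry(sometimes, base=0.01)
print(result)  # succeeds on the third attempt -> ok
```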
**Load Balancing**: Distribute load effectively across multiple API keys or providers based on custom weights. This ensures high availability and optimal performance of your generative AI apps, preventing any single LLM from becoming a bottleneck.
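Weighted routing is essentially a weighted random choice per request. The key names and weights below are made up for illustration:

```python
import random

# Pick a target key in proportion to its weight on every request.
targets = [("openai-key-1", 0.7), ("openai-key-2", 0.2), ("azure-key", 0.1)]

def pick(targets):
    names, weights = zip(*targets)
    return random.choices(names, weights=weights, k=1)[0]

# Over many requests the traffic approaches the configured split.
counts = {n: 0 for n, _ in targets}
for _ in range(10_000):
    counts[pick(targets)] += 1
print(counts)  # roughly a 70/20/10 split
```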
| Language | Supported SDKs |
|---|---|
| Node.js / JS / TS | Portkey SDK, OpenAI SDK, LangchainJS, LlamaIndex.TS |
| Python | Portkey SDK, OpenAI SDK, Langchain, LlamaIndex |
| Go | go-openai |
| Java | openai-java |
| Rust | async-openai |
| Ruby | ruby-openai |
Join our growing community around the world, for help, ideas, and discussions on AI.

