
LiteLLM is an AI Gateway (Proxy Server) that lets you call any LLM provider using a consistent OpenAI-compatible API format. Whether you're using OpenAI, Anthropic, Azure, Bedrock, Vertex AI, or any other provider, LiteLLM translates your requests and provides consistent responses.

The following Docker image tags are available:
| Tag | Description |
|---|---|
| `main-stable` | Latest stable release (recommended for production) |
| `main-latest` | Latest development build |
| `v1.x.x-stable` | Specific stable version |
| `v1.x.x` | Specific version (may include pre-releases) |
**Recommended:** Use `main-stable` or version-specific `-stable` tags for production deployments.
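For example, to pull the recommended stable image:

```bash
# Pull the latest stable release (recommended for production)
docker pull berriai/litellm:main-stable

# Or pin a specific stable release (replace v1.x.x with a real version)
docker pull berriai/litellm:v1.x.x-stable
```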
Run the proxy with a single model:

```bash
docker run -p 4000:4000 \
  -e OPENAI_API_KEY=your-openai-key \
  berriai/litellm:main-stable \
  --model gpt-4o
```

### With Configuration File
```bash
docker run -p 4000:4000 \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -e DATABASE_URL=postgresql://user:pass@host:5432/litellm \
  berriai/litellm:main-stable \
  --config /app/config.yaml
```
### Docker Compose

```yaml
services:
  litellm:
    image: berriai/litellm:main-stable
    ports:
      - "4000:4000"
    environment:
      DATABASE_URL: postgresql://llmproxy:password@db:5432/litellm
      STORE_MODEL_IN_DB: "True"
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_DB: litellm
      POSTGRES_USER: llmproxy
      POSTGRES_PASSWORD: password
```
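Assuming the file above is saved as `docker-compose.yml`, a minimal way to bring the stack up and watch the proxy start:

```bash
# Start the proxy and its Postgres database in the background
docker compose up -d

# Tail the proxy logs to confirm it is serving on port 4000
docker compose logs -f litellm
```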
## 🔧 Configuration

### Environment Variables

| Variable | Description | Required |
|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | Yes (for production) |
| `LITELLM_MASTER_KEY` | Master key for admin operations | Yes |
| `OPENAI_API_KEY` | OpenAI API key | No (if using other providers) |
| `STORE_MODEL_IN_DB` | Enable model management via UI | No |
| `LITELLM_LOG` | Log level (`ERROR`, `INFO`, `DEBUG`) | No |
| `LITELLM_MODE` | Set to `PRODUCTION` for production | No |
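As a sketch of how these fit together when starting the container directly (the master key and connection string below are placeholders, not working values):

```bash
docker run -p 4000:4000 \
  -e LITELLM_MASTER_KEY=sk-replace-me \
  -e DATABASE_URL=postgresql://llmproxy:password@db:5432/litellm \
  -e LITELLM_LOG=INFO \
  berriai/litellm:main-stable
```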
### CLI Arguments

```bash
docker run berriai/litellm:main-stable \
  --port 4000 \
  --config /app/config.yaml \
  --num_workers 4 \
  --run_gunicorn \
  --max_requests_before_restart <max-requests>
```

- `--port`: Server port (default: 4000)
- `--config`: Path to config file
- `--num_workers`: Number of worker processes
- `--run_gunicorn`: Use Gunicorn instead of Uvicorn
- `--max_requests_before_restart`: Recycle workers after this many requests

### Example Config File
Create `config.yaml`:
```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
  database_connection_pool_limit: 10
  proxy_batch_write_at: 60

router_settings:
  redis_host: os.environ/REDIS_HOST
  redis_port: os.environ/REDIS_PORT
  redis_password: os.environ/REDIS_PASSWORD

litellm_settings:
  cache: true
  cache_params:
    type: redis
    host: os.environ/REDIS_HOST
    port: os.environ/REDIS_PORT
    password: os.environ/REDIS_PASSWORD
```
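Once the proxy is running with this config, you can sanity-check it from the host. A quick sketch, assuming the proxy listens on `localhost:4000` and `LITELLM_MASTER_KEY` is exported in your shell:

```bash
# List the models the proxy currently serves (OpenAI-compatible endpoint)
curl http://localhost:4000/v1/models \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY"
```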
## 📚 Key Features

Call any LLM provider using OpenAI's format:
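For example, a raw HTTP request against the proxy's OpenAI-compatible chat completions endpoint (a sketch; the model name must match one defined in your config, and the key can be the master key or a virtual key):

```bash
curl http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```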
Access the web UI at http://localhost:4000/ui to create virtual API keys and manage models. Clients can then call the proxy with a virtual key:
```python
import openai

client = openai.OpenAI(
    api_key="your-virtual-key",  # Created via the admin UI
    base_url="http://localhost:4000"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
```

## 🛠️ Production Deployment
For production deployments, see our Production Guide.
See Benchmarks for detailed performance metrics.
LiteLLM is licensed under the MIT License. See https://github.com/BerriAI/litellm/blob/main/LICENSE for details.
Made with ❤️ by BerriAI