
Claude Code in a Docker container. No host installs. No permission nightmares. Just vibes and --dangerously-skip-permissions. Use it as a CLI, HTTP API, OpenAI-compatible endpoint, MCP server, or *** bot.
Four modes, five interfaces:

- `claude` CLI replacement — persistent container, picks up where you left off
- chat/completions endpoint for LiteLLM, OpenAI SDKs, and anything that speaks OpenAI
- custom binaries (`~/.claude/bin`)
- init hooks (`~/.claude/init.d`)
- skills (`~/.claude/.always-skills`)

Because installing things natively is for people who enjoy ***.
This image exists so you can run Claude Code in a fully isolated container with every tool known to humankind pre-installed, passwordless sudo, docker-in-docker, and zero concern for your host system's wellbeing. It's like giving an AI a padded room with power tools.
Pick your poison:
latest (full) — the kitchen sink

Everything pre-installed. Go, Python, Node, C/C++, Terraform, kubectl, database clients, linters, formatters, the works. Big image, zero wait time. Claude wakes up and gets to work immediately.

```bash
curl -fsSL https://raw.githubusercontent.com/psyb0t/docker-claude-code/master/install.sh | bash
```
latest-minimal — diet mode

Just enough to run Claude: Ubuntu, git, curl, Node.js, Docker. Claude has passwordless sudo, so it'll install whatever it needs on the fly. Smaller pull, but the first run takes longer while Claude figures out its life choices.

```bash
CLAUDE_MINIMAL=1 curl -fsSL https://raw.githubusercontent.com/psyb0t/docker-claude-code/master/install.sh | bash
```
Pro tip: use ~/.claude/init.d/*.sh hooks to pre-install your tools on first container create instead of waiting for Claude to apt-get its way through life.
|  | latest (full) | latest-minimal |
|---|---|---|
| Ubuntu 22.04 | yes | yes |
| git, curl, wget, jq | yes | yes |
| Node.js LTS + npm | yes | yes |
| Docker CE + Compose | yes | yes |
| Claude Code CLI | yes | yes |
| Go 1.26.1 + tools | yes | - |
| Python 3.12.11 + tools | yes | - |
| Node.js dev tools | yes | - |
| C/C++ tools | yes | - |
| DevOps (terraform, kubectl, helm, gh) | yes | - |
| Database clients | yes | - |
| Shell utilities (ripgrep, bat, etc.) | yes | - |
The full image is a buffet of dev tools. Here's what Claude gets to play with:
Languages & runtimes:
DevOps & infra:
Databases:
Shell & system:
Magic under the hood:
- CLAUDE.md in workspace listing all available tools (so Claude knows what it has)
- opt-in auto-update (`--update`)
- custom binaries in `~/.claude/bin` (in PATH automatically)
- init hooks in `~/.claude/init.d/*.sh` (run once on first container create)
- session handling: `--continue` / `--no-continue` / `--resume <session_id>`
- debug logging (`DEBUG=true`) with timestamps everywhere

```bash
# full image (recommended)
curl -fsSL https://raw.githubusercontent.com/psyb0t/docker-claude-code/master/install.sh | bash

# minimal image
CLAUDE_MINIMAL=1 curl -fsSL https://raw.githubusercontent.com/psyb0t/docker-claude-code/master/install.sh | bash

# custom binary name (if you already have a native `claude` install)
curl -fsSL https://raw.githubusercontent.com/psyb0t/docker-claude-code/master/install.sh | bash -s -- dclaude
# or: CLAUDE_BIN_NAME=dclaude curl -fsSL .../install.sh | bash
```
If you don't trust piping scripts to bash (understandable):
```bash
# 1. create dirs
mkdir -p ~/.claude
mkdir -p "$HOME/.ssh/claude-code"

# 2. generate SSH keys (for git push/pull inside the container)
ssh-keygen -t ed25519 -C "claude@claude.ai" -f "$HOME/.ssh/claude-code/id_ed25519" -N ""
# then add the pubkey to GitHub/GitLab/wherever

# 3. pull
docker pull psyb0t/claude-code:latest
# or: docker pull psyb0t/claude-code:latest-minimal

# 4. check install.sh for how the wrapper script works and wire it up yourself
```
Set these on your host (e.g. in ~/.bashrc). They apply to all modes — the wrapper forwards them to the container.
| Variable | What it does | Default |
|---|---|---|
| `ANTHROPIC_API_KEY` | API key for authentication | (none) |
| `CLAUDE_CODE_OAUTH_TOKEN` | OAuth token for authentication | (none) |
| `CLAUDE_GIT_NAME` | Git commit name inside the container | (none) |
| `CLAUDE_GIT_EMAIL` | Git commit email inside the container | (none) |
| `CLAUDE_DATA_DIR` | Custom .claude data directory | ~/.claude |
| `CLAUDE_SSH_DIR` | Custom SSH key directory | ~/.ssh/claude-code |
| `CLAUDE_INSTALL_DIR` | Custom install path for the wrapper (install-time only) | /usr/local/bin |
| `CLAUDE_BIN_NAME` | Custom binary name (install-time only) | claude |
| `CLAUDE_ENV_*` | Forward custom env vars (prefix is stripped: CLAUDE_ENV_FOO=bar → FOO=bar) | (none) |
| `CLAUDE_MOUNT_*` | Mount extra volumes (path = same in container, or src:dest) | (none) |
| `DEBUG` | Enable debug logging with timestamps | (none) |
Authentication
Either log in interactively or set up a token:
```bash
# one-time interactive OAuth setup
claude setup-token

# then use the token for programmatic/headless runs
CLAUDE_CODE_OAUTH_TOKEN=sk-ant-oat01-xxx claude "do stuff"

# or just use an API key
ANTHROPIC_API_KEY=sk-ant-api03-xxx claude "do stuff"
```
Forwarding env vars
The CLAUDE_ENV_ prefix lets you inject arbitrary env vars into the container. The prefix gets stripped:
```bash
# inside the container: GITHUB_TOKEN=xxx, MY_VAR=hello
CLAUDE_ENV_GITHUB_TOKEN=xxx CLAUDE_ENV_MY_VAR=hello claude "do stuff"
```
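The prefix-stripping rule is easy to reason about; a minimal Python sketch of the behavior (illustrative only — the actual logic lives in the wrapper script):

```python
# Illustrative sketch of the CLAUDE_ENV_ forwarding rule: keep only
# prefixed variables and strip the prefix before injecting them.
def forwarded_env(host_env: dict) -> dict:
    prefix = "CLAUDE_ENV_"
    return {
        name[len(prefix):]: value
        for name, value in host_env.items()
        if name.startswith(prefix)
    }

print(forwarded_env({
    "CLAUDE_ENV_GITHUB_TOKEN": "xxx",
    "CLAUDE_ENV_MY_VAR": "hello",
    "PATH": "/usr/bin",  # not forwarded: no prefix
}))
# → {'GITHUB_TOKEN': 'xxx', 'MY_VAR': 'hello'}
```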
Extra volume mounts
The CLAUDE_MOUNT_ prefix mounts additional directories:
```bash
CLAUDE_MOUNT_DATA=/data claude "process the data"                # same path inside container
CLAUDE_MOUNT_1=/opt/configs CLAUDE_MOUNT_2=/var/logs claude "go" # mount multiple
CLAUDE_MOUNT_STUFF=/host/path:/container/path claude "do stuff"  # explicit mapping
CLAUDE_MOUNT_RO=/data:/data:ro claude "read the data"            # read-only
```
If the value contains `:`, it's used as-is (docker `-v` syntax). Otherwise, the same path is mounted on both sides.
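That resolution rule can be sketched in a few lines (illustrative, not the wrapper's actual code):

```python
# Illustrative sketch of how a CLAUDE_MOUNT_ value becomes a
# docker -v spec: values containing ':' pass through unchanged,
# bare paths get mirrored to the same path in the container.
def mount_spec(value: str) -> str:
    return value if ":" in value else f"{value}:{value}"

print(mount_spec("/data"))                      # → /data:/data
print(mount_spec("/host/path:/container/path")) # → /host/path:/container/path
print(mount_spec("/data:/data:ro"))             # → /data:/data:ro
```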
```bash
claude
```
Just like the native CLI but in a container. The container persists between runs — --continue resumes your last conversation automatically.
```bash
claude --update        # opt in to auto-update on this run
claude --no-continue   # start fresh (skip auto-resume of last conversation)
```
Some claude commands are passed through directly:
```bash
claude --version       # show claude version
claude -v              # same thing
claude doctor          # health check
claude auth            # manage authentication
claude setup-token     # interactive OAuth token setup
claude stop            # stop the running interactive container for this workspace
claude clear-session   # delete session history for this workspace (next run starts fresh)
```
Pass a prompt and get a response. -p is added automatically. No TTY, works from scripts, cron, CI, whatever.
```bash
claude "explain this codebase"                                       # plain text (default)
claude "explain this codebase" --output-format json                  # JSON response
claude "list all TODOs" --output-format json-verbose | jq .          # JSON with full tool call history
claude "list all TODOs" --output-format stream-json | jq .           # streaming NDJSON
claude "explain this codebase" --model opus                          # pick your model
claude "review this" --system-prompt "You are a security auditor"    # custom system prompt
claude "review this" --append-system-prompt "Focus on SQL injection" # append to default
claude "debug this" --effort max                                     # go hard
claude "quick question" --effort low                                 # go fast
claude "start over" --no-continue                                    # fresh session
claude "keep going" --resume abc123-def456                           # resume specific session

# structured output with JSON schema
claude "extract the author and title" --output-format json \
  --json-schema '{"type":"object","properties":{"author":{"type":"string"},"title":{"type":"string"}},"required":["author","title"]}'
```
--continue is passed automatically so successive programmatic runs share conversation context. Use --no-continue to start fresh or --resume <session_id> to continue a specific conversation.
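If you call programmatic mode from other scripts, composing these flags is the only real work; here's a hypothetical argv-builder helper (the wrapper adds `-p` and `--continue` itself, so they're omitted):

```python
# Hypothetical argv builder for the containerized claude wrapper;
# pass the result to subprocess.run(...) in real use.
def build_claude_argv(prompt, model=None, output_format="json",
                      no_continue=False, resume=None):
    argv = ["claude", prompt, "--output-format", output_format]
    if model:
        argv += ["--model", model]    # alias or full model name
    if no_continue:
        argv.append("--no-continue")  # start a fresh session
    if resume:
        argv += ["--resume", resume]  # continue a specific session
    return argv

print(build_claude_argv("list all TODOs", model="opus", no_continue=True))
# → ['claude', 'list all TODOs', '--output-format', 'json', '--model', 'opus', '--no-continue']
```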
Model selection
| Alias | Model | Best for |
|---|---|---|
| `opus` | Claude Opus 4.6 | Complex reasoning, architecture, hard debugging |
| `sonnet` | Claude Sonnet 4.6 | Daily coding, balanced speed/intelligence |
| `haiku` | Claude Haiku 4.5 | Quick lookups, simple tasks, high volume |
| `opusplan` | Opus (planning) + Sonnet (execution) | Best of both worlds |
| `sonnet[1m]` | Sonnet with 1M context | Long sessions, huge codebases |
You can also pin specific versions with full model names (claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5-20251001, etc.). If not specified, defaults based on your account type.
Output formats
text (default) — plain text response.
json — single JSON object (all keys normalized to camelCase):
```json
{
  "type": "result",
  "subtype": "success",
  "isError": false,
  "result": "the response text",
  "numTurns": 1,
  "durationMs": 3100,
  "totalCostUsd": 0.156,
  "sessionId": "...",
  "usage": { "inputTokens": 3, "outputTokens": 4, "cacheReadInputTokens": 512 },
  "modelUsage": {
    "glm-5.1": {
      "inputTokens": 15702,
      "outputTokens": 28,
      "cacheReadInputTokens": 6836,
      "costUsd": 0.0826,
      "contextWindow": 200000,
      "maxOutputTokens": 32000
    }
  },
  "permissionDenials": [],
  "iterations": []
}
```
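Since all keys are normalized to camelCase, scripted callers can parse the object directly; a small sketch using a canned response in the shape above (so it runs without invoking claude):

```python
import json

# Canned response matching the documented shape.
raw = """{"type": "result", "subtype": "success", "isError": false,
          "result": "the response text", "numTurns": 1,
          "durationMs": 3100, "totalCostUsd": 0.156}"""

resp = json.loads(raw)
if resp["isError"]:
    raise RuntimeError(resp["result"])

print(resp["result"])
print(f"cost: ${resp['totalCostUsd']:.3f} over {resp['numTurns']} turn(s)")
# → cost: $0.156 over 1 turn(s)
```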
json-verbose — single JSON object like json, but with a turns array showing every tool call, tool result, and assistant message. Under the hood it runs stream-json and assembles the events into one response. Best of both worlds — one object to parse, full visibility into what Claude did:
```json
{
  "type": "result",
  "subtype": "success",
  "result": "The hostname is mothership.",
  "turns": [
    {
      "role": "assistant",
      "content": [
        { "type": "tool_use", "id": "toolu_abc", "name": "Bash", "input": { "command": "hostname" } }
      ]
    },
    {
      "role": "tool_result",
      "content": [
        { "type": "toolResult", "toolUseId": "toolu_abc", "isError": false, "content": "mothership" }
      ]
    },
    {
      "role": "assistant",
      "content": [{ "type": "text", "text": "The hostname is mothership." }]
    }
  ],
  "system": {
    "sessionId": "...",
    "model": "claude-opus-4-6",
    "cwd": "/workspace",
    "tools": ["Bash", "Read", ...]
  },
  "numTurns": 2,
  "durationMs": 10600,
  "totalCostUsd": 0.049,
  "sessionId": "..."
}
```
stream-json — NDJSON stream, one event per line. All keys normalized to camelCase. Event types: system (init), assistant (text/tool_use), user (tool results), rateLimitEvent, result (final summary with cost). A typical multi-step run: system → (assistant → user) × N → result.
system — session init:
```json
{
  "type": "system",
  "subtype": "init",
  "cwd": "/your/project",
  "sessionId": "...",
  "tools": ["Bash", "Read", "Write", "Glob", "Grep"],
  "model": "claude-opus-4-6",
  "permissionMode": "bypassPermissions"
}
```
assistant — Claude's response (text or tool_use):
```json
{
  "type": "assistant",
  "message": {
    "model": "claude-opus-4-6",
    "role": "assistant",
    "content": [{ "type": "text", "text": "I'll install cowsay first." }],
    "usage": { "inputTokens": 3, "outputTokens": 2 }
  }
}
```
```json
{
  "type": "assistant",
  "message": {
    "content": [
      { "type": "tool_use", "id": "toolu_abc123", "name": "Bash", "input": { "command": "sudo apt-get install -y cowsay" } }
    ]
  }
}
```
user — tool execution result:
```json
{
  "type": "user",
  "message": {
    "content": [
      { "toolUseId": "toolu_abc123", "type": "toolResult", "content": "Setting up cowsay (3.03+dfsg2-8) ...", "isError": false }
    ]
  }
}
```
result — final summary:
```json
{
  "type": "result",
  "subtype": "success",
  "isError": false,
  "numTurns": 10,
  "durationMs": 60360,
  "totalCostUsd": 0.203,
  "result": "Here's what I did:\n1. Installed cowsay..."
}
```
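A stream-json consumer just reads one JSON object per line and dispatches on type. A sketch fed with canned events in the shapes above (in real use you'd read these from claude's stdout):

```python
import json

# Canned NDJSON events in the documented shapes.
stream = """\
{"type": "system", "subtype": "init", "sessionId": "s1"}
{"type": "assistant", "message": {"content": [{"type": "tool_use", "name": "Bash", "input": {"command": "hostname"}}]}}
{"type": "user", "message": {"content": [{"type": "toolResult", "content": "mothership", "isError": false}]}}
{"type": "result", "subtype": "success", "result": "The hostname is mothership.", "totalCostUsd": 0.049}"""

tool_calls, final = [], None
for line in stream.splitlines():
    event = json.loads(line)
    if event["type"] == "assistant":
        # assistant content blocks are text or tool_use
        for block in event["message"]["content"]:
            if block["type"] == "tool_use":
                tool_calls.append((block["name"], block["input"]))
    elif event["type"] == "result":
        final = event["result"]

print(tool_calls)  # [('Bash', {'command': 'hostname'})]
print(final)       # The hostname is mothership.
```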
Turn the container into an HTTP API server. Useful for integrating Claude into your services.
```yaml
# docker-compose.yml
services:
  claude:
    image: psyb0t/claude-code:latest
    ports:
      - "8080:8080"
    environment:
      - CLAUDE_MODE_API=1
      - CLAUDE_MODE_API_TOKEN=your-secret-token
      - CLAUDE_CODE_OAUTH_TOKEN=sk-ant-oat01-xxx
    volumes:
      - ~/.claude:/home/claude/.claude
      - /your/projects:/workspaces
      - /var/run/docker.sock:/var/run/docker.sock
```
Env vars
| Variable | What it does | Default |
|---|---|---|
| `CLAUDE_MODE_API` | Set to 1 to run as an HTTP API server instead of interactive/programmatic | (none) |
| `CLAUDE_MODE_API_PORT` | Port for the API server | 8080 |
| `CLAUDE_MODE_API_TOKEN` | Bearer token for API auth (optional) | (none) |
Endpoints
POST /run — send a prompt, get JSON back:
```bash
curl -X POST http://localhost:8080/run \
  -H "Authorization: Bearer your-secret-token" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "what does this repo do", "workspace": "myproject"}'
```
| Field | Type | Description | Default |
|---|---|---|---|
| `prompt` | string | The prompt to send | required |
| `workspace` | string | Subpath under /workspaces (e.g. myproject → /workspaces/myproject) | /workspaces |
| `model` | string | Model to use (same aliases as the CLI) | account default |
| `systemPrompt` | string | Replace the default system prompt | (none) |
| `appendSystemPrompt` | string | Append to the default system prompt | (none) |
| `jsonSchema` | string | JSON Schema for structured output | (none) |
| `effort` | string | Reasoning effort (low, medium, high, max) | (none) |
| `outputFormat` | string | Response format: json or json-verbose (includes tool call history) | json |
| `noContinue` | bool | Start fresh (don't continue the previous conversation) | false |
| `resume` | string | Resume a specific session by ID | (none) |
| `fireAndForget` | bool | Don't kill the process if the client disconnects | false |
Returns application/json. Default format is json (same as --output-format json). Use json-verbose to get a turns array with every tool call and result (see output formats above). Returns 409 if the workspace is already busy.
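Any HTTP client works; a dependency-free sketch that builds the request with urllib (the URL and token are placeholders, and `build_run_request` is a hypothetical helper, not part of the image):

```python
import json
import urllib.request

def build_run_request(prompt, workspace=None, token=None,
                      base="http://localhost:8080", **fields):
    # Extra keyword args map 1:1 onto the /run body fields above
    # (model, outputFormat, noContinue, resume, ...).
    body = {"prompt": prompt, **fields}
    if workspace:
        body["workspace"] = workspace
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(f"{base}/run",
                                  data=json.dumps(body).encode(),
                                  headers=headers, method="POST")

req = build_run_request("what does this repo do", workspace="myproject",
                        token="your-secret-token", outputFormat="json-verbose")
# Send it with: result = json.load(urllib.request.urlopen(req))
print(req.full_url)
# → http://localhost:8080/run
```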
GET /files/{path} — list directory or download file:
```bash
curl "http://localhost:8080/files" -H "Authorization: Bearer token"                       # list root
curl "http://localhost:8080/files/myproject/src" -H "Authorization: Bearer token"         # list subdir
curl "http://localhost:8080/files/myproject/src/main.py" -H "Authorization: Bearer token" # download
```
PUT /files/{path} — upload a file (auto-creates parent dirs):
```bash
curl -X PUT "http://localhost:8080/files/myproject/src/main.py" \
  -H "Authorization: Bearer token" --data-binary @main.py
```
DELETE /files/{path} — delete a file:
```bash
curl -X DELETE "http://localhost:8080/files/myproject/src/old.py" -H "Authorization: Bearer token"
```
GET /health — health check (no auth).
GET /status — which workspaces are busy.
POST /run/cancel?workspace=X — kill a running claude process.
All file paths are relative to /workspaces. Path traversal outside root is blocked.
OpenAI-compatible endpoints
The API also exposes an OpenAI-compatible adapter so tools like https://github.com/BerriAI/litellm, OpenAI SDKs, or anything that speaks chat/completions can connect directly. Unlike a plain model proxy, this runs the full Claude Code agentic CLI behind the scenes — it can read/write files, run commands, and use tools.
GET /openai/v1/models — list available models:
```bash
curl http://localhost:8080/openai/v1/models
# {"object":"list","data":[{"id":"haiku",...},{"id":"sonnet",...},{"id":"opus",...}]}
```
POST /openai/v1/chat/completions — chat completions (streaming and non-streaming):
```bash
# non-streaming
curl -X POST http://localhost:8080/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"haiku","messages":[{"role":"user","content":"hello"}]}'

# streaming
curl -X POST http://localhost:8080/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"haiku","messages":[{"role":"user","content":"hello"}],"stream":true}'
```
Use the same model aliases as the CLI (haiku, sonnet, opus). system role messages become --system-prompt. Pass reasoning_effort (low/medium/high) to control effort — maps to claude's --effort. temperature, max_tokens, tools, and other OpenAI-specific fields are accepted but silently ignored. Provider prefixes are stripped automatically (claude-code/haiku → haiku).
Message handling:

The message array is serialized to a conversation file in the workspace (`_oai_uploads/conv_<id>.json`). Claude Code reads the file and responds to the last user message, preserving the conversation context.

Streaming (`"stream": true`) returns standard SSE events. Content arrives in message-level chunks (not character-by-character deltas) since Claude Code assembles full messages internally.
File workflow tip: for best performance, upload input files via PUT /files/..., tell Claude Code to work with them by path, then download the output files via GET /files/.... Much faster than embedding large content in the prompt.
Custom headers for claude-specific behavior:
| Header | Description |
|---|---|
| `X-Claude-Workspace` | Workspace subpath under /workspaces |
| `X-Claude-Continue` | Set to 1/true/yes to continue the previous session |
| `X-Claude-Append-System-Prompt` | Text to append to the system prompt |
LiteLLM example:
```python
import litellm

response = litellm.completion(
    model="claude-code/haiku",
    messages=[{"role": "user", "content": "hello"}],
    api_base="http://localhost:8080/openai/v1",
    api_key="your-secret-token",  # or any string if no token set
)
print(response.choices[0].message.content)
```
MCP server
The API also exposes an MCP (Model Context Protocol) server at /mcp/ using streamable HTTP transport. Any MCP-compatible client (Claude Desktop, Claude Code, etc.) can connect to it. The claude_run tool runs the full Claude Code agentic CLI — it can read/write files, run commands, and use tools in the workspace, not just generate text.
```json
{
  "mcpServers": {
    "claude-code": {
      "url": "http://localhost:8080/mcp/",
      "headers": { "Authorization": "Bearer your-secret-token" }
    }
  }
}
```
If your MCP client doesn't support custom headers, pass the token as a query param: `http://localhost:8080/mcp/?apiToken=your-secret-token`