
# corundex/comfyui-rocm

🔥 ComfyUI with AMD ROCm support: run ComfyUI on AMD GPUs with optimized ROCm-compatible dependencies.
```bash
docker run -d \
  --device=/dev/kfd --device=/dev/dri --group-add=video \
  -p 8188:8188 \
  -v $(pwd)/models:/workspace/ComfyUI/models \
  -v $(pwd)/output:/workspace/ComfyUI/output \
  corundex/comfyui-rocm:latest
```
Access ComfyUI at: http://localhost:8188
```bash
# Ubuntu/Debian
curl -fsSL [***] | sudo gpg --dearmor -o /etc/apt/keyrings/rocm.gpg
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] [***] jammy main" | sudo tee /etc/apt/sources.list.d/rocm.list
sudo apt update && sudo apt install rocm-dkms
sudo usermod -a -G render,video $USER
```
```bash
rocm-smi  # Should show your AMD GPU(s)
```
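If `rocm-smi` shows no devices for a non-root user, missing membership in the `render` and `video` groups (the `usermod` step above) is a common culprit. A small convenience sketch for checking this — `check_groups` is a hypothetical helper, not part of the image:

```shell
# Sketch: check that a user's group list includes the groups needed
# for non-root GPU access (see the usermod step above).
check_groups() {
    groups_list="$1"   # space-separated group names, e.g. from: id -nG
    for g in render video; do
        if echo "$groups_list" | tr ' ' '\n' | grep -qx "$g"; then
            echo "$g: ok"
        else
            echo "$g: missing (log out and back in after usermod)"
        fi
    done
}

# Usage: check_groups "$(id -nG)"
```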
```bash
docker run -d \
  --name comfyui-rocm \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add=video \
  -p 8188:8188 \
  -v ./models:/workspace/ComfyUI/models \
  -v ./output:/workspace/ComfyUI/output \
  -v ./input:/workspace/ComfyUI/input \
  -v ./custom_nodes:/workspace/ComfyUI/custom_nodes \
  corundex/comfyui-rocm:latest
```
| Host Path | Container Path | Purpose |
|---|---|---|
| `./models` | `/workspace/ComfyUI/models` | Model storage (checkpoints, VAE, LoRA, etc.) |
| `./output` | `/workspace/ComfyUI/output` | Generated images and videos |
| `./input` | `/workspace/ComfyUI/input` | Input images for processing |
| `./custom_nodes` | `/workspace/ComfyUI/custom_nodes` | Custom ComfyUI extensions |
The ComfyUI Docker image includes an intelligent model management system that automatically downloads models based on your needs.
Control model downloading with the MODEL_DOWNLOAD environment variable:
- `MODEL_DOWNLOAD=default` (the default): downloads an essential model to get started
- `MODEL_DOWNLOAD=common`: downloads a comprehensive starter set
- `MODEL_DOWNLOAD=realistic`: downloads realistic photo models
- `MODEL_DOWNLOAD=photorealistic`: downloads SDXL-based photorealistic models
- `MODEL_DOWNLOAD=artistic`: downloads creative/stylized models
- `MODEL_DOWNLOAD=all`: downloads everything from all model sets above (⚠️ large download, ~100 GB+)
- `MODEL_DOWNLOAD=none`: skips all downloads and uses only existing models in volumes
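The mapping above can be sketched as a simple dispatch on the variable's value. This is a hypothetical illustration of the behavior described (`resolve_model_sets` is not the image's actual entrypoint code):

```shell
# Sketch of how an entrypoint might map MODEL_DOWNLOAD to model sets.
# Assumed behavior based on the documented values; the real script may differ.
resolve_model_sets() {
    case "${1:-default}" in
        default)        echo "default" ;;
        common)         echo "common" ;;
        realistic)      echo "realistic" ;;
        photorealistic) echo "photorealistic" ;;
        artistic)       echo "artistic" ;;
        all)            echo "default common realistic photorealistic artistic" ;;
        none)           echo "" ;;   # skip all downloads
        *)              echo "$1" ;; # custom section name from models.yaml
    esac
}
```

An unset or empty `MODEL_DOWNLOAD` falls through to `default`, and any unrecognized value is treated as a custom section name.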
Add your own sections to models.yaml and use them:
```bash
MODEL_DOWNLOAD=mycustom
```
```bash
docker run -d \
  --device=/dev/kfd --device=/dev/dri --group-add=video \
  -p 8188:8188 \
  -v ./models:/workspace/ComfyUI/models \
  corundex/comfyui-rocm:latest
```
```bash
docker run -d \
  --device=/dev/kfd --device=/dev/dri --group-add=video \
  -p 8188:8188 \
  -e MODEL_DOWNLOAD=common \
  -v ./models:/workspace/ComfyUI/models \
  corundex/comfyui-rocm:latest
```
```bash
docker run -d \
  --device=/dev/kfd --device=/dev/dri --group-add=video \
  -p 8188:8188 \
  -e MODEL_DOWNLOAD=all \
  -v ./models:/workspace/ComfyUI/models \
  corundex/comfyui-rocm:latest
```
```bash
docker run -d \
  --device=/dev/kfd --device=/dev/dri --group-add=video \
  -p 8188:8188 \
  -e MODEL_DOWNLOAD=none \
  -v ./models:/workspace/ComfyUI/models \
  corundex/comfyui-rocm:latest
```
On startup, see what's available:
```
[ComfyUI] Current model inventory:
  checkpoints: 3 models
  vae: 1 models
  loras: 2 models
  upscale_models: 1 models
  controlnet: 3 models
  embeddings: 1 models
```
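An inventory like the one above amounts to counting files per model subdirectory. A minimal sketch of that idea (`print_inventory` is a hypothetical helper, not the image's actual code):

```shell
# Sketch: count model files in each subdirectory of a models root,
# mirroring the "[ComfyUI] Current model inventory" log format above.
print_inventory() {
    root="$1"
    echo "[ComfyUI] Current model inventory:"
    for dir in "$root"/*/; do
        [ -d "$dir" ] || continue
        name=$(basename "$dir")
        count=$(find "$dir" -maxdepth 1 -type f | wc -l | tr -d ' ')
        echo "  $name: $count models"
    done
}

# Usage: print_inventory ./models
```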
Add your own model sections to `models.yaml` in YAML format:
```yaml
models:
  mycustom:
    - name: "My Custom Model"
      url: "[***]"
      path: "checkpoints/my_model.safetensors"
      min_size: 2000000000
    - name: "Another Model"
      url: "[***]"
      path: "checkpoints/model2.safetensors"
      min_size: 4000000000
```
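The `min_size` field suggests downloads are rejected if the file is smaller than expected (e.g. a truncated transfer). A hedged sketch of such a size check — `check_min_size` is assumed behavior, not the image's actual code:

```shell
# Sketch: verify a downloaded model file is at least min_size bytes,
# as the min_size field in models.yaml implies. Assumed behavior only.
check_min_size() {
    file="$1"; min="$2"
    [ -f "$file" ] || return 1
    actual=$(wc -c < "$file" | tr -d ' ')
    [ "$actual" -ge "$min" ]
}

# Usage: check_min_size models/checkpoints/my_model.safetensors 2000000000
```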
- Place downloaded models in `/workspace/ComfyUI/models/`
- Open ComfyUI at http://localhost:8188
- Load `sample_workflow.json` (included in this directory)

Create `docker-compose.yml`:
```yaml
version: '3.8'
services:
  comfyui-rocm:
    image: corundex/comfyui-rocm:latest
    container_name: comfyui-rocm
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    group_add:
      - video
    ports:
      - "8188:8188"
    volumes:
      # Model storage (checkpoints, VAE, LoRA, etc.)
      - ./data/models:/workspace/ComfyUI/models
      # Generated images and outputs
      - ./data/output:/workspace/ComfyUI/output
      # Input images for processing
      - ./data/input:/workspace/ComfyUI/input
      # Custom nodes and extensions
      - ./data/custom_nodes:/workspace/ComfyUI/custom_nodes
      # ComfyUI user settings and workflows
      - ./data/user:/workspace/ComfyUI/user
    environment:
      # Model download behavior: default, common, realistic, photorealistic, artistic, all, none
      - MODEL_DOWNLOAD=default
      # ROCm environment
      - HIP_VISIBLE_DEVICES=0
      - CUDA_VISIBLE_DEVICES=""
    restart: unless-stopped
```
Run with: `docker compose up -d`
```
./data/
├── models/              # AI models (checkpoints, VAE, LoRA, etc.)
│   ├── checkpoints/     # Main AI models (.safetensors)
│   ├── vae/             # VAE models for better image quality
│   ├── loras/           # LoRA fine-tuning models
│   ├── embeddings/      # Text embeddings
│   ├── controlnet/      # ControlNet guidance models
│   └── upscale_models/  # Image upscaler models
├── output/              # Generated images and videos
├── input/               # Input images for processing
├── custom_nodes/        # ComfyUI extensions
└── user/                # User settings and saved workflows
```
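Docker creates missing bind-mount sources as root-owned directories, so it can help to pre-create the layout above as your own user first. A convenience sketch (not part of the image):

```shell
# Pre-create the ./data layout shown above so the bind-mount sources
# are owned by the current user rather than created root-owned by Docker.
mkdir -p ./data/models/checkpoints ./data/models/vae ./data/models/loras \
         ./data/models/embeddings ./data/models/controlnet \
         ./data/models/upscale_models \
         ./data/output ./data/input ./data/custom_nodes ./data/user
```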
Tip: start with `MODEL_DOWNLOAD=default` to test with a basic model first, then switch to `MODEL_DOWNLOAD=all` to download the full set when ready.

```bash
# Check ROCm drivers
rocm-smi

# Check Docker GPU access
docker run --rm --device=/dev/kfd --device=/dev/dri rocm/pytorch:latest rocm-smi
```
```bash
# Verify ROCm in container
docker exec comfyui-rocm python -c "import torch; print(f'ROCm: {torch.cuda.is_available()}')"
```
This Docker image packages ComfyUI with ROCm support. ComfyUI is licensed under GPL-3.0.
Found an issue or want to contribute? Visit our GitHub repository.
This project is licensed under GPL-3.0.
