
# kylefoxaustin/openwebui

Docker images combining OpenWebUI with Ollama for a seamless AI development experience.
## Available Tags

- `kylefoxaustin/openwebui-ollama:latest` or `kylefoxaustin/openwebui-ollama:latest-cpu` - CPU version
- `kylefoxaustin/openwebui-ollama:latest-gpu` - GPU version with NVIDIA CUDA support

## Quick Start

### CPU Version

```bash
docker run -d \
  --name openwebui \
  -p 8080:8080 \
  -p ***:*** \
  -v ollama-data:/root/.ollama \
  -v openwebui-data:/app/backend/data \
  kylefoxaustin/openwebui-ollama:latest
```
### GPU Version

```bash
docker run -d \
  --name openwebui-gpu \
  --gpus all \
  -p 8080:8080 \
  -p ***:*** \
  -v ollama-data:/root/.ollama \
  -v openwebui-data:/app/backend/data \
  kylefoxaustin/openwebui-ollama:latest-gpu
```
Access the web interface at: http://localhost:8080
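Before opening the browser, it can help to confirm the container started cleanly. A minimal check, assuming the container name `openwebui` and the default port mapping from the commands above:

```bash
# Confirm the container is up
docker ps --filter name=openwebui

# The web UI should answer with HTTP 200 once startup finishes
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
```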
If you already have Ollama running on your host machine, you'll need to map the container's Ollama port to a different host port:
```bash
docker run -d \
  --name openwebui \
  -p 8080:8080 \
  -p ***:*** \
  -v ollama-data:/root/.ollama \
  -v openwebui-data:/app/backend/data \
  kylefoxaustin/openwebui-ollama:latest
```
To run both CPU and GPU containers at the same time, use different port mappings:
```bash
# CPU container
docker run -d \
  --name openwebui-cpu \
  -p 8080:8080 \
  -p ***:*** \
  -v ollama-cpu-data:/root/.ollama \
  -v openwebui-cpu-data:/app/backend/data \
  kylefoxaustin/openwebui-ollama:latest-cpu

# GPU container
docker run -d \
  --name openwebui-gpu \
  --gpus all \
  -p 8081:8080 \
  -p ***:*** \
  -v ollama-gpu-data:/root/.ollama \
  -v openwebui-gpu-data:/app/backend/data \
  kylefoxaustin/openwebui-ollama:latest-gpu
```
Access the interfaces at:

- CPU container: http://localhost:8080
- GPU container: http://localhost:8081
To use OpenWebUI with an external Ollama instance (e.g., running on another server or container):
```bash
docker run -d \
  --name openwebui-only \
  -p 8080:8080 \
  -e OLLAMA_BASE_URL=http://<ollama-host>:*** \
  -v openwebui-data:/app/backend/data \
  kylefoxaustin/openwebui-ollama:latest
```
Replace <ollama-host> with the hostname or IP address of your Ollama server.
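Before pointing OpenWebUI at an external instance, it is worth confirming the Ollama API is reachable from your machine. A sketch; `OLLAMA_HOST_ADDR` and `OLLAMA_PORT` are placeholders for your actual host and port:

```bash
OLLAMA_HOST_ADDR=192.168.1.50   # placeholder: your Ollama server
OLLAMA_PORT=11434               # placeholder: your Ollama port

# A healthy Ollama instance answers /api/tags with a JSON list of models
curl -s "http://${OLLAMA_HOST_ADDR}:${OLLAMA_PORT}/api/tags"
```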
| Variable | Description | Default |
|---|---|---|
| `OLLAMA_HOST` | Host for Ollama to listen on | `0.0.0.0` |
| `PORT` | Port for OpenWebUI to listen on | `8080` |
| `HOST` | Host for OpenWebUI to listen on | `0.0.0.0` |
| `OLLAMA_BASE_URL` | URL for OpenWebUI to connect to Ollama | `http://localhost:***` |
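Any of these can be overridden with `-e` flags at run time. A sketch, assuming you want OpenWebUI on port 3000 instead of the default:

```bash
docker run -d \
  --name openwebui \
  -p 3000:3000 \
  -e PORT=3000 \
  -e HOST=0.0.0.0 \
  -v ollama-data:/root/.ollama \
  -v openwebui-data:/app/backend/data \
  kylefoxaustin/openwebui-ollama:latest
```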
The following volumes are used for data persistence:
- `/root/.ollama`: Ollama models and configuration
- `/app/backend/data`: OpenWebUI data (conversations, settings, etc.)

## Troubleshooting

**Port conflicts**: If you see "address already in use" errors, another service is likely using the same port. Use alternative ports as shown in the usage scenarios above.
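Because models and settings live in named volumes, they survive container removal and recreation. Backing them up is a matter of archiving the volume contents; a sketch using a throwaway container:

```bash
# Archive the Ollama model volume into the current directory
docker run --rm \
  -v ollama-data:/root/.ollama:ro \
  -v "$(pwd)":/backup \
  busybox tar czf /backup/ollama-data.tar.gz -C /root/.ollama .
```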
**GPU not detected**: Ensure your NVIDIA drivers are properly installed and the NVIDIA Container Toolkit is set up correctly. Test with:
```bash
docker run --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```
**Container crashes**: Check logs with:
```bash
docker logs openwebui
```
**Models not loading**: Pulling a model for the first time may take a while. Check the Ollama logs:
```bash
docker exec -it openwebui cat /var/log/supervisor/ollama.err.log
```
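If no models are present yet, one can be pulled through the bundled Ollama. This assumes the `ollama` CLI is available inside the container, and `llama2` is used purely as an example model name:

```bash
# Pull an example model through the container's Ollama instance
docker exec -it openwebui ollama pull llama2
```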
**Web UI not accessible**: Make sure the internal Ollama instance is running properly:
```bash
docker exec -it openwebui curl -s http://localhost:***/api/tags
```
For more complex setups, you can use Docker Compose. Here's an example configuration:
```yaml
version: '3.8'

services:
  openwebui:
    image: kylefoxaustin/openwebui-ollama:latest-gpu
    container_name: openwebui
    restart: unless-stopped
    ports:
      - "8080:8080"
      - "***:***"
    volumes:
      - ollama-data:/root/.ollama
      - openwebui-data:/app/backend/data
    environment:
      - OLLAMA_HOST=0.0.0.0
      - PORT=8080
      - HOST=0.0.0.0
      - OLLAMA_BASE_URL=http://localhost:***
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  ollama-data:
  openwebui-data:
```
Save this to `docker-compose.yml` and run with:
```bash
docker-compose up -d
```
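Day-to-day management then goes through Compose as well; for instance:

```bash
docker-compose logs -f openwebui               # follow the combined service logs
docker-compose pull && docker-compose up -d    # update to the latest image
docker-compose down                            # stop and remove the container (named volumes persist)
```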
To control CPU and memory usage:
```bash
docker run -d \
  --name openwebui \
  --cpus 4 \
  --memory 8G \
  -p 8080:8080 \
  -p ***:*** \
  -v ollama-data:/root/.ollama \
  -v openwebui-data:/app/backend/data \
  kylefoxaustin/openwebui-ollama:latest
```
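The limits can be checked, and adjusted, on a running container without recreating it. A sketch (the 12G figure is just an example):

```bash
# Live CPU/memory usage against the configured limits
docker stats --no-stream openwebui

# Raise the memory ceiling in place
docker update --memory 12G --memory-swap 12G openwebui
```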
If you want to build the images yourself:
```bash
# CPU image
docker build -f Dockerfile.cpu -t openwebui-ollama:cpu .

# GPU image
docker build -f Dockerfile.gpu -t openwebui-ollama:gpu .
```
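A locally built image runs with the same commands as the published one, substituting the local tag:

```bash
docker run -d \
  --name openwebui-local \
  -p 8080:8080 \
  -v ollama-data:/root/.ollama \
  -v openwebui-data:/app/backend/data \
  openwebui-ollama:cpu
```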
These Docker images combine OpenWebUI and Ollama, each distributed under its own license. See the original projects for details.