
xabenet/ollama
Official Ollama image rebuilt with a complete CI/CD pipeline that publishes every Ollama image from version v0.7.0 onward with Compute Capability 3.7 support, allowing older GPUs (such as the Tesla K80) to be used with Ollama.
Ollama makes it easy to get up and running with large language models locally.
CPU only
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
Nvidia GPU
Install the NVIDIA Container Toolkit.
Install with Apt
Configure the repository
curl -fsSL [] \
  | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L [] \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
Install the NVIDIA Container Toolkit packages
sudo apt-get install -y nvidia-container-toolkit
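The sed step in the repository configuration above injects a signed-by option into each deb line of the fetched list, so apt verifies the NVIDIA packages against the keyring installed in the first step. A minimal runnable sketch of that transformation, using an illustrative input line (the real list is fetched from NVIDIA's servers):

```shell
# Illustrative input in the style of the fetched .list file (assumption:
# the real file contains "deb https://..." entries).
line='deb https://nvidia.github.io/libnvidia-container/stable/deb/$(ARCH) /'

# Same substitution as the install pipeline: prefix each https source
# with a reference to the dearmored keyring.
echo "$line" \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g'
```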
Install with Yum or Dnf
Configure the repository
curl -s -L [***] \
  | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
Install the NVIDIA Container Toolkit packages
sudo yum install -y nvidia-container-toolkit
Configure Docker to use Nvidia driver
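For reference, the nvidia-ctk command below works by registering an NVIDIA runtime in /etc/docker/daemon.json; on a default install the resulting fragment looks roughly like this (exact contents may vary by toolkit version):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "args": []
    }
  }
}
```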
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
Start the container
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
AMD GPU
To run Ollama using Docker with AMD GPUs, use the rocm tag and the following command:
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
Run model locally
Now you can run a model:
docker exec -it ollama ollama run llama3
Try different models
More models can be found on the Ollama library.
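Besides the interactive CLI, the running container also exposes Ollama's HTTP API on the mapped port 11434 (Ollama's default). A hedged sketch of calling it directly, assuming the container from the earlier steps is up and the model has already been pulled:

```shell
# Non-streaming generate request against the local container
# (model name "llama3" is the one pulled in the step above).
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

With "stream": false the API returns a single JSON object instead of a stream of partial responses, which is easier to pipe into other tools.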




