This repository provides a minimal CPU-only Ollama Docker image, specifically designed to run on systems without GPU support. At just 70MB, this image is significantly smaller than the official Ollama image, which is around 4GB.
```
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
ollama       latest    b99944c07117   3 hours ago   69.3MB
```
- GitHub repository: https://github.com/alpine-docker/ollama
- CI builds: https://github.com/alpine-docker/ollama/actions
- Docker Hub tags: https://hub.docker.com/r/alpine/ollama/tags/
- **Lightweight:** The official Ollama image is over 4GB, which is overkill for systems that only need CPU-based processing. This image is only about 70MB, making it much faster to download and deploy.
- **CPU-only support:** This image is tailored for systems without GPUs, so you can run Ollama efficiently even in basic or resource-constrained environments, without specialized hardware.
- **Run anywhere:** Whether you're working on local servers, edge devices, or cloud environments that don't offer GPU resources, this image lets you run Ollama anywhere, focusing purely on CPU-based operations.
```bash
docker pull alpine/ollama
```
```bash
docker rm -f ollama
docker run -d -p 11434:11434 -v ~/.ollama:/root/.ollama --name ollama alpine/ollama
```
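As a quick sanity check (this step is my addition, not from the original instructions), the Ollama server responds on its root endpoint once the container is running:

```bash
# Should print "Ollama is running"
curl http://localhost:11434/
```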
Pull the model llama3.2. You only need to run this once; the model is saved locally, so you can re-use it later.

```bash
docker exec -ti ollama ollama pull llama3.2
```
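To confirm the model was saved, you can list the locally available models with the standard `ollama` CLI inside the container:

```bash
# List models stored in the mounted ~/.ollama volume
docker exec -ti ollama ollama list
```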
If you don't want to download the model yourself, you can choose to use the alpine/llama3.2 image directly. I created it with the "llama3.2" model already integrated:
```bash
docker run -d -p 11434:11434 --name llama3.2 alpine/llama3.2
```
```bash
$ curl http://localhost:11434/api/generate -d '{ "model": "llama3.2", "prompt":"Why is the sky blue?" }'
{"model":"llama3.2","created_at":"2024-10-16T00:25:58.59931201Z","response":"The","done":false}
{"model":"llama3.2","created_at":"2024-10-16T00:25:58.695826838Z","response":" sky","done":false}
{"model":"llama3.2","created_at":"2024-10-16T00:25:58.780917761Z","response":" appears","done":false}
{"model":"llama3.2","created_at":"2024-10-16T00:25:58.992556209Z","response":" blue","done":false}
{"model":"llama3.2","created_at":"2024-10-16T00:25:59.085970606Z","response":" because","done":false}
{"model":"llama3.2","created_at":"2024-10-16T00:25:59.30869749Z","response":" of","done":false}
...
```
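The API streams tokens by default. If you prefer a single JSON response instead, the Ollama API accepts a `stream` flag; a minimal sketch:

```bash
# Ask for one complete JSON response instead of a token stream
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```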
If you monitor CPU usage, for example with htop, you will see high CPU utilization while the model generates a response.
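To watch the container itself rather than the whole host, `docker stats` gives a live per-container view:

```bash
# Live CPU and memory usage for the ollama container
docker stats ollama
```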
You can deploy an Ollama web UI to chat with the model directly. There are many tools available, but I won't recommend any specific one.
This image can be deployed to any environment. For example, in a Kubernetes cluster you can use it to analyze logs or streamline log processing with local LLMs; see the sketch below.
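As a minimal sketch of such a deployment (the deployment name `ollama-cpu` is my own placeholder, not from the original), you could run the image with plain kubectl commands:

```bash
# Run the CPU-only image as a deployment (hypothetical name: ollama-cpu)
kubectl create deployment ollama-cpu --image=alpine/ollama --port=11434

# Expose the Ollama API inside the cluster on port 11434
kubectl expose deployment ollama-cpu --port=11434 --target-port=11434
```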
Announcement: Minimal CPU-only Ollama Docker Image - [***]