
# syaffers/llama.cpp

This Docker image was built from the Dockerfile in the official llama.cpp repo on GitHub.
The following change was made to `.devops/cpu.Dockerfile` to allow the image to be built properly for the Mac M3:

```diff
diff --git a/.devops/cpu.Dockerfile b/.devops/cpu.Dockerfile
index 6e16ecda4..8d5e3bf01
--- a/.devops/cpu.Dockerfile
+++ b/.devops/cpu.Dockerfile
@@ -12,7 +12,7 @@ WORKDIR /app
 COPY . .
 RUN if [ "$TARGETARCH" = "amd64" ] || [ "$TARGETARCH" = "arm64" ]; then \
-        cmake -S . -B build -DCMAKE_BUILD_TYPE=Release -DGGML_NATIVE=OFF -DLLAMA_BUILD_TESTS=OFF -DGGML_BACKEND_DL=ON -DGGML_CPU_ALL_VARIANTS=ON; \
+        cmake -S . -B build -DCMAKE_BUILD_TYPE=Release -DGGML_NATIVE=ON -DLLAMA_BUILD_TESTS=OFF; \
     else \
         echo "Unsupported architecture"; \
         exit 1; \
```
To build the image, clone the repo:

```bash
git clone [***]
```
Check out the desired tag:

```bash
git checkout b7129
```
Apply the diff above and run the build:

```bash
docker build --target server -t <repo name>:server-b7129-apple --build-arg TARGETARCH=arm64 -f .devops/cpu.Dockerfile .
```
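Once built, the image can be started like any llama.cpp server container. A minimal sketch, assuming a GGUF model file on the host and the server's default port of 8080 (`<repo name>` and the model filename are placeholders):

```shell
# Run the server image, mounting a host directory containing GGUF models.
# The model filename below is a placeholder — substitute your own.
docker run --rm -p 8080:8080 \
  -v "$PWD/models:/models" \
  <repo name>:server-b7129-apple \
  -m /models/your-model.gguf --host 0.0.0.0 --port 8080

# From another terminal, check that the server is up:
curl http://localhost:8080/health
```

Arguments after the image name are passed through to the llama.cpp server binary, so any of its usual flags (context size, thread count, etc.) can be appended the same way.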