nvidia/cuda Docker Image - 轩辕镜像

nvidia/cuda
CUDA (parallel computing platform) and cuDNN (deep neural network acceleration library) images provided by NVIDIA from its GitLab repository (gitlab.com/nvidia/cuda). They give developers a preconfigured environment for efficient parallel computing development, deep learning model training, and inference, ensure environment consistency and straightforward deployment, and are a key resource for building NVIDIA GPU-accelerated applications.

NVIDIA CUDA

CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.

The CUDA Toolkit from NVIDIA provides everything you need to develop GPU-accelerated applications. The CUDA Toolkit includes GPU-accelerated libraries, a compiler, development tools and the CUDA runtime.

The CUDA container images provide an easy-to-use distribution for CUDA supported platforms and architectures.

End User License Agreements

The images are governed by the following NVIDIA End User License Agreements. By pulling and using the CUDA images, you accept the terms and conditions of these licenses. Since the images may include components licensed under open-source licenses such as GPL, the sources for these components are archived here.

NVIDIA Deep Learning Container License

To view the NVIDIA Deep Learning Container license, click here

Documentation

For more information on CUDA, including the release notes, programming model, APIs and developer tools, visit the CUDA documentation site.

Announcement

CUDA Container Support Policy

CUDA image container tags have a lifetime. A tag is deleted six months after the last supported "Tesla Recommended Driver" has gone end-of-life, or after a newer update release has been made for the same CUDA version.

Please see CUDA Container Support Policy for more information.

Breaking changes are announced on GitLab issue #209.

CUDA Repo Signing Key Has Changed!

This may present itself as the following errors.

debian:

Reading package lists... Done
W: GPG error: [***]  InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A4B469963BF863CC
W: The repository '[***]  InRelease' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.

RPM:

warning: /var/cache/dnf/cuda-fedora32-x86_64-d60aafcddb176bf5/packages/libnvjpeg-11-1-11.3.0.105-1.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d42d0685: NOKEY
cuda-fedora32-x86_64                                                                                  23 kB/s | 1.6 kB     00:00
Importing GPG key 0x7FA2AF80:
 Userid     : "cudatools <***>"
 Fingerprint: AE09 FE4B BD22 3A84 B2CC FCE3 F60F 4B3D 7FA2 AF80
 From       : [***]
Is this ok [y/N]: y
Key imported successfully
Import of key(s) didn't help, wrong key(s)?
Public key for libnvjpeg-11-1-11.3.0.105-1.x86_64.rpm is not installed. Failing package is: libnvjpeg-11-1-11.3.0.105-1.x86_64
 GPG Keys are configured as: [***]
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: GPG check FAILED

Updated images will be pushed out over the next few days containing the new repo key. Please follow progress using the links below:

  • [***]
  • [***]
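
In the meantime, an apt-based image derived from these bases can re-import the repository key itself. The line below is a hedged sketch: the key URL is an assumption based on NVIDIA's usual repository layout (ubuntu2004/x86_64 shown) and should be checked against the official announcement for your distribution and architecture.

# Hypothetical workaround for derived apt-based images while rebuilt bases roll out;
# verify the key URL and adjust the distro/arch path before use (gnupg must be present).
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
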
Multi-arch image manifests are now LIVE for all supported CUDA container image versions

It is now possible to build CUDA container images for all supported architectures using Docker BuildKit in one step. See the example below.
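
A minimal sketch, assuming a checkout of the CUDA container image source repository (the build context path matches the Dockerfile locations listed under Supported tags) and a placeholder registry of your own; the upstream Dockerfiles may also require additional build arguments:

$ docker buildx create --name cuda-builder --use
$ docker buildx build \
    --platform linux/amd64,linux/arm64 \
    --tag <your-registry>/cuda:13.1.0-base-ubuntu24.04 \
    --push \
    13.1.0/ubuntu2404/base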

The deprecated image names nvidia/cuda-arm64 and nvidia/cuda-ppc64le will remain available, but are no longer supported.

The following product pages still exist but will no longer be supported:

  • [***]
  • [***]

The following gitlab repositories will be archived:

  • [***]
Deprecated: "latest" tag

The "latest" tag for CUDA, CUDAGL, and OPENGL images has been deprecated on NGC and Docker Hub.

With the removal of the latest tag, the following use case will result in the "manifest unknown" error:

$ docker pull nvidia/cuda
Error response from daemon: manifest for nvidia/cuda:latest not found: manifest unknown: manifest unknown

This is not a bug.
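
Pull an explicit tag from the Supported tags list instead, for example:

$ docker pull nvidia/cuda:13.1.0-base-ubuntu24.04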

Overview of Images

Three flavors of images are provided:

  • base: Includes the CUDA runtime (cudart)
  • runtime: Builds on the base image and adds the CUDA math libraries and NCCL. A runtime variant that also includes cuDNN is available, and some images additionally include TensorRT.
  • devel: Builds on the runtime image and adds the headers and development tools needed to build CUDA applications. These images are particularly useful for multi-stage builds (see the sketch below).
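
A minimal multi-stage sketch, assuming a hypothetical vector_add.cu source file in the build context: compilation happens in the devel image, and only the resulting binary ships on the smaller runtime image.

# Hypothetical multi-stage build: compile in devel, deploy on runtime.
FROM nvidia/cuda:13.1.0-devel-ubuntu24.04 AS build
WORKDIR /src
COPY vector_add.cu .
RUN nvcc -O2 -o vector_add vector_add.cu

FROM nvidia/cuda:13.1.0-runtime-ubuntu24.04
COPY --from=build /src/vector_add /usr/local/bin/vector_add
CMD ["vector_add"]

Run the resulting image with --gpus all so the container can access the GPU.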

The Dockerfiles for the images are open-source and licensed under 3-clause BSD. For more information see the Supported Tags section below.

NVIDIA Container Toolkit

The NVIDIA Container Toolkit for Docker is required to run CUDA images.

For CUDA 10.0, nvidia-docker2 (v2.1.0) or greater is recommended. It is also recommended to use Docker 19.03.
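
Once the toolkit is installed, a quick smoke test (a sketch, assuming a working NVIDIA driver on the host) is to run nvidia-smi from one of the base images:

$ docker run --rm --gpus all nvidia/cuda:13.1.0-base-ubuntu24.04 nvidia-smi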

How to report a problem

Read NVIDIA Container Toolkit Frequently Asked Questions to see if the problem has been encountered before.

After it has been determined the problem is not with the NVIDIA runtime, report an issue at the CUDA Container Image Issue Tracker.

Supported tags

Supported tags are updated to the latest CUDA, cuDNN, and TensorRT versions. These tags are also periodically updated to fix CVE vulnerabilities.

For a full list of supported tags, click here.

LATEST CUDA 13.1.0

Visit OpenSource @ Nvidia for the GPL sources of the packages contained in the CUDA base image layers.

ubuntu24.04 [arm64, x86_64]
  • 13.1.0-runtime-ubuntu24.04 (13.1.0/ubuntu2404/runtime/Dockerfile)
  • 13.1.0-devel-ubuntu24.04 (13.1.0/ubuntu2404/devel/Dockerfile)
  • 13.1.0-base-ubuntu24.04 (13.1.0/ubuntu2404/base/Dockerfile)
ubuntu22.04 [arm64, x86_64]
  • 13.1.0-runtime-ubuntu22.04 (13.1.0/ubuntu2204/runtime/Dockerfile)
  • 13.1.0-devel-ubuntu22.04 (13.1.0/ubuntu2204/devel/Dockerfile)
  • 13.1.0-base-ubuntu22.04 (13.1.0/ubuntu2204/base/Dockerfile)
ubi9 [arm64, x86_64]
  • 13.1.0-runtime-ubi9 (13.1.0/ubi9/runtime/Dockerfile)
  • 13.1.0-devel-ubi9 (13.1.0/ubi9/devel/Dockerfile)
  • 13.1.0-base-ubi9 (13.1.0/ubi9/base/Dockerfile)
ubi8 [arm64, x86_64]
  • 13.1.0-runtime-ubi8 (13.1.0/ubi8/runtime/Dockerfile)
  • 13.1.0-devel-ubi8 (13.1.0/ubi8/devel/Dockerfile)
  • 13.1.0-base-ubi8 (13.1.0/ubi8/base/Dockerfile)
ubi10 [arm64, x86_64]
  • 13.1.0-runtime-ubi10 (13.1.0/ubi10/runtime/Dockerfile)
  • 13.1.0-devel-ubi10 (13.1.0/ubi10/devel/Dockerfile)
  • 13.1.0-base-ubi10 (13.1.0/ubi10/base/Dockerfile)
rockylinux9 [arm64, x86_64]
  • 13.1.0-runtime-rockylinux9 (13.1.0/rockylinux9/runtime/Dockerfile)
  • 13.1.0-devel-rockylinux9 (13.1.0/rockylinux9/devel/Dockerfile)
  • 13.1.0-base-rockylinux9 (13.1.0/rockylinux9/base/Dockerfile)
rockylinux8 [arm64, x86_64]
  • 13.1.0-runtime-rockylinux8 (13.1.0/rockylinux8/runtime/Dockerfile)
  • 13.1.0-devel-rockylinux8 (13.1.0/rockylinux8/devel/Dockerfile)
  • 13.1.0-base-rockylinux8 (13.1.0/rockylinux8/base/Dockerfile)
rockylinux10 [arm64, x86_64]
  • 13.1.0-runtime-rockylinux10 (13.1.0/rockylinux10/runtime/Dockerfile)
  • 13.1.0-devel-rockylinux10 (13.1.0/rockylinux10/devel/Dockerfile)
  • 13.1.0-base-rockylinux10 (13.1.0/rockylinux10/base/Dockerfile)
oraclelinux9 [arm64, x86_64]
  • 13.1.0-runtime-oraclelinux9 (13.1.0/oraclelinux9/runtime/Dockerfile)
  • 13.1.0-devel-oraclelinux9 (13.1.0/oraclelinux9/devel/Dockerfile)
  • 13.1.0-base-oraclelinux9 (13.1.0/oraclelinux9/base/Dockerfile)
oraclelinux8 [arm64, x86_64]
  • 13.1.0-runtime-oraclelinux8 (13.1.0/oraclelinux8/runtime/Dockerfile)
  • 13.1.0-devel-oraclelinux8 (13.1.0/oraclelinux8/devel/Dockerfile)
  • 13.1.0-base-oraclelinux8 (13.1.0/oraclelinux8/base/Dockerfile)
opensuse15 [x86_64]
  • 13.1.0-runtime-opensuse15 (13.1.0/opensuse15/runtime/Dockerfile)
  • 13.1.0-devel-opensuse15 (13.1.0/opensuse15/devel/Dockerfile)
  • 13.1.0-base-opensuse15 (13.1.0/opensuse15/base/Dockerfile)
azl3 [arm64, x86_64]
  • 13.1.0-runtime-azl3 (13.1.0/azl3/runtime/Dockerfile)
  • 13.1.0-devel-azl3 (13.1.0/azl3/devel/Dockerfile)
  • 13.1.0-base-azl3 (13.1.0/azl3/base/Dockerfile)
amzn2023 [arm64, x86_64]
  • 13.1.0-runtime-amzn2023 (13.1.0/amzn2023/runtime/Dockerfile)
  • 13.1.0-devel-amzn2023 (13.1.0/amzn2023/devel/Dockerfile)
  • 13.1.0-base-amzn2023 (13.1.0/amzn2023/base/Dockerfile)
Unsupported tags

A list of tags that are no longer supported can be found here.

Source of this description

This Readme is located in the doc directory of the CUDA Container Image source repository. (history)

Deployment & Usage Documentation

The Full Workflow for Deploying NVIDIA CUDA Images as Docker Containers

A CPU is like a versatile but slow generalist, well suited to logically complex tasks with little data; a GPU is like thousands of small workers, excelling at large volumes of simple, repetitive computations in parallel. CUDA is the bridge between developers and that GPU capability, letting the GPU move beyond graphics duties and serve scientific computing, AI training, data processing, and similar workloads directly.

Read More
See more cuda-related images →
rocker/cuda
by rocker
A Rocker image bundling the NVIDIA CUDA libraries, providing GPU-accelerated computing for R environments; built from the rocker-org/rocker-versioned2 project.
1050K+ pulls
Last updated: 13 days ago
giantswarm/cuda
by giantswarm
No description provided.
50K+ pulls
Last updated: 2 months ago
mesonbuild/cuda
by mesonbuild
No description provided.
10K+ pulls
Last updated: 8 days ago

轩辕镜像 Configuration Manual

Explore more ways to use 轩辕镜像 and find the configuration that best fits your system.

  • Authenticated registry pulls: access the private registry with Docker login
  • Linux: configure the mirror service on Linux systems
  • Windows/Mac: configure the mirror in Docker Desktop
  • Docker Compose: configuration for Docker Compose projects
  • K8s Containerd: configure Containerd for a Kubernetes cluster
  • K3s: image acceleration for lightweight K3s Kubernetes
  • 宝塔 Panel: one-click mirror configuration in the 宝塔 panel
  • Synology: configuration for Synology (群晖) NAS
  • fnOS: configure the mirror on 飞牛 fnOS systems
  • 极空间: configure the service on 极空间 NAS systems
  • iKuai: configuration for the 爱快 iKuai router system
  • UGREEN: configure the mirror on 绿联 (UGREEN) NAS systems
  • QNAP: configuration for QNAP (威联通) NAS
  • Podman: configuration for the Podman container engine
  • Singularity/Apptainer: container configuration for HPC and scientific computing
  • Other registries: ghcr, Quay, nvcr, and other image registries
  • Dedicated-domain pulls: use a dedicated domain without logging in

Need more help? See our FAQ on common Docker image access questions, or submit a ticket.

Frequently Asked Questions about Image Pulls

What is the difference between the free and professional editions of 轩辕镜像?

The free edition only supports access to Docker Hub and makes no commitments on availability or speed; the professional edition supports more upstream registries, guarantees availability and stable speed, and offers priority customer support.

Which image registries does 轩辕镜像 support?

The professional edition supports docker.io, gcr.io, ghcr.io, registry.k8s.io, nvcr.io, quay.io, mcr.microsoft.com, docker.elastic.co, and more; the free edition only supports docker.io.

Traffic-exhausted error

A 402 Payment Required response means your traffic quota is exhausted; top up with a traffic package to restore service.

410 errors

Usually caused by an outdated Docker version; upgrade to 20.x or later so the V2 protocol is supported.
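
To check the client and daemon versions quickly (a sketch using the standard docker CLI):

$ docker version --format 'client: {{.Client.Version}}  server: {{.Server.Version}}'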

manifest unknown errors

First check your Docker version and upgrade if it is too old; if the version is fine, verify that the image name and tag are correct.

After a successful pull, how do I remove the 轩辕镜像 domain prefix from the image name?

Use the docker tag command to re-tag the image without the domain prefix so the name is cleaner.
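
For example, assuming a hypothetical mirror prefix docker.example-mirror.com (substitute your actual dedicated domain):

$ docker tag docker.example-mirror.com/nvidia/cuda:13.1.0-base-ubuntu24.04 nvidia/cuda:13.1.0-base-ubuntu24.04
$ docker rmi docker.example-mirror.com/nvidia/cuda:13.1.0-base-ubuntu24.04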

View all questions →

User Reviews

Feedback from real users, a testament to 轩辕镜像's quality of service.

oldzhang

Operations engineer

Linux servers

Rating: 5

"Docker access is very smooth, and even large images download quickly."
