Image Overview
This image is built on https://hub.docker.com/_/python/ and integrates the CUDA and cuDNN components provided by NVIDIA, offering a convenient runtime environment for Python applications that require GPU acceleration.
Core Features
- Based on the official Python image, ensuring a stable and reliable Python environment
- Integrates the NVIDIA CUDA Toolkit for GPU-accelerated computing
- Optionally includes the cuDNN library to improve deep-learning performance
- Provides combinations of multiple Python versions (3.6, 3.7, 3.8) and CUDA versions (9.0, 10.0, 10.1, 10.2)
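The Python/CUDA combinations above map to image tags. Assuming the tag format `pure/python:<python>-cuda<cuda>-<variant>` inferred from the run commands later in this README (verify against the registry before relying on it), the full matrix can be enumerated:

```python
# Enumerate the assumed tag matrix for the pure/python images.
# The tag format is an assumption inferred from the usage commands
# in this README, not an authoritative list from the registry.
python_versions = ["3.6", "3.7", "3.8"]
cuda_versions = ["9.0", "10.0", "10.1", "10.2"]
variants = ["base", "runtime", "cudnn7-runtime"]

tags = [
    f"pure/python:{py}-cuda{cuda}-{variant}"
    for py in python_versions
    for cuda in cuda_versions
    for variant in variants
]

for tag in tags:
    print(tag)
```

With three Python versions, four CUDA versions, and three variants, this yields 36 candidate tags; not every combination is guaranteed to be published.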
NVIDIA License Agreements
By downloading and using this image, you agree to the license agreements of the NVIDIA software it contains.
CUDA Toolkit License Agreement
To view the CUDA Toolkit license agreement included in this image, click here.
cuDNN License Agreement
To view the cuDNN license agreement included in this image, click here.
Runtime Requirements
Running this image requires https://github.com/NVIDIA/nvidia-docker (nvidia-docker) to be installed.
Supported Tags
Python 3.8
CUDA 10.2
- https://github.com/cicdteam/python-cuda/blob/master/10.2/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.2/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.2/runtime/cudnn7/Dockerfile
CUDA 10.1
- https://github.com/cicdteam/python-cuda/blob/master/10.1/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.1/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.1/runtime/cudnn7/Dockerfile
CUDA 10.0
- https://github.com/cicdteam/python-cuda/blob/master/10.0/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.0/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.0/runtime/cudnn7/Dockerfile
CUDA 9.0
- https://github.com/cicdteam/python-cuda/blob/master/9.0/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/9.0/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/9.0/runtime/cudnn7/Dockerfile
Python 3.7
CUDA 10.2
- https://github.com/cicdteam/python-cuda/blob/master/10.2/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.2/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.2/runtime/cudnn7/Dockerfile
CUDA 10.1
- https://github.com/cicdteam/python-cuda/blob/master/10.1/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.1/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.1/runtime/cudnn7/Dockerfile
CUDA 10.0
- https://github.com/cicdteam/python-cuda/blob/master/10.0/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.0/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.0/runtime/cudnn7/Dockerfile
CUDA 9.0
- https://github.com/cicdteam/python-cuda/blob/master/9.0/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/9.0/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/9.0/runtime/cudnn7/Dockerfile
Python 3.6
CUDA 10.2
- https://github.com/cicdteam/python-cuda/blob/master/10.2/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.2/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.2/runtime/cudnn7/Dockerfile
CUDA 10.1
- https://github.com/cicdteam/python-cuda/blob/master/10.1/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.1/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.1/runtime/cudnn7/Dockerfile
CUDA 10.0
- https://github.com/cicdteam/python-cuda/blob/master/10.0/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.0/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/10.0/runtime/cudnn7/Dockerfile
CUDA 9.0
- https://github.com/cicdteam/python-cuda/blob/master/9.0/base/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/9.0/runtime/Dockerfile
- https://github.com/cicdteam/python-cuda/blob/master/9.0/runtime/cudnn7/Dockerfile
Usage
Prerequisites
Ensure that https://github.com/NVIDIA/nvidia-docker is installed.
Basic Run Command
Start the runtime image with Python 3.8 and CUDA 10.2 using the following command:
```bash
docker run --gpus all pure/python:3.8-cuda10.2-runtime python --version
```
Verifying CUDA Availability
Run the following Python one-liner inside the container to verify that CUDA is working (this particular check requires PyTorch to be installed in the image):

```bash
docker run --gpus all pure/python:3.8-cuda10.2-cudnn7-runtime python -c "import torch; print(torch.cuda.is_available())"
```

If the output is True, the CUDA environment is configured correctly.
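For a more detailed report than a bare True/False, the sketch below (run inside the container) lists the visible CUDA devices. It assumes PyTorch is available, as the verification command above does, and degrades gracefully when it is not:

```python
# Detailed CUDA check, intended to be run inside the container.
# Assumes PyTorch is installed in the image; falls back to a
# descriptive message when it is not, so the script never crashes.
def cuda_report():
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return "CUDA not available"
    lines = [f"CUDA devices: {torch.cuda.device_count()}"]
    for i in range(torch.cuda.device_count()):
        lines.append(f"  {i}: {torch.cuda.get_device_name(i)}")
    return "\n".join(lines)

print(cuda_report())
```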
Use Cases
- Training machine-learning and deep-learning models
- GPU-accelerated scientific computing
- Deploying Python applications that require CUDA support
- Testing across multiple Python and CUDA version combinations