# totycro/s3fs

This Docker image facilitates mounting remote S3 bucket resources into containers. Mounting is performed through the FUSE-based s3fs implementation. The image essentially implements a Docker volume on the cheap: used with the proper creation options (see below), you should be able to bind-mount the remote bucket back onto a host directory. This directory makes the content of the bucket available not only to processes, but also to all other containers on the host. The image automatically unmounts the remote bucket on container termination.
The image tags follow the versions of the s3fs implementation. New versions of s3fs are automatically picked up when rebuilding; s3fs is compiled from the tagged git versions of the main repository.
Provided a directory called /mnt/tmp exists on the host, the following command mounts a remote S3 bucket and bind-mounts the remote resource onto the host's /mnt/tmp in a way that makes the remote files accessible to processes and/or other containers running on the same host.
```shell
docker run -it --rm \
    --device /dev/fuse \
    --cap-add SYS_ADMIN \
    --security-opt "apparmor=unconfined" \
    --env "AWS_S3_BUCKET=<bucketName>" \
    --env "AWS_S3_ACCESS_KEY_ID=<accessKey>" \
    --env "AWS_S3_SECRET_ACCESS_KEY=<secretKey>" \
    --env UID=$(id -u) \
    --env GID=$(id -g) \
    -v /mnt/tmp:/opt/s3fs/bucket:rshared \
    efrecon/s3fs
```
The --device, --cap-add and --security-opt options and their values ensure that the container is able to expose the S3 bucket using FUSE. rshared is what ensures that bind-mounting makes the files and directories available back to the host and, recursively, to other containers.
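As a quick check, and assuming a container started from the command above is still running, the propagated mount should be visible both from the host and from any other container that bind-mounts the same host directory. The paths below match the example command; adjust them to your setup.

```shell
# On the host: the bucket content should appear under /mnt/tmp
ls -l /mnt/tmp

# From another container: bind-mount the same host directory read-only
docker run -it --rm -v /mnt/tmp:/data:ro alpine ls -l /data
```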
A series of environment variables, most prefixed with AWS_S3_, can be used to parametrise the container:
- `AWS_S3_BUCKET` is the name of the bucket; this is mandatory.
- `AWS_S3_AUTHFILE` is the path to an authorisation file compatible with the format specified by s3fs. This can be empty, in which case data will be taken from the other authorisation-related environment variables.
- `AWS_S3_ACCESS_KEY_ID` is the access key to the S3 bucket; this is only used whenever `AWS_S3_AUTHFILE` is empty.
- `AWS_S3_SECRET_ACCESS_KEY` is the secret access key to the S3 bucket; this is only used whenever `AWS_S3_AUTHFILE` is empty. Note however that the variable `AWS_S3_SECRET_ACCESS_KEY_FILE` has precedence over this one.
- `AWS_S3_SECRET_ACCESS_KEY_FILE` points instead to a file that contains the secret access key to the S3 bucket. When this is present, the password is taken from the file instead of from the `AWS_S3_SECRET_ACCESS_KEY` variable; if that variable exists, it is disregarded. This makes it easy to pass passwords using Docker secrets (see the sketch after this list). This is only ever used whenever `AWS_S3_AUTHFILE` is empty.
- `AWS_S3_URL` is the URL of the Amazon service. This can be used to mount external services that implement a compatible API.
- `AWS_S3_MOUNT` is the location within the container where to mount the S3 bucket. This defaults to /opt/s3fs/bucket and is not really meant to be changed.
- `UID` is the user ID for the owner of the share inside the container.
- `GID` is the group ID for the owner of the share inside the container.
- `S3FS_DEBUG` can be set to 1 to get some debugging information from s3fs.
- `S3FS_ARGS` can contain additional options to be passed to s3fs.
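As a sketch of the secret-file mechanism, the following passes the secret access key through a file rather than through the environment, and points the container at a non-AWS, S3-compatible endpoint. The secret path /run/secrets/s3_secret, the bind-mounted host file and the endpoint URL are placeholders for this example; in a Swarm deployment, the secret file would instead be provided under /run/secrets by Docker itself.

```shell
docker run -it --rm \
    --device /dev/fuse \
    --cap-add SYS_ADMIN \
    --security-opt "apparmor=unconfined" \
    --env "AWS_S3_BUCKET=<bucketName>" \
    --env "AWS_S3_ACCESS_KEY_ID=<accessKey>" \
    --env "AWS_S3_SECRET_ACCESS_KEY_FILE=/run/secrets/s3_secret" \
    --env "AWS_S3_URL=https://s3.example.com" \
    -v /path/to/secret:/run/secrets/s3_secret:ro \
    -v /mnt/tmp:/opt/s3fs/bucket:rshared \
    efrecon/s3fs
```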
By default, this container keeps listing the content of the mounted directory at regular intervals. This is implemented by the command that the image executes once the remote bucket has been mounted. If you do not wish this behaviour, pass empty.sh as the command instead, as shown below. Note that both of these commands ensure that the remote bucket is unmounted from the mountpoint at termination, so you should pick one or the other to allow for proper operation: if the mountpoint were not unmounted, the host's mount table would be left with a stale entry.
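For example, to silence the periodic listing while keeping the unmount-on-termination behaviour, append empty.sh as the command; this sketch reuses the placeholders from the first example.

```shell
docker run -it --rm \
    --device /dev/fuse \
    --cap-add SYS_ADMIN \
    --security-opt "apparmor=unconfined" \
    --env "AWS_S3_BUCKET=<bucketName>" \
    --env "AWS_S3_ACCESS_KEY_ID=<accessKey>" \
    --env "AWS_S3_SECRET_ACCESS_KEY=<secretKey>" \
    -v /mnt/tmp:/opt/s3fs/bucket:rshared \
    efrecon/s3fs empty.sh
```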
Automatic unmounting is achieved through a combination of a trap in the
command being executed and tini. tini is made available directly in this
image to make it possible to run in Swarm environments.
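To illustrate, assuming a container started from one of the commands above is still running, stopping it gracefully should leave no trace of the mount on the host. The container name s3fs-test is hypothetical; name yours with --name or look it up with docker ps.

```shell
# Stop the container; the trap fires and s3fs unmounts the bucket
docker stop s3fs-test

# Verify that the bucket no longer appears in the host's mount table
mount | grep /mnt/tmp || echo "mountpoint cleanly released"
```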
The docker image has tags that automatically match the list of official versions of s3fs. This is achieved by using the GitHub API to discover the list of tags starting with v and building a separate image for each of them. The image itself builds upon alpine.