
manios/zookeeper

Based on commit 9f00dd7 of the official Zookeeper image, version 3.3.6.
Dockerfile links: 3.4.5, latest (3.4.5/Dockerfile).

The image exposes ZooKeeper's client port (2181), follower port (2888), and election port (3888), so standard container linking will make it automatically available to the linked containers. Since ZooKeeper "fails fast" it's better to always restart it.
Connect to Zookeeper from an application in another Docker container:

$ docker run --name some-app --link some-zookeeper:zookeeper -d application-that-uses-zookeeper
Connect to Zookeeper from the Zookeeper command-line client:

$ docker run -it --rm --link some-zookeeper:zookeeper manios/zookeeper zkCli.sh -server zookeeper
Example docker-compose.yml for zookeeper:
version: '2'

services:
    zoo1:
        image: manios/zookeeper:3.4.5
        restart: always
        ports:
            - 2181:2181
            - 2888:2888
            - 3888:3888
        environment:
            ZOO_MY_ID: 1
            ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2889:3889 server.3=zoo3:2890:3890

    zoo2:
        image: manios/zookeeper:3.4.5
        restart: always
        ports:
            - 2182:2181
            - 2889:2889
            - 3889:3889
        environment:
            ZOO_MY_ID: 2
            ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2889:3889 server.3=zoo3:2890:3890

    zoo3:
        image: manios/zookeeper:3.4.5
        restart: always
        ports:
            - 2183:2181
            - 2890:2890
            - 3890:3890
        environment:
            ZOO_MY_ID: 3
            ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2889:3889 server.3=zoo3:2890:3890
This will start Zookeeper in replicated mode. Run docker-compose up and wait for it to initialize completely. Ports 2181-2183 will be exposed.
Please be aware that setting up multiple servers on a single machine will not create any redundancy. If something were to happen which caused the machine to die, all of the zookeeper servers would be offline. Full redundancy requires that each server have its own machine. It must be a completely separate physical server. Multiple virtual machines on the same physical host are still vulnerable to the complete failure of that host.
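The redundancy point above follows from ZooKeeper's majority-quorum rule: an ensemble of n servers stays available only while a strict majority of them is reachable, so it tolerates floor((n - 1) / 2) failures. A quick sketch of the arithmetic (helper names are illustrative, not part of the image):

```python
# Quorum math for a ZooKeeper ensemble: availability requires that a
# strict majority of servers is up.

def quorum_size(n: int) -> int:
    """Smallest number of servers forming a majority of n."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Servers that can fail while a majority remains."""
    return n - quorum_size(n)

for n in (1, 3, 5):
    print(f"ensemble of {n}: quorum {quorum_size(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
```

This is also why ensembles use odd sizes: a 4-server ensemble tolerates no more failures than a 3-server one.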
Consider using Docker Swarm when running Zookeeper in replicated mode.
Zookeeper configuration is located in /conf. One way to change it is mounting your config file as a volume:
$ docker run --name some-zookeeper --restart always -d -v $(pwd)/zoo.cfg:/conf/zoo.cfg manios/zookeeper
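As a starting point, a minimal zoo.cfg consistent with this image might look like the following sketch (the values are the defaults documented below; clientPort 2181 and the /data and /datalog paths match what the rest of this document describes):

```
tickTime=2000
initLimit=5
syncLimit=2
maxClientCnxns=60
clientPort=2181
dataDir=/data
dataLogDir=/datalog
```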
ZooKeeper's recommended defaults are used if a zoo.cfg file is not provided. They can be overridden using the following environment variables:
$ docker run -e "ZOO_INIT_LIMIT=10" --name some-zookeeper --restart always -d manios/zookeeper
ZOO_TICK_TIME
Defaults to 2000. ZooKeeper's tickTime.
The length of a single tick, which is the basic time unit used by ZooKeeper, measured in milliseconds. It is used to regulate heartbeats and timeouts. For example, the minimum session timeout will be two ticks.
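The tick arithmetic can be made concrete in a few lines of Python (the 2-tick minimum is stated above; the 20-tick maximum is ZooKeeper's default maxSessionTimeout, which this image does not set explicitly):

```python
# Session-timeout bounds derived from tickTime.
TICK_TIME_MS = 2000  # ZOO_TICK_TIME default

# Minimum negotiable session timeout is 2 ticks.
min_session_timeout_ms = 2 * TICK_TIME_MS
# Maximum defaults to 20 ticks (ZooKeeper's maxSessionTimeout default).
max_session_timeout_ms = 20 * TICK_TIME_MS

print(f"session timeout range: "
      f"{min_session_timeout_ms}-{max_session_timeout_ms} ms")
```

So with the default tickTime of 2000 ms, client session timeouts are negotiated between 4 and 40 seconds.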
ZOO_INIT_LIMIT
Defaults to 5. ZooKeeper's initLimit.
Amount of time, in ticks (see tickTime), to allow followers to connect and sync to a leader. Increase this value as needed if the amount of data managed by ZooKeeper is large.
ZOO_SYNC_LIMIT
Defaults to 2. ZooKeeper's syncLimit.
Amount of time, in ticks (see tickTime), to allow followers to sync with ZooKeeper. If followers fall too far behind a leader, they will be dropped.
ZOO_MAX_CLIENT_CNXNS
Defaults to 60. ZooKeeper's maxClientCnxns.
Limits the number of concurrent connections (at the socket level) that a single client, identified by IP address, may make to a single member of the ZooKeeper ensemble.
The environment variables below are mandatory if you want to run Zookeeper in replicated mode.
ZOO_MY_ID
The id must be unique within the ensemble and should have a value between 1 and 255. Do note that this variable will not have any effect if you start the container with a /data directory that already contains the myid file.
ZOO_SERVERS
This variable allows you to specify the list of machines in the Zookeeper ensemble. Each entry has the form server.id=host:port:port, and entries are separated by spaces. Do note that this variable will not have any effect if you start the container with a /conf directory that already contains the zoo.cfg file.
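The server.id=host:port:port format (id in 1-255, then hostname, follower port, and election port) can be illustrated with a short parser. This is a hypothetical helper for clarity, not something the image ships:

```python
import re

# One entry: server.<id>=<host>:<followerPort>:<electionPort>
ENTRY = re.compile(r"^server\.(\d+)=([^:]+):(\d+):(\d+)$")

def parse_zoo_servers(value: str) -> dict:
    """Parse a space-separated ZOO_SERVERS string into {id: (host, fport, eport)}."""
    servers = {}
    for entry in value.split():
        m = ENTRY.match(entry)
        if m is None:
            raise ValueError(f"malformed ZOO_SERVERS entry: {entry!r}")
        sid = int(m.group(1))
        if not 1 <= sid <= 255:
            raise ValueError(f"server id must be between 1 and 255: {sid}")
        servers[sid] = (m.group(2), int(m.group(3)), int(m.group(4)))
    return servers

print(parse_zoo_servers(
    "server.1=zoo1:2888:3888 server.2=zoo2:2889:3889 server.3=zoo3:2890:3890"
))
```

Run against the ZOO_SERVERS value from the docker-compose example above, it yields three entries keyed by server id.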
This image is configured with volumes at /data and /datalog to hold the Zookeeper in-memory database snapshots and the transaction log of updates to the database, respectively.
Be careful where you put the transaction log. A dedicated transaction log device is key to consistent good performance. Putting the log on a busy device will adversely affect performance.
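One way to follow that advice with this image is to mount /datalog from a different device than /data. The host paths below are purely illustrative:

```
$ docker run --name some-zookeeper --restart always -d \
    -v /mnt/dedicated-disk/zookeeper-datalog:/datalog \
    -v /var/lib/zookeeper-data:/data \
    manios/zookeeper
```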
View license information for the software contained in this image.