
ClickHouse Keeper is the official distributed coordination service for ClickHouse, distributed as a Docker image. It implements the Apache ZooKeeper protocol and is purpose-built for ClickHouse clusters: it manages the cluster's core metadata (table schemas, partition information, cluster topology, and so on) and provides the high availability, data consistency, and service reliability guarantees that distributed deployments depend on, making it a key building block of any ClickHouse cluster.
```bash
docker pull clickhouse/clickhouse-keeper
```
For test or development environments, start a single-node ClickHouse Keeper with the following command:
```bash
# 2181: client connection port (ZooKeeper-compatible)
# ./keeper-data: host directory for data persistence
docker run -d \
  --name clickhouse-keeper \
  -p 2181:2181 \
  -v $(pwd)/keeper-data:/var/lib/clickhouse-keeper \
  clickhouse/clickhouse-keeper
```
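Once the container is up, it is worth a quick liveness probe before pointing anything at it. A minimal check, assuming ClickHouse Keeper's ZooKeeper-style four-letter-word commands are enabled on the client port (`ruok` is in the default whitelist):

```bash
# A healthy Keeper node answers the "ruok" probe with "imok".
echo ruok | nc localhost 2181
```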
For production, a three-node cluster is recommended (an odd node count avoids split-brain). A docker-compose configuration example:
```yaml
version: '3.8'
services:
  keeper1:
    image: clickhouse/clickhouse-keeper
    container_name: keeper1
    restart: always
    ports:
      - "2181:2181"   # client port
      - "2888:2888"   # inter-node communication port
      - "3888:3888"   # election port
    volumes:
      - ./keeper1/data:/var/lib/clickhouse-keeper   # data directory
      - ./keeper1/config:/etc/clickhouse-keeper     # config directory
    environment:
      - KEEPER_ID=1
      - KEEPER_SERVERS=keeper1:2888:3888;keeper2:2888:3888;keeper3:2888:3888
    networks:
      - clickhouse-net

  keeper2:
    image: clickhouse/clickhouse-keeper
    container_name: keeper2
    restart: always
    ports:
      - "2182:2181"
      - "2889:2888"
      - "3889:3888"
    volumes:
      - ./keeper2/data:/var/lib/clickhouse-keeper
      - ./keeper2/config:/etc/clickhouse-keeper
    environment:
      - KEEPER_ID=2
      - KEEPER_SERVERS=keeper1:2888:3888;keeper2:2888:3888;keeper3:2888:3888
    networks:
      - clickhouse-net

  keeper3:
    image: clickhouse/clickhouse-keeper
    container_name: keeper3
    restart: always
    ports:
      - "2183:2181"
      - "2890:2888"
      - "3890:3888"
    volumes:
      - ./keeper3/data:/var/lib/clickhouse-keeper
      - ./keeper3/config:/etc/clickhouse-keeper
    environment:
      - KEEPER_ID=3
      - KEEPER_SERVERS=keeper1:2888:3888;keeper2:2888:3888;keeper3:2888:3888
    networks:
      - clickhouse-net

networks:
  clickhouse-net:
    driver: bridge
```
Start the cluster:
```bash
docker-compose up -d
```
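After startup, the three nodes hold a leader election: exactly one node should report itself as leader and the other two as followers. A quick check, assuming the four-letter-word `stat` command is enabled (it is whitelisted by default) and the host port mappings from the compose file above:

```bash
# Query each node's role; exactly one should print "Mode: leader".
for port in 2181 2182 2183; do
  echo -n "port $port -> "
  echo stat | nc localhost "$port" | grep Mode
done
```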
4.4.1 Environment Variables
| Environment Variable | Description | Default |
|---|---|---|
| KEEPER_ID | Unique node ID (an integer); must be unique within the cluster | 1 |
| KEEPER_SERVERS | Cluster node list, in the format `host:server_port:election_port;...` | (empty) |
| DATA_DIR | Data persistence directory | /var/lib/clickhouse-keeper |
| CLIENT_PORT | Client connection port (ZooKeeper-compatible) | 2181 |
| SERVER_PORT | Inter-node communication port | 2888 |
| ELECTION_PORT | Election port | 3888 |
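These variables can also be passed directly with `-e` at `docker run` time instead of through a compose file. A sketch for bringing up one node of the three-node cluster by hand, assuming the image honors the variables in the table above and that a user-defined network already exists (`docker network create clickhouse-net`):

```bash
# Run node 2 of the cluster, overriding defaults via environment variables.
docker run -d \
  --name keeper2 \
  --network clickhouse-net \
  -e KEEPER_ID=2 \
  -e KEEPER_SERVERS="keeper1:2888:3888;keeper2:2888:3888;keeper3:2888:3888" \
  -e CLIENT_PORT=2181 \
  -v $(pwd)/keeper2/data:/var/lib/clickhouse-keeper \
  clickhouse/clickhouse-keeper
```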
4.4.2 Configuration File (config.xml)
The main configuration file is located at /etc/clickhouse-keeper/config.xml. Key settings:
```xml
<clickhouse-keeper>
    <server_id>1</server_id>  <!-- node ID; must match KEEPER_ID -->
    <data_path>/var/lib/clickhouse-keeper</data_path>  <!-- data directory -->
    <log_storage_path>/var/lib/clickhouse-keeper/log</log_storage_path>  <!-- log storage directory -->
    <snapshot_storage_path>/var/lib/clickhouse-keeper/snapshots</snapshot_storage_path>  <!-- snapshot directory -->
    <coordination_settings>
        <operation_timeout_ms>10000</operation_timeout_ms>  <!-- operation timeout -->
        <session_timeout_ms>30000</session_timeout_ms>  <!-- session timeout -->
    </coordination_settings>
    <servers>  <!-- cluster membership; must match KEEPER_SERVERS -->
        <server>
            <id>1</id>
            <hostname>keeper1</hostname>
            <port>2888</port>
            <election_port>3888</election_port>
        </server>
        <server>
            <id>2</id>
            <hostname>keeper2</hostname>
            <port>2888</port>
            <election_port>3888</election_port>
        </server>
        <server>
            <id>3</id>
            <hostname>keeper3</hostname>
            <port>2888</port>
            <election_port>3888</election_port>
        </server>
    </servers>
</clickhouse-keeper>
```
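Every node shares the same `<servers>` list but needs its own `server_id`, so a common pattern is to keep one config directory per node on the host and mount it into the container. A sketch using the directory layout from the compose file above:

```bash
# Mount keeper1's prepared config directory (read-only) and its data directory.
docker run -d \
  --name keeper1 \
  -v $(pwd)/keeper1/config:/etc/clickhouse-keeper:ro \
  -v $(pwd)/keeper1/data:/var/lib/clickhouse-keeper \
  clickhouse/clickhouse-keeper
```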
Verify that the service is available with a ZooKeeper client:
```bash
# Run a ZooKeeper client from a container (or use a local installation);
# --network host so that localhost:2181 reaches the port published on the host
docker run --rm -it --network host zookeeper zkCli.sh -server localhost:2181

# After connecting, run the following commands to verify:
ls /                   # list the root node; initially returns an empty list
create /test "hello"   # create a test node
get /test              # read the node's data; should return "hello"
```
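If a ClickHouse server has already been configured to use this Keeper cluster (through the `<zookeeper>` section of its server config), connectivity can also be verified from the ClickHouse side. A sketch, assuming a running server reachable via `clickhouse-client`:

```bash
# Lists the znodes ClickHouse sees through its configured Keeper connection.
clickhouse-client --query "SELECT name, czxid FROM system.zookeeper WHERE path = '/'"
```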
Notes:
- Persist the DATA_DIR directory (e.g. via a volume mount) so that data is not lost when the container restarts.
- Make sure the SERVER_PORT (2888) and ELECTION_PORT (3888) ports are mutually reachable between all cluster nodes.
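A quick way to test that reachability from inside one of the running containers, assuming the image ships bash (the official ClickHouse images are Ubuntu-based):

```bash
# From keeper1, check that the other nodes' server and election ports are open.
# Uses bash's built-in /dev/tcp, so nothing has to be installed in the container.
docker exec keeper1 bash -c '
  for target in keeper2:2888 keeper2:3888 keeper3:2888 keeper3:3888; do
    host=${target%%:*}; port=${target##*:}
    if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
      echo "$target reachable"
    else
      echo "$target NOT reachable"
    fi
  done
'
```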


