
clickhouse/clickhouse-keeper is the official Docker image for ClickHouse Keeper, a distributed coordination service designed specifically for ClickHouse clusters and wire-compatible with the Apache ZooKeeper client protocol. The image packages the ClickHouse Keeper service, which manages a ClickHouse cluster's core metadata (table schemas, partition information, cluster topology, and so on) and provides high availability, data consistency, and service reliability in a distributed deployment. It is the key building block for a distributed ClickHouse cluster.
```bash
docker pull clickhouse/clickhouse-keeper
```
For test or development environments, start a single-node ClickHouse Keeper with:
```bash
# 2181: client port (ZooKeeper-compatible); keeper-data: data persistence directory
docker run -d \
  --name clickhouse-keeper \
  -p 2181:2181 \
  -v "$(pwd)/keeper-data:/var/lib/clickhouse-keeper" \
  clickhouse/clickhouse-keeper
```
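Once the container is up, liveness can be checked with Keeper's ZooKeeper-style four-letter-word commands (`ruok`, `mntr`, and `srvr` are in the default white list) on the client port. A quick sketch, assuming `nc` is installed:

```bash
# Send a four-letter-word command to a Keeper node; prints the reply,
# or "unreachable" if the node is down (or nc is missing).
keeper_4lw() {
  local cmd=$1 host=$2 port=$3
  echo "$cmd" | nc -w 2 "$host" "$port" 2>/dev/null || echo "unreachable"
}

# Against the single node started above; a healthy node replies "imok":
keeper_4lw ruok localhost 2181
```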
For production, a three-node cluster is recommended (an odd number of nodes avoids split-brain). Example docker-compose configuration:
```yaml
version: '3.8'
services:
  keeper1:
    image: clickhouse/clickhouse-keeper
    container_name: keeper1
    restart: always
    ports:
      - "2181:2181"  # client port
      - "2888:2888"  # inter-node communication port
      - "3888:3888"  # election port
    volumes:
      - ./keeper1/data:/var/lib/clickhouse-keeper  # data directory
      - ./keeper1/config:/etc/clickhouse-keeper    # config directory
    environment:
      - KEEPER_ID=1
      - KEEPER_SERVERS=keeper1:2888:3888;keeper2:2888:3888;keeper3:2888:3888
    networks:
      - clickhouse-net
  keeper2:
    image: clickhouse/clickhouse-keeper
    container_name: keeper2
    restart: always
    ports:
      - "2182:2181"
      - "2889:2888"
      - "3889:3888"
    volumes:
      - ./keeper2/data:/var/lib/clickhouse-keeper
      - ./keeper2/config:/etc/clickhouse-keeper
    environment:
      - KEEPER_ID=2
      - KEEPER_SERVERS=keeper1:2888:3888;keeper2:2888:3888;keeper3:2888:3888
    networks:
      - clickhouse-net
  keeper3:
    image: clickhouse/clickhouse-keeper
    container_name: keeper3
    restart: always
    ports:
      - "2183:2181"
      - "2890:2888"
      - "3890:3888"
    volumes:
      - ./keeper3/data:/var/lib/clickhouse-keeper
      - ./keeper3/config:/etc/clickhouse-keeper
    environment:
      - KEEPER_ID=3
      - KEEPER_SERVERS=keeper1:2888:3888;keeper2:2888:3888;keeper3:2888:3888
    networks:
      - clickhouse-net
networks:
  clickhouse-net:
    driver: bridge
```
Start the cluster:
```bash
docker-compose up -d
```
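With the cluster up, each node's client port is mapped on the host (2181/2182/2183), so the `mntr` four-letter-word command can be polled to see each node's Raft role. A sketch assuming `nc` is available; the `zk_server_state` line reports `leader` or `follower`:

```bash
# Print each Keeper node's role by querying 'mntr' on its host-mapped port.
for port in 2181 2182 2183; do
  state=$(echo mntr | nc -w 2 localhost "$port" 2>/dev/null \
            | awk '$1 == "zk_server_state" {print $2}')
  echo "port $port: ${state:-unreachable}"
done
```

Exactly one node should report `leader`; the other two should be followers.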
| Environment variable | Description | Default |
|---|---|---|
| KEEPER_ID | Unique node ID (integer); must be unique within the cluster | 1 |
| KEEPER_SERVERS | Cluster node list, in the form `host:server_port:election_port;...` | (empty) |
| DATA_DIR | Data persistence directory | /var/lib/clickhouse-keeper |
| CLIENT_PORT | Client port (ZooKeeper-compatible) | 2181 |
| SERVER_PORT | Inter-node communication port | 2888 |
| ELECTION_PORT | Election port | 3888 |
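The `KEEPER_SERVERS` value is easy to mistype; a small sketch that assembles it from a host list using the `SERVER_PORT`/`ELECTION_PORT` defaults from the table above:

```bash
# Build a KEEPER_SERVERS string (host:server_port:election_port;...).
HOSTS="keeper1 keeper2 keeper3"
SERVER_PORT=2888
ELECTION_PORT=3888

KEEPER_SERVERS=""
for h in $HOSTS; do
  # Prepend ';' only when the string is non-empty, so there is no leading separator.
  KEEPER_SERVERS="${KEEPER_SERVERS:+$KEEPER_SERVERS;}${h}:${SERVER_PORT}:${ELECTION_PORT}"
done
echo "$KEEPER_SERVERS"
# -> keeper1:2888:3888;keeper2:2888:3888;keeper3:2888:3888
```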
The core configuration file is /etc/clickhouse-keeper/config.xml. Key settings:
```xml
<clickhouse-keeper>
    <server_id>1</server_id>  <!-- node ID; must match KEEPER_ID -->
    <data_path>/var/lib/clickhouse-keeper</data_path>  <!-- data directory -->
    <log_storage_path>/var/lib/clickhouse-keeper/log</log_storage_path>  <!-- log storage directory -->
    <snapshot_storage_path>/var/lib/clickhouse-keeper/snapshots</snapshot_storage_path>  <!-- snapshot directory -->
    <coordination_settings>
        <operation_timeout_ms>***</operation_timeout_ms>  <!-- operation timeout -->
        <session_timeout_ms>30000</session_timeout_ms>    <!-- session timeout -->
    </coordination_settings>
    <servers>  <!-- cluster members; must match KEEPER_SERVERS -->
        <server>
            <id>1</id>
            <hostname>keeper1</hostname>
            <port>2888</port>
            <election_port>3888</election_port>
        </server>
        <server>
            <id>2</id>
            <hostname>keeper2</hostname>
            <port>2888</port>
            <election_port>3888</election_port>
        </server>
        <server>
            <id>3</id>
            <hostname>keeper3</hostname>
            <port>2888</port>
            <election_port>3888</election_port>
        </server>
    </servers>
</clickhouse-keeper>
```
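To make a ClickHouse server use this Keeper cluster, the server's own configuration references the Keeper nodes through the standard `<zookeeper>` section (hostnames and the 2181 client port here match the compose example above); a minimal fragment:

```xml
<clickhouse>
    <zookeeper>
        <node>
            <host>keeper1</host>
            <port>2181</port>
        </node>
        <node>
            <host>keeper2</host>
            <port>2181</port>
        </node>
        <node>
            <host>keeper3</host>
            <port>2181</port>
        </node>
    </zookeeper>
</clickhouse>
```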
Verify service availability with a ZooKeeper client:
```bash
# Use a ZooKeeper client container (or a locally installed client).
# --network host lets localhost reach the host-mapped port on Linux;
# alternatively attach to clickhouse-net and connect to keeper1:2181.
docker run --rm -it --network host zookeeper zkCli.sh -server localhost:2181

# After connecting, run the following to verify:
ls /                  # list the root node; an empty list initially
create /test "hello"  # create a test node
get /test             # read it back; should return "hello"
```
Operational notes: persist the DATA_DIR directory on a host volume so a container restart does not lose data, and make sure the SERVER_PORT (2888) and ELECTION_PORT (3888) ports are mutually reachable between all cluster nodes.
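Because all Keeper state (logs and snapshots) lives under the data directory, a cold backup is simply an archive of that directory taken while the node is stopped. A sketch with illustrative paths matching the compose example:

```bash
# Cold-backup one Keeper node's data directory (paths are illustrative).
DATA_DIR=./keeper1/data
BACKUP_FILE="keeper1-backup-$(date +%Y%m%d).tar.gz"

# Stop the node first so logs and snapshots are consistent:
# docker stop keeper1
mkdir -p "$DATA_DIR"                 # stands in for the real data dir here
tar -czf "$BACKUP_FILE" "$DATA_DIR"  # archive the node's state
# docker start keeper1
```

Restoring is the reverse: stop the node, unpack the archive over the data directory, and start it again.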
