hoist/consul

Hoist's Consul Server

Downloads: 0 · Status: Automated Build · Maintainer: hoist · Type: image · Last updated: 10 years ago

Consul Agent in Docker

This project is a Docker container for Consul. It's a slightly opinionated, pre-configured Consul Agent made specifically to work in the Docker ecosystem.

Getting the container

The container is very small (50MB virtual, based on Busybox) and available on the Docker Index:

$ docker pull progrium/consul

Using the container

Just trying out Consul

If you just want to run a single instance of Consul Agent to try out its functionality:

$ docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap

The Web UI can be enabled by adding the -ui-dir flag:

$ docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap -ui-dir /ui

We publish 8400 (RPC), 8500 (HTTP), and 8600 (DNS) so you can try all three interfaces. We also give it a hostname of node1. Setting the container hostname is the intended way to name the Consul Agent node.

Our recommended interface is HTTP using curl:

$ curl localhost:8500/v1/catalog/nodes

We can also use dig to interact with the DNS interface:

$ dig @0.0.0.0 -p 8600 node1.node.consul

However, if you install Consul on your host, you can use the CLI to interact with the containerized Consul Agent:

$ consul members

Testing a Consul cluster on a single host

If you want to start a Consul cluster on a single host to experiment with clustering dynamics (replication, leader election), here is the recommended way to start a 3-node cluster.

Here we start the first node not with -bootstrap, but with -bootstrap-expect 3, which will wait until there are 3 peers connected before self-bootstrapping and becoming a working cluster.

$ docker run -d --name node1 -h node1 progrium/consul -server -bootstrap-expect 3

We can get the container's internal IP by inspecting the container. We'll put it in the env var JOIN_IP.

$ JOIN_IP="$(docker inspect -f '{{.NetworkSettings.IPAddress}}' node1)"

Then we'll start node2 and tell it to join node1 using $JOIN_IP:

$ docker run -d --name node2 -h node2 progrium/consul -server -join $JOIN_IP

Now we can start node3 the same way:

$ docker run -d --name node3 -h node3 progrium/consul -server -join $JOIN_IP

We now have a real three node cluster running on a single host. Notice we've also named the containers after their internal hostnames / node names.

We haven't published any ports to access the cluster, but we can use that as an excuse to run a fourth agent node in "client" mode (dropping the -server). This means it doesn't participate in the consensus quorum, but can still be used to interact with the cluster. It also means it doesn't need disk persistence.

$ docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp --name node4 -h node4 progrium/consul -join $JOIN_IP

Now we can interact with the cluster on those published ports and, if you want, play with killing, adding, and restarting nodes to see how the cluster handles it.
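
For example, a quick sketch using the published ports and the node names from above (/v1/status/leader is Consul's leader status endpoint):

$ curl localhost:8500/v1/catalog/nodes
$ docker stop node2     # kill a server and watch the cluster react
$ curl localhost:8500/v1/status/leader
$ docker start node2    # the node re-joins and the catalog recovers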

Running a real Consul cluster in a production environment

Setting up a real cluster on separate hosts is very similar to our single host cluster setup process, but with a few differences:

  • We assume there is a private network between hosts. Each host should have an IP on this private network
  • We're going to pass this private IP to Consul via the -advertise flag
  • We're going to publish all ports, including internal Consul ports (8300, 8301, 8302), on this IP
  • We set up a volume at /data for persistence. As an example, we'll bind mount /mnt from the host

Assuming we're on a host with a private IP of 10.0.1.1, and the IP of the docker0 bridge is 172.17.42.1, we can start the first host agent:

$ docker run -d -h node1 -v /mnt:/data \
	-p 10.0.1.1:8300:8300 \
	-p 10.0.1.1:8301:8301 \
	-p 10.0.1.1:8301:8301/udp \
	-p 10.0.1.1:8302:8302 \
	-p 10.0.1.1:8302:8302/udp \
	-p 10.0.1.1:8400:8400 \
	-p 10.0.1.1:8500:8500 \
	-p 172.17.42.1:53:53/udp \
	progrium/consul -server -advertise 10.0.1.1 -bootstrap-expect 3

On the second host, we'd run the same thing, but passing a -join to the first node's IP. Let's say the private IP for this host is 10.0.1.2:

$ docker run -d -h node2 -v /mnt:/data  \
	-p 10.0.1.2:8300:8300 \
	-p 10.0.1.2:8301:8301 \
	-p 10.0.1.2:8301:8301/udp \
	-p 10.0.1.2:8302:8302 \
	-p 10.0.1.2:8302:8302/udp \
	-p 10.0.1.2:8400:8400 \
	-p 10.0.1.2:8500:8500 \
	-p 172.17.42.1:53:53/udp \
	progrium/consul -server -advertise 10.0.1.2 -join 10.0.1.1

And the third host with an IP of 10.0.1.3:

$ docker run -d -h node3 -v /mnt:/data  \
	-p 10.0.1.3:8300:8300 \
	-p 10.0.1.3:8301:8301 \
	-p 10.0.1.3:8301:8301/udp \
	-p 10.0.1.3:8302:8302 \
	-p 10.0.1.3:8302:8302/udp \
	-p 10.0.1.3:8400:8400 \
	-p 10.0.1.3:8500:8500 \
	-p 172.17.42.1:53:53/udp \
	progrium/consul -server -advertise 10.0.1.3 -join 10.0.1.1

That's it! Once this last node connects, it will bootstrap into a cluster. You now have a working cluster running in production on a private network.
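
To verify from any of the hosts that the cluster actually formed, you can ask the status endpoints (a quick check using the first host's advertise IP from above):

$ curl 10.0.1.1:8500/v1/status/leader    # prints the elected leader, e.g. "10.0.1.1:8300"
$ curl 10.0.1.1:8500/v1/status/peers     # should list all three server addresses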

Special Features

Runner command

Since the docker run command to start in production is so long, a command is available to generate it for you. Running with cmd:run <advertise-ip>[::<join-ip>[::client]] [docker-run-args...] will output an opinionated but customizable docker run command that you can run in a subshell. For example:

$ docker run --rm progrium/consul cmd:run 10.0.1.1 -d

Outputs:

eval docker run --name consul -h $HOSTNAME 	\
	-p 10.0.1.1:8300:8300 \
	-p 10.0.1.1:8301:8301 \
	-p 10.0.1.1:8301:8301/udp \
	-p 10.0.1.1:8302:8302 \
	-p 10.0.1.1:8302:8302/udp \
	-p 10.0.1.1:8400:8400 \
	-p 10.0.1.1:8500:8500 \
	-p 172.17.42.1:53:53/udp \
	-d 	\
	progrium/consul -server -advertise 10.0.1.1 -bootstrap-expect 3

By design, it will set the container's hostname to your host hostname, name the container consul (though this can be overridden), bind port 53 to the Docker bridge, and bind the rest of the ports to the advertise IP. If no join IP is provided, it runs in -bootstrap-expect mode with a default of 3 expected peers. Here is another example, specifying a join IP and setting more docker run arguments:

$ docker run --rm progrium/consul cmd:run 10.0.1.1::10.0.1.2 -d -v /mnt:/data

Outputs:

eval docker run --name consul -h $HOSTNAME 	\
	-p 10.0.1.1:8300:8300 \
	-p 10.0.1.1:8301:8301 \
	-p 10.0.1.1:8301:8301/udp \
	-p 10.0.1.1:8302:8302 \
	-p 10.0.1.1:8302:8302/udp \
	-p 10.0.1.1:8400:8400 \
	-p 10.0.1.1:8500:8500 \
	-p 172.17.42.1:53:53/udp \
	-d -v /mnt:/data \
	progrium/consul -server -advertise 10.0.1.1 -join 10.0.1.2

You may notice it only lets you run with bootstrap-expect or join, not both. Using cmd:run assumes you will be bootstrapping with the first node and expecting 3 nodes. You can change the expected peers before bootstrap by setting the EXPECT environment variable, as shown below.
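
For example, to bootstrap with 5 expected peers instead of the default 3, you could set EXPECT on the generating container (a sketch; this assumes cmd:run reads the variable from its own environment):

$ docker run --rm -e EXPECT=5 progrium/consul cmd:run 10.0.1.1 -d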

To use this convenience, you simply wrap the cmd:run output in a subshell. Run this to see it work:

$ $(docker run --rm progrium/consul cmd:run 127.0.0.1 -it)

Client flag

Client nodes allow you to keep growing your cluster without impacting the performance of the underlying gossip protocol (they proxy requests to one of the server nodes and so are stateless).

To boot a client node using the runner command, append the string ::client onto the <advertise-ip>::<join-ip> argument. For example:

$ docker run --rm progrium/consul cmd:run 10.0.1.4::10.0.1.2::client -d

Would create the same output as above but without the -server consul argument.

Health checking with Docker

Consul lets you specify a shell script to run for health checks, similar to Nagios. As a container, those scripts run inside this container's environment, which is a minimal Busybox system with bash and curl. For some, this is fairly limiting, so I've added some built-in convenience scripts to do proper health checking in a Docker system.

These all require you to mount the host's Docker socket to /var/run/docker.sock when you run the Consul container.
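
For example, the single-instance command from earlier with the socket mounted might look like this:

$ docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp \
	-v /var/run/docker.sock:/var/run/docker.sock \
	-h node1 progrium/consul -server -bootstrap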

Using check-http

check-http <container-id> <port> <path> [curl-args...]

This utility performs curl-based HTTP health checking given a container ID or name, an internal port (what the service is actually listening on inside the container), and a path. You can optionally provide extra arguments to curl.

The HTTP request is done in a separate ephemeral container that is attached to the target container's network namespace. The utility automatically determines the internal Docker IP to run the request against. A successful request will output the response headers into Consul. An unsuccessful request will output the reason the request failed and set the check to critical. By default, curl runs with --retry 2 to cover local transient errors.
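
For example, you might register a check that wraps check-http via Consul's agent API (a sketch using the old v1 check-register format; the container name myapp, its port, and the /health path are illustrative):

$ curl -X PUT localhost:8500/v1/agent/check/register \
	-d '{"Name": "myapp-http", "Script": "check-http myapp 8000 /health", "Interval": "10s"}'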

Using check-cmd

check-cmd <container-id> <port> <command...>

This utility runs the specified command in a separate ephemeral container, based on the target container's image and attached to that container's network namespace. Very often this will be a health check script, but it can be anything that can run as a command on that container's image. For convenience, an environment variable SERVICE_ADDR is set to the internal Docker IP and the port specified here.
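
As a sketch, a check that runs a hypothetical health script baked into the myapp image could be registered the same way:

$ curl -X PUT localhost:8500/v1/agent/check/register \
	-d '{"Name": "myapp-cmd", "Script": "check-cmd myapp 8000 ./health.sh", "Interval": "15s"}'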

Using docker

The above health check utilities need the Docker binary, so it's already built into the container. If neither of the above fits your needs and the container environment is too limiting, you can run Docker operations directly to perform any containerized health check.
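
For instance, a minimal check script could shell out to docker directly (a sketch; myapp is an illustrative container name):

# exits non-zero, marking the check critical, if the container isn't running
$ docker inspect -f '{{.State.Running}}' myapp | grep -q true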

DNS

This container was designed assuming you'll be using it for DNS on your other containers. So it listens on port 53 inside the container to be more compatible and accessible via linking. It also has DNS recursive queries enabled, using the Google 8.8.8.8 nameserver.

When running with cmd:run, it publishes the DNS port on the Docker bridge. You can use this with the --dns flag in docker run, or better yet, use it with the Docker daemon options. Here is a command you can run on Ubuntu systems that will tell Docker to use the bridge IP for DNS, otherwise use Google DNS, and use service.consul as the search domain.

$ echo "DOCKER_OPTS='--dns 172.17.42.1 --dns 8.8.8.8 --dns-search service.consul'" >> /etc/default/docker

If you're using boot2docker on OS X rather than an Ubuntu host, the Docker containers run inside its Tiny Core Linux VM. Use this command to set the extra Docker daemon options (as of boot2docker v1.3.1); it also picks up the first DNS nameserver that your OS X machine uses for name resolution outside of the boot2docker world.

$ boot2docker ssh sudo "ash -c \"echo EXTRA_ARGS=\'--dns 172.17.42.1 --dns $(scutil --dns | awk -F ': ' '/nameserver/{print $2}' | head -1) --dns-search service.consul\' > /var/lib/boot2docker/profile\""

With those extra options in place, every Docker container automatically gets the appropriate entries in its /etc/resolv.conf file. To test it out, start a Docker container that has the dig utility installed (this example uses aanand/docker-dnsutils, which is the Ubuntu image with dnsutils installed).

$ docker run --rm aanand/docker-dnsutils dig -t SRV consul +search

Runtime Configuration

Although you can extend this image to add configuration files to define services and checks, this container was designed for environments where services and checks can be configured at runtime via the HTTP API.

It's recommended you keep your check logic simple, such as using inline curl or ping commands. Otherwise, keep in mind the default shell is Bash, but you're running in Busybox.
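
For example, registering a service with an inline curl check via the HTTP API might look like this (a sketch in the old v1 agent API format; the service name web and its port are illustrative):

$ curl -X PUT localhost:8500/v1/agent/service/register \
	-d '{"Name": "web", "Port": 8000, "Check": {"Script": "curl -s localhost:8000 >/dev/null", "Interval": "10s"}}'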

If you absolutely need to customize startup configuration, you can extend this image by making a new Dockerfile based on this one and having a config directory containing config JSON files. They will be added to the image you build via ONBUILD hooks. You can also add packages with opkg. See docs on the Busybox image for more info.
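
A minimal sketch of such an extension, assuming the ONBUILD hooks pick up a ./config directory sitting next to your Dockerfile (the file name web-service.json is illustrative):

$ cat Dockerfile
FROM progrium/consul
$ ls config/
web-service.json
$ docker build -t my-consul .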

Issue with quickly restarting a node using the same IP

When testing a cluster scenario, you may kill a container and restart it again on the same host and see that it has trouble re-joining the cluster.

There is an issue when you restart a node as a new container with the same published ports: heartbeats will fail and the node will flap. This is an ARP table caching problem. If you wait about 3 minutes before starting again, it should work fine. You can also manually reset the cache.
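
One way to reset it manually (a sketch; run on the affected host, requires root, and the exact approach depends on your setup):

# flush stale ARP entries so the restarted node's new MAC address is picked up
$ ip -s -s neigh flush all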

Sponsor

This project was made possible thanks to DigitalOcean.

License

BSD
