# modelrockettier/corosync-qnetd

Sets up a Corosync v3 QNet Daemon for use with Proxmox v6.
This allows you to deploy an external voter on a server that is not running Proxmox (e.g. a NAS). The external voter mainly serves to break ties (e.g. if the cluster has an even number of nodes).
See <[***]> for more information.
NOTE: This container does not have an SSH server installed, so setting it up is a bit more involved than a simple `pvecm qdevice setup`.
## Quick start

This assumes that you have at least 2 Proxmox nodes and a separate docker server.
You will also need to set a few environment variables (or manually replace them with the appropriate values in the commands below):
- `CLUSTER_NAME`: The name of your Proxmox cluster (must match the `cluster_name` key in
  `/etc/corosync/corosync.conf` on your Proxmox nodes).
  E.g. `CLUSTER_NAME=cluster1`
- `PROXMOX_NODE`: The credentials to ssh into your first Proxmox node (usually `user@host`
  or `user@ip`).
  E.g. `PROXMOX_NODE=root@proxmox1`
- `QNETD_DATA`: Where to store the corosync-qnetd config data on the docker host.
  E.g. `QNETD_DATA=/etc/corosync-data`
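For example, using the sample values above (substitute your own cluster name, node, and data path), you can set all three in the shell where you'll run the commands:

```sh
# Example values only -- adjust to your environment
export CLUSTER_NAME=cluster1
export PROXMOX_NODE=root@proxmox1
export QNETD_DATA=/etc/corosync-data
```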
The general flow is to set up the qnet device with 1 Proxmox node, then to add the rest of the Proxmox nodes afterwards.
You will need to run some commands on the docker host and some on the initial Proxmox node. Instructions below are prefixed with [docker] and [proxmox] respectively depending on where they need to be run.
It makes no difference which Proxmox node you pick for the initial set up, but being able to directly transfer files (e.g. via scp) between it and the docker host will make it easier.
1. [docker] Pull the docker corosync-qnetd container (or build it from this repo):

    ```sh
    docker pull modelrockettier/corosync-qnetd
    ```
2. [docker] Create and start the docker corosync-qnetd container:

    ```sh
    docker run -d --name=qnetd --cap-drop=ALL -p 5403:5403 \
        -v ${QNETD_DATA}:/etc/corosync modelrockettier/corosync-qnetd
    ```
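    An optional sanity check (not part of the original steps): confirm the container came up cleanly with standard docker commands:

    ```sh
    docker ps --filter name=qnetd   # the container should show as "Up"
    docker logs qnetd               # check for startup errors
    ```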
3. [docker] Copy the QNetd CA certificate to the first Proxmox node:

    ```sh
    scp ${QNETD_DATA}/qnetd/nssdb/qnetd-cacert.crt \
        ${PROXMOX_NODE}:/etc/pve/corosync/qdevice/net/nssdb/
    ```
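    If that `scp` fails because the destination directory doesn't exist yet (whether it does depends on your Proxmox setup; this pre-step is my addition, not part of the original guide), you can create it first:

    ```sh
    # Hypothetical pre-step: create the nssdb directory on the Proxmox node if missing
    ssh ${PROXMOX_NODE} mkdir -p /etc/pve/corosync/qdevice/net/nssdb
    ```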
    NOTE: Since Proxmox automatically synchronizes the `/etc/pve` directory across
    the cluster, it's easiest to just copy the CA certificate there on 1 node, and
    it'll automatically propagate to all other nodes. This way you won't need to
    copy it over to the other nodes individually.

4. [proxmox] Install the corosync-qdevice package on the first Proxmox node:

    ```sh
    apt-get install corosync-qdevice
    ```
5. [proxmox] Start and enable the corosync-qdevice service on the first Proxmox node:

    ```sh
    systemctl start corosync-qdevice
    systemctl enable corosync-qdevice
    ```

    NOTE: If the service fails to start, you may need to fix the init script
    `/etc/init.d/corosync-qdevice` and try again. See <[***]>.
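    A quick way to confirm the service actually started (my addition, not from the original guide):

    ```sh
    systemctl status corosync-qdevice
    journalctl -u corosync-qdevice --no-pager -n 20
    ```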
6. [proxmox] Initialize the corosync-qdevice certificate database on the first Proxmox node:

    ```sh
    corosync-qdevice-net-certutil -i \
        -c /etc/pve/corosync/qdevice/net/nssdb/qnetd-cacert.crt
    ```
7. [proxmox] Generate a certificate signing request on the first Proxmox node:

    ```sh
    corosync-qdevice-net-certutil -r -n ${CLUSTER_NAME}
    ```
8. [docker] Copy the certificate signing request to the corosync config directory:

    ```sh
    scp ${PROXMOX_NODE}:/etc/corosync/qdevice/net/nssdb/qdevice-net-node.crq \
        ${QNETD_DATA}/qnetd/nssdb/
    ```
9. [docker] Sign the certificate from the corosync-qnetd container:

    ```sh
    docker exec qnetd \
        corosync-qnetd-certutil -s -n ${CLUSTER_NAME} \
        -c /etc/corosync/qnetd/nssdb/qdevice-net-node.crq
    ```
10. [docker] Copy the newly generated certificate back to the first Proxmox node:

    ```sh
    scp ${QNETD_DATA}/qnetd/nssdb/cluster-${CLUSTER_NAME}.crt \
        ${PROXMOX_NODE}:/etc/pve/corosync/qdevice/net/nssdb/
    ```
11. [proxmox] Import the certificate on the first Proxmox node (run this from
    `/etc/pve/corosync/qdevice/net/nssdb`, where the certificate was copied, or
    pass the full path to `-c`):

    ```sh
    corosync-qdevice-net-certutil -M -c cluster-${CLUSTER_NAME}.crt
    ```
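    To double-check that the certificate landed in the qdevice database (an extra verification step of mine; `certutil` is the NSS tool, typically from the `libnss3-tools` package):

    ```sh
    # List the certificates in the qdevice NSS database
    certutil -L -d /etc/corosync/qdevice/net/nssdb
    ```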
12. [proxmox] Copy the output qdevice-net-node.p12 to all other Proxmox nodes:

    ```sh
    cp -v /etc/corosync/qdevice/net/nssdb/qdevice-net-node.p12 \
        /etc/pve/corosync/qdevice/net/nssdb/
    ```

    NOTE: This takes advantage of the `/etc/pve` synchronization discussed in step 3.

13. [proxmox] Set up all other Proxmox nodes:
    Repeat steps 4-6 above on each remaining node:

    - Install the corosync-qdevice package
    - Start and enable the corosync-qdevice service
    - Initialize the corosync-qdevice certificate database:

      ```sh
      corosync-qdevice-net-certutil -i \
          -c /etc/pve/corosync/qdevice/net/nssdb/qnetd-cacert.crt
      ```

    Then import the corosync cluster certificate and key on each node:

    ```sh
    corosync-qdevice-net-certutil -m \
        -c /etc/pve/corosync/qdevice/net/nssdb/qdevice-net-node.p12
    ```

    (See the loop sketch below for one way to script these per-node commands.)
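    If you have many nodes, the per-node steps can be scripted. A minimal sketch, assuming passwordless root SSH to each node and a `NODES` list you define yourself (both are my assumptions, not part of the original guide):

    ```sh
    # Hypothetical node list -- replace with your remaining Proxmox nodes
    NODES=( root@proxmox2 root@proxmox3 )

    for node in "${NODES[@]}"; do
        ssh "$node" "apt-get install -y corosync-qdevice &&
            systemctl start corosync-qdevice &&
            systemctl enable corosync-qdevice &&
            corosync-qdevice-net-certutil -i \
                -c /etc/pve/corosync/qdevice/net/nssdb/qnetd-cacert.crt &&
            corosync-qdevice-net-certutil -m \
                -c /etc/pve/corosync/qdevice/net/nssdb/qdevice-net-node.p12"
    done
    ```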
14. [proxmox] Add the qdevice config to /etc/pve/corosync.conf on the first Proxmox node.

    Edit /etc/pve/corosync.conf and find the quorum section:

    ```
    quorum {
      provider: corosync_votequorum
    }
    ```

    Change it to the following (set ${DOCKER_HOST} to the hostname or IP of your docker host):

    ```
    quorum {
      provider: corosync_votequorum
      device {
        model: net
        votes: 1
        net {
          tls: on
          host: ${DOCKER_HOST}
          algorithm: ffsplit
        }
      }
    }
    ```
See <[***]> for more info.
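One Proxmox-specific gotcha worth knowing: the Proxmox documentation says to increment `config_version` in the `totem` section whenever you edit `/etc/pve/corosync.conf`, since corosync uses it to decide which configuration is newest. A sketch of the relevant fragment (the surrounding values are placeholders based on the examples above):

```
totem {
  cluster_name: cluster1    # placeholder, matches ${CLUSTER_NAME}
  config_version: 2         # increment this number on every edit
  ...
}
```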
15. [proxmox] Restart the corosync-qdevice service on all Proxmox nodes:

    ```sh
    systemctl restart corosync-qdevice
    ```
16. [docker] Ensure corosync-qnetd is working properly.

    The number of connected clients should be equal to the number of Proxmox nodes online.
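    The checks below mirror step 6 of the alternative setup at the end of this README, where example output is shown:

    ```sh
    # On the docker host: client/cluster counts should match your cluster
    docker exec qnetd corosync-qnetd-tool -s

    # On any Proxmox node: the votequorum output should show a Qdevice vote
    pvecm status
    ```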
## Alternative setup (untested)

This should work and be a bit quicker and easier than the above quick start guide, but it hasn't been tested and requires your Proxmox nodes to be able to SSH into your docker host.
Your docker host must have an SSH server installed and the Proxmox node used in step 5 must be able to SSH into your docker server.
You will also need to set a few environment variables (or manually replace them with the appropriate values in the commands below):
- `CLUSTER_NAME`: The name of your Proxmox cluster (must match the `cluster_name` key in
  `/etc/corosync/corosync.conf` on your Proxmox nodes).
  E.g. `CLUSTER_NAME=pm-cluster-1`
- `DOCKER_HOST`: The hostname or IP address of your docker host.
  E.g. `DOCKER_HOST=docker1`
- `PROXMOX_NODES`: The hostnames or IP addresses of your Proxmox nodes.
  E.g. `PROXMOX_NODES=( proxmox1 proxmox2 )`
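As before, the example values can be set in one go. Note that `PROXMOX_NODES` is a bash array, so this assumes a bash shell; also, `DOCKER_HOST` is read by the docker CLI itself, so only set it in shells where you aren't running docker commands:

```sh
# Example values only -- substitute your own
CLUSTER_NAME=pm-cluster-1
DOCKER_HOST=docker1
PROXMOX_NODES=( proxmox1 proxmox2 )
```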
1. On all Proxmox nodes, install the corosync-qdevice package:

    ```sh
    apt-get install corosync-qdevice
    ```

2. On all Proxmox nodes, start and enable the corosync-qdevice service:

    ```sh
    systemctl start corosync-qdevice
    systemctl enable corosync-qdevice
    ```
3. On the docker host, create and start the docker corosync-qnetd container:

    ```sh
    docker run -d --name=qnetd --cap-drop=ALL -p 5403:5403 \
        -v /etc/corosync:/etc/corosync modelrockettier/corosync-qnetd
    ```

    NOTE: This stores the corosync-qnetd config data in `/etc/corosync` on the docker host.

4. On the docker host, copy the QNetd tools into the `$PATH`:

    ```sh
    sudo docker cp qnetd:/usr/bin/corosync-qnetd-tool /usr/local/bin/
    sudo docker cp qnetd:/usr/bin/corosync-qnetd-certutil /usr/local/bin/
    ```
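    The tools are copied presumably because `pvecm qdevice setup` invokes them on the qnetd host over SSH. A quick sanity check that they now resolve in `$PATH` (my addition):

    ```sh
    command -v corosync-qnetd-tool corosync-qnetd-certutil
    ```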
5. From a Proxmox node, run the Proxmox cluster qdevice setup:

    ```sh
    pvecm qdevice setup ${DOCKER_HOST}
    ```

    NOTE: The corosync-qdevice-net-certutil quick setup may also work (again, this is untested):

    ```sh
    corosync-qdevice-net-certutil -Q -n ${CLUSTER_NAME} ${DOCKER_HOST} ${PROXMOX_NODES[@]}
    ```
6. On the docker host, ensure corosync-qnetd is working properly.

    The number of connected clients should be equal to the number of Proxmox nodes online.

    ```sh
    docker exec qnetd corosync-qnetd-tool -s
    ```

    Example output:

    ```
    QNetd address:          *:5403
    TLS:                    Supported (client certificate required)
    Connected clients:      2
    Connected clusters:     1
    ```