
instantlinux/mariadb-galera
MariaDB 12.x with automatic cluster generation under Kubernetes / Swarm, using named volumes for data persistence. It has robust bootstrap logic, based on the MariaDB / Galera documentation, for automated cluster create / join operations. Requires an etcd instance for sharing instance-health data across the cluster.
Define the following dependencies before launching the cluster: a root password, a network load balancer, and a dedicated etcd key-value store. Here's how:
Create a random root password:
```
SECRET=mysql-root-password
PW=$(uuidgen | base64)
cat >/dev/shm/new.yaml <<EOT
---
apiVersion: v1
data:
  $SECRET: $PW
kind: Secret
metadata:
  name: $SECRET
  namespace: \$K8S_NAMESPACE
type: Opaque
EOT
sekret enc /dev/shm/new.yaml >secrets/$SECRET
rm /dev/shm/new.yaml
```
You can use a tool like sops or sekret to generate the secrets file.
Set any local my.cnf values in files under a volume mounted at /etc/mysql/my.cnf.d (mapped as $ADMIN_PATH/mariadb/etc/). Use a ConfigMap when running under Kubernetes (an example is included).
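As a sketch of that ConfigMap approach (the ConfigMap name, file name, and settings below are illustrative, not taken from this repo's included example):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadb-local-cnf
data:
  # Any *.cnf file mounted under /etc/mysql/my.cnf.d is read at startup
  local.cnf: |
    [mysqld]
    max_connections = 250
    slow_query_log  = 1
```

Mount it into the pod with a configMap-type volume whose mountPath is /etc/mysql/my.cnf.d.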
The container exposes ports 3306, 4444, 4567 and 4568 on the ingress network. An internal network is needed for cluster-sync traffic and/or backups (use mariabackup, or the mysqldump container provided here). To enable connections directly to each cluster member, for write-safe access or troubleshooting, recent versions of Docker let you override the ingress load balancing like this:
```
version: "3.2"
services:
  db:
    ...
    ports:
    - target: 3306
      published: <port>
      protocol: tcp
      mode: host
```
You almost definitely want a separate load balancer for serving your published port. This method is defined and documented here in kubernetes.yaml.
With this technology, write performance is limited to the I/O throughput of the slowest single node in the cluster. Read performance can be scaled across the full cluster and is limited only by network capacity.
If you set up a cluster and spread database write traffic across all nodes, performance will be worse than with a single node, because of the issues described in multi-master conflicts. Your logs will have messages like these:
WSREP: MDL conflict db=jira7 table=rundetails ticket=6 solved by abort
and the cluster won't provide stable performance. To make a long story short: direct all write traffic to a single node and spread read traffic across the others, using two DNS entries (one resolving to the write node, one to the read nodes).
For Docker Swarm users, this exercise is left to the reader. For Kubernetes, the kubernetes.yaml and Makefile provided here will automate these steps once you've set up the two DNS entries.
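Under Kubernetes, one way to implement the two entries is a pair of Services over the same StatefulSet. This is a sketch only; the Service names, labels, and pod name below are illustrative, not taken from this repo's kubernetes.yaml:

```yaml
# db-read: spreads read traffic across every cluster member
apiVersion: v1
kind: Service
metadata:
  name: db-read
spec:
  selector:
    app: mariadb-galera
  ports:
  - port: 3306
---
# db-write: pins all writes to a single member, selected by the
# per-pod label Kubernetes adds to StatefulSet pods
apiVersion: v1
kind: Service
metadata:
  name: db-write
spec:
  selector:
    statefulset.kubernetes.io/pod-name: db00-0
  ports:
  - port: 3306
```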
Logs are sent to stdout / stderr with one exception: the slow query log. Add a volume mount of /var/log/mysql if you want to preserve that log.
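In a compose service definition, that mount could look like this (the volume name mysql-log is illustrative):

```yaml
services:
  db:
    volumes:
    # Preserve the slow query log across container restarts
    - mysql-log:/var/log/mysql
volumes:
  mysql-log:
```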
See the k8s/Makefile for a make etcd target to start etcd under Kubernetes. A docker-compose service definition is available at docker-tools/services/etcd. Instructions for using the free discovery.etcd.io bootstrap service are given there.
This repo has complete instructions for building a Kubernetes cluster, where you can launch with helm or kubernetes.yaml using make, customizing Makefile.vars after cloning this repo:
```
git clone [***]
cd docker-tools/k8s
# This make target is defined in Makefile.instances
make db00
```
When taking the database down, wait for all pods to stop, and then clear etcd entries for the cluster:
```
CLUSTER=db00
ETCD_HOST=10.101.1.19
etcdctl --endpoints=$ETCD_HOST:2379 del --prefix /galera/$CLUSTER
```
Then launch with the helm chart or docker-compose.
This was originally developed under Docker Swarm, and the docker-compose file is a legacy of that original work. Before stack-deploying it, invoke docker secret create to generate the secret mysql-root-password, and define an ADMIN_PATH environment variable pointing to your my.cnf (it has to be in the same location on each Docker node).
| Variable | Default | Description |
|---|---|---|
| CLUSTER_JOIN | | join address; usually not needed |
| CLUSTER_NAME | cluster01 | cluster name |
| CLUSTER_SIZE | 3 | expected number of nodes |
| DISCOVERY_SERVICE | etcd:2379 | etcd host list, e.g. etcd1:2379,etcd2:2379 |
| LOG_LEVEL | info | set to debug for additional logging |
| REINSTALL_OK | | set to any value to enable reinstall over old volume |
| ROOT_SECNAME | mysql-root-password | name of secret for password |
| TTL | 10 | longevity (in seconds) of keys posted to etcd |
| TZ | UTC | timezone |
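As an example, a compose-style environment block overriding a few of these defaults might look like this (values are illustrative):

```yaml
services:
  db:
    environment:
      CLUSTER_NAME: cluster01
      CLUSTER_SIZE: "3"
      # Two etcd endpoints, comma-separated
      DISCOVERY_SERVICE: etcd1:2379,etcd2:2379
      LOG_LEVEL: debug
      TZ: UTC
```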
When creating this image (in early 2017), DB clustering under Docker Swarm was still in its infancy, and I could not find a clustering solution that would restart automatically without problems (like split-brain, or just never coming up) across a simple "docker stack deploy ; docker stack rm ; docker stack deploy" repeated test cycle. This addresses that problem, using a minimal distro (tried Alpine Linux, wound up having to use Debian). I like MariaDB better than the MySQL / Percona solutions, after a few years of running MariaDB and a decade-plus of running MySQL. A couple of years later, there's still no better alternative.
Galera is finicky upon restarts so it requires a fair amount of logic to handle edge cases.
This container image is intended to be run in a 3-node, 5-node, or larger configuration. It requires a stable etcd configuration for node discovery and master election at restart. A single instance can be invoked without HA resources using kubernetes-single.yaml.
There is no supported etcd3 library for python3 (as of Oct 2025). For now, this is using python-etcd3 0.12.0, last updated in 2020, with PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION set for compatibility.
Thanks to ashraf-s9s of severalnines for the healthcheck script.
If you want to make improvements to this image, see CONTRIBUTING.
