
# gluster/gluster-csi-driver

This repository contains the CSI (Container Storage Interface) driver for Gluster. CSI is an industry standard that standardizes cluster-wide volume plugins, so a storage provider (SP) can develop a plugin once and have it work across multiple container orchestration (CO) systems.
The repository holds the source code and Dockerfile for building the GlusterFS CSI driver. The build is a multi-stage container build and requires a recent version of Docker or Buildah.
Clone the repository and change into its directory:
```bash
[root@localhost]# git clone [***]
[root@localhost]# cd gluster-csi-driver
```
Run the build script:
```bash
[root@localhost]# ./build.sh
```
Before using the CSI driver, complete the following environment preparation:
Note: the environment above can be deployed in one step with the GCS tooling, so the components do not need to be installed separately. See the GCS deployment guide for details.
Deploy the virtual-block CSI driver:

```bash
[root@localhost]# cd examples/kubernetes/gluster-virtblock/
[root@localhost]# kubectl create -f csi-deployment.yaml
service/csi-attacher-glustervirtblockplugin created
statefulset.apps/csi-attacher-glustervirtblockplugin created
daemonset.apps/csi-nodeplugin-glustervirtblockplugin created
service/csi-provisioner-glustervirtblockplugin created
statefulset.apps/csi-provisioner-glustervirtblockplugin created
serviceaccount/glustervirtblock-csi created
clusterrole.rbac.authorization.k8s.io/glustervirtblock-csi created
clusterrolebinding.rbac.authorization.k8s.io/glustervirtblock-csi-role created
```
Define the storage class:

```yaml
# storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glustervirtblock-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: org.gluster.glustervirtblock
```
```bash
[root@localhost]# kubectl create -f storage-class.yaml
storageclass.storage.k8s.io/glustervirtblock-csi created
```
Verify that the storage class was created:
```bash
[root@localhost]# kubectl get storageclass
NAME                   PROVISIONER                    AGE
glustervirtblock-csi   org.gluster.glustervirtblock   6s
```
Create a persistent volume claim:

```yaml
# pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterblock-csi-pv
spec:
  storageClassName: glustervirtblock-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```
```bash
[root@localhost]# kubectl create -f pvc.yaml
persistentvolumeclaim/glusterblock-csi-pv created
```
Verify the PVC status:
```bash
[root@localhost]# kubectl get pvc
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
glusterblock-csi-pv   Bound    pvc-1048edfb-1f06-11e9-8b7a-525400491c42   100Mi      RWO            glustervirtblock-csi   8s
```
Create an application pod that mounts the claim:

```yaml
# app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster-0
  labels:
    app: gluster
spec:
  containers:
    - name: gluster
      image: redis
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: "/mnt/gluster"
          name: glusterblockcsivol
  volumes:
    - name: glusterblockcsivol
      persistentVolumeClaim:
        claimName: glusterblock-csi-pv
```
```bash
[root@localhost]# kubectl create -f app.yaml
pod/gluster-0 created
```
Verify the pod status and the mount:
```bash
[root@localhost]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
gluster-0   1/1     Running   0          38s
[root@localhost]# kubectl exec -it gluster-0 -- mount | grep gluster
/mnt/blockhostvol/block_hosting_volume_ddd7ced7-7766-4797-9214-01fa9587472a/pvc-1048edfb-1f06-11e9-8b7a-525400491c42 on /mnt/gluster type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
```
Deploy the GlusterFS CSI driver:

```bash
[root@localhost]# kubectl create -f csi-deployment.yaml
service/csi-attacher-glusterfsplugin created
statefulset.apps/csi-attacher-glusterfsplugin created
daemonset.apps/csi-nodeplugin-glusterfsplugin created
service/csi-provisioner-glusterfsplugin created
statefulset.apps/csi-provisioner-glusterfsplugin created
serviceaccount/glusterfs-csi created
clusterrole.rbac.authorization.k8s.io/glusterfs-csi created
clusterrolebinding.rbac.authorization.k8s.io/glusterfs-csi-role created
```
Note: Kubernetes v1.13.1 requires enabling the following feature gate:

```
--feature-gates=VolumeSnapshotDataSource=true
```
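On a kubeadm-managed cluster, this flag would typically be added to the kube-apiserver static pod manifest; the path and surrounding structure below are the kubeadm defaults, not something this driver mandates:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt; kubeadm default layout assumed)
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        # ... existing flags ...
        - --feature-gates=VolumeSnapshotDataSource=true
```

The kubelet restarts the apiserver automatically when the static pod manifest changes.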
Define the storage class (marked as the cluster default):

```yaml
# storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: org.gluster.glusterfs
```
```bash
[root@localhost]# kubectl create -f storage-class.yaml
storageclass.storage.k8s.io/glusterfs-csi created
```
The steps are the same as for an RWO volume, except that the PVC access mode is set to ReadWriteMany; see the RWO volume claim section for the detailed procedure.
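As a sketch, an RWX claim against the `glusterfs-csi` storage class could look like the following (the claim name and size here are illustrative, not mandated by the driver):

```yaml
# pvc-rwx.yaml (illustrative example)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-csi-pv
spec:
  storageClassName: glusterfs-csi
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```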
Create a volume snapshot class:

```yaml
# snapshot-class.yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: glusterfs-csi-snap
snapshotter: org.gluster.glusterfs
```
```bash
[root@localhost]# kubectl create -f snapshot-class.yaml
volumesnapshotclass.snapshot.storage.k8s.io/glusterfs-csi-snap created
```
Take a snapshot of a PVC:

```yaml
# volume-snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: glusterfs-csi-ss
spec:
  snapshotClassName: glusterfs-csi-snap
  source:
    name: glusterfs-csi-pv
    kind: PersistentVolumeClaim
```
```bash
[root@localhost]# kubectl create -f volume-snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/glusterfs-csi-ss created
```
Restore the snapshot into a new PVC:

```yaml
# pvc-restore.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-pv-restore
spec:
  storageClassName: glusterfs-csi
  dataSource:
    name: glusterfs-csi-ss
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
```bash
[root@localhost]# kubectl create -f pvc-restore.yaml
persistentvolumeclaim/glusterfs-pv-restore created
```
Create a storage class with Thin Arbiter support:
```yaml
# thin-arbiter-virtblock-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glustervirtblock-csi-thin-arbiter
provisioner: org.gluster.glustervirtblock
parameters:
  replicas: "2"
  arbiterType: "thin"
  arbiterPath: "192.168.122.121:/mnt/arbiter-path:24007"
```
Storage class that uses a loopback device as the brick:
```yaml
# glusterfs-lite-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-lite-csi
provisioner: org.gluster.glusterfs
parameters:
  brickType: "loop"
```
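A claim against this lite class is provisioned on a loopback-device brick. A minimal illustrative PVC might look like this (the claim name and size are assumptions for the example, not values the driver requires):

```yaml
# glusterfs-lite-pvc.yaml (illustrative example)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-lite-pv
spec:
  storageClassName: glusterfs-lite-csi
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```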
