
Using GlusterFS for Persistent Storage in Kubernetes

耶喝运维 2020-05-28

Installing the GlusterFS cluster

Environment

OS: CentOS 7.x

Two GlusterFS nodes: 192.168.22.21 and 192.168.22.22

Install GlusterFS

We install directly on the physical machines with yum. If you would rather run GlusterFS on Kubernetes itself, see: https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md

First install the Gluster repository:

[root@k8s-glusterfs-01 ~]# yum install centos-release-gluster -y

Install the GlusterFS components:

[root@k8s-glusterfs-01 ~]# yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma glusterfs-geo-replication glusterfs-devel

Create the glusterd directory:

[root@k8s-glusterfs-01 ~]# mkdir /opt/glusterd

Point glusterd at the new directory:

[root@k8s-glusterfs-01 ~]# sed -i 's/var\/lib/opt/g' /etc/glusterfs/glusterd.vol

Start glusterd:

[root@k8s-glusterfs-01 ~]# systemctl start glusterd.service

Enable it at boot:

[root@k8s-glusterfs-01 ~]# systemctl enable glusterd.service

Check its status:

[root@k8s-glusterfs-01 ~]# systemctl status glusterd.service

Configure GlusterFS

Add host entries:

[root@k8s-glusterfs-01 ~]# vi /etc/hosts
192.168.22.21 k8s-glusterfs-01
192.168.22.22 k8s-glusterfs-02

Create the storage directory:

[root@k8s-glusterfs-01 ~]# mkdir /opt/gfs_data

Add the other node to the cluster (the machine you run this on does not need to probe itself):

[root@k8s-glusterfs-01 ~]# gluster peer probe k8s-glusterfs-02

Check the cluster status:

[root@k8s-glusterfs-01 ~]# gluster peer status
Number of Peers: 1

Hostname: k8s-glusterfs-02
Uuid: b80f012b-cbb6-469f-b302-0722c058ad45
State: Peer in Cluster (Connected)
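For a more compact view that includes the local node, the pool listing can also be checked (a quick sanity check; the exact columns vary by GlusterFS version):

[root@k8s-glusterfs-01 ~]# gluster pool list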

Configure a volume

GlusterFS volume types:

1) Default mode, DHT, also called a distributed volume: each file is placed on a single server node chosen by hash. Command format: gluster volume create test-volume server1:/exp1 server2:/exp2

2) Replicated mode, AFR: create the volume with replica x; each file is replicated to x nodes. Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2

3) Striped mode: create the volume with stripe x; files are split into chunks stored across x nodes (similar to RAID 0). Command format: gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2

4) Distributed striped mode (combined): requires at least 4 servers; stripe 2 across 4 nodes combines DHT with striping. Command format: gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

5) Distributed replicated mode (combined): requires at least 4 servers; replica 2 across 4 nodes combines DHT with AFR. Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

6) Striped replicated mode (combined): requires at least 4 servers; stripe 2 replica 2 combines striping with AFR. Command format: gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

7) All three modes combined: requires at least 8 servers; with stripe 2 replica 2, every 4 nodes form one group. Command format: gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8

Create the GlusterFS volume

[root@k8s-glusterfs-01 ~]# gluster volume create k8s-volume replica 2 k8s-glusterfs-01:/opt/gfs_data k8s-glusterfs-02:/opt/gfs_data force
volume create: k8s-volume: success: please start the volume to access data

Check the volume:

[root@k8s-glusterfs-01 ~]# gluster volume info

Volume Name: k8s-volume
Type: Replicate
Volume ID: 340d94ee-7c3d-451d-92c9-ad0e19d24b7d
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: k8s-glusterfs-01:/opt/gfs_data
Brick2: k8s-glusterfs-02:/opt/gfs_data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Start the volume:

gluster volume start k8s-volume
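To confirm that both bricks came online after starting the volume, the status can be checked (a routine verification step):

gluster volume status k8s-volume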

GlusterFS tuning

Enable quota on the volume:

gluster volume quota k8s-volume enable

Set a quota limit on the volume:

gluster volume quota k8s-volume limit-usage / 1TB

Set the cache size (default 32 MB):

gluster volume set k8s-volume performance.cache-size 4GB

Set the I/O thread count (too large a value can crash the process):

gluster volume set k8s-volume performance.io-thread-count 16

Set the network ping timeout (default 42 s):

gluster volume set k8s-volume network.ping-timeout 10

Set the write-behind window size (default 1 MB):

gluster volume set k8s-volume performance.write-behind-window-size 1024MB
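To double-check that the tuning options took effect, the current values can be printed (gluster volume get is available on current GlusterFS releases; on very old versions, inspect gluster volume info instead):

gluster volume get k8s-volume performance.cache-size
gluster volume get k8s-volume network.ping-timeout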

Using GlusterFS from clients (static provisioning)

Mount the Gluster volume on a physical machine:

[root@k8s-master-01 ~]# yum install -y glusterfs glusterfs-fuse
[root@k8s-master-01 ~]# mkdir -p /opt/gfsmnt
[root@k8s-master-01 ~]# mount -t glusterfs k8s-glusterfs-01:k8s-volume /opt/gfsmnt/

Check the mount with df:

[root@k8s-master-01 ~]# df -h | grep k8s-volume
k8s-glusterfs-01:k8s-volume 46G 1.6G 44G 4% /opt/gfsmnt
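To make the mount survive a reboot, a matching /etc/fstab entry can be added (a sketch; _netdev delays the mount until the network is up):

k8s-glusterfs-01:k8s-volume /opt/gfsmnt glusterfs defaults,_netdev 0 0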

Configuring Kubernetes to use GlusterFS

The official documentation describes the process: https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/README.md

Note: run the following steps on any Kubernetes master where kubectl is available.

Step 1: create a GlusterFS Endpoints definition in Kubernetes. This is glusterfs-endpoints.json:

{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [{ "ip": "192.168.22.21" }],
      "ports": [{ "port": 20 }]
    },
    {
      "addresses": [{ "ip": "192.168.22.22" }],
      "ports": [{ "port": 20 }]
    }
  ]
}

Note: populate the subsets field with the addresses of the nodes in the GlusterFS cluster. Any valid value (1 to 65535) may be used in the port field.

Create the endpoints:

[root@k8s-master-01 ~]# kubectl create -f glusterfs-endpoints.json

Verify that the endpoints were created:

[root@k8s-master-01 ~]# kubectl get ep | grep glusterfs-cluster
glusterfs-cluster 192.168.22.21:20,192.168.22.22:20

Configure the Service. We also need a Service for these endpoints so that they persist. The Service is created without a selector, which tells Kubernetes that we will add its endpoints manually:

[root@k8s-master-01 ]# cat glusterfs-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      { "port": 20 }
    ]
  }
}

Create the service:

[root@k8s-master-01 ]# kubectl create -f glusterfs-service.json

Check the service:

[root@k8s-master-01 ]# kubectl get service | grep glusterfs-cluster
glusterfs-cluster ClusterIP 10.68.114.26 <none> 20/TCP 6m

Configure the PersistentVolume (PV). Create glusterfs-pv.yaml, specifying the storage capacity and access mode:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s-volume"
    readOnly: false

Then apply it:

[root@k8s-master-01 ~]# kubectl create -f glusterfs-pv.yaml
[root@k8s-master-01 ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS
pv001   10Gi       RWX            Retain           Bound


Configure the PersistentVolumeClaim (PVC). Create glusterfs-pvc.yaml, specifying the requested size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi

Apply it:

[root@k8s-master-01 ~]# kubectl create -f glusterfs-pvc.yaml
[root@k8s-master-01 ~]# kubectl get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc001   Bound    pv001    10Gi       RWX                           1h

Deploy an application that mounts the PVC. As an example, create an nginx Deployment that mounts the PVC at /usr/share/nginx/html inside the container.

nginx_deployment.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-dm
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: storage001
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: storage001
          persistentVolumeClaim:
            claimName: pvc001

Apply it:

[root@k8s-master-01 ~]# kubectl create -f nginx_deployment.yaml

Check whether nginx deployed successfully:

[root@k8s-master-01 ~]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
nginx-dm-5fbdb54795-77f7v   1/1     Running   0          1h
nginx-dm-5fbdb54795-rnqwd   1/1     Running   0          1h

Check the mount:

[root@k8s-master-01 ~]# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- df -h | grep k8s-volume
192.168.22.21:k8s-volume 46G 1.6G 44G 4% /usr/share/nginx/html

Create a file:

[root@k8s-master-01 ~]# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- touch /usr/share/nginx/html/123.txt

Check its attributes:

[root@k8s-master-01 ~]# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- ls -lt /usr/share/nginx/html/123.txt
-rw-r--r-- 1 root root 0 Jul 9 06:25 /usr/share/nginx/html/123.txt

Back on the GlusterFS servers, check the data directory /opt/gfs_data for 123.txt.

On 192.168.22.21:

[root@k8s-glusterfs-01 ~]# ls -lt /opt/gfs_data/
total 0
-rw-r--r-- 2 root root 0 Jul 9 14:25 123.txt

On 192.168.22.22:

[root@k8s-glusterfs-02 ~]# ls -lt /opt/gfs_data/
total 0
-rw-r--r-- 2 root root 0 Jul 9 14:25 123.txt

The file appears on both bricks, so the static setup is complete.

Using GlusterFS from clients (dynamic provisioning)

Deploy Heketi

Heketi provides a RESTful management interface for managing the lifecycle of GlusterFS volumes. With Heketi, consumers such as OpenStack Manila, Kubernetes, and OpenShift can request dynamically provisioned GlusterFS volumes. Heketi picks bricks across the cluster to build the requested volumes, ensuring that data replicas are spread across different failure domains. Heketi also supports any number of GlusterFS clusters, so the cloud servers it serves are not tied to a single GlusterFS cluster.

Heketi project: https://github.com/heketi/heketi

Download the Heketi packages:

https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-client-v5.0.1.linux.amd64.tar.gz
https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-v5.0.1.linux.amd64.tar.gz

Edit /etc/heketi/heketi.json (only the changed settings are shown):

......
# Change the port to avoid conflicts
"port": "18080",
......
# Enable authentication
"use_auth": true,
......
# Set the admin user's key to adminkey
"key": "adminkey"
......
# Switch the executor to ssh and configure the required SSH credentials.
# Heketi must be able to ssh into every machine in the cluster without a
# password; use ssh-copy-id to copy the public key to each GlusterFS server.
"executor": "ssh",
"sshexec": {
  "keyfile": "/etc/heketi/heketi_key",
  "user": "root",
  "port": "22",
  "fstab": "/etc/fstab"
},
......
# Location of the Heketi database file
"db": "/var/lib/heketi/heketi.db"
......
# Adjust the log level
"loglevel": "warning"

Note that Heketi has three executors: mock, ssh, and kubernetes. mock is recommended for test environments and ssh for production; kubernetes is used only when GlusterFS itself runs as containers on Kubernetes. Here GlusterFS and Heketi are deployed independently, so we use ssh.

Configure the SSH key

Since Heketi is configured with the ssh executor, the Heketi server must be able to reach every GlusterFS node over key-based SSH for its management operations, so first generate a key pair:

ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''
chmod 600 /etc/heketi/heketi_key
# Copy the public key to each node; only one node is shown here
ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.75.175
# Verify that key-based SSH to the GlusterFS node works
ssh -i /etc/heketi/heketi_key root@192.168.75.175

Create an init script:

vim /etc/init.d/heketi

#!/bin/bash
#chkconfig:2345 20 90
#description:heketi
#processname:heketi
case $1 in
start)
    nohup /usr/local/bin/heketi --config=/etc/heketi/heketi.json &
    ;;
stop)
    kill `pidof heketi`
    ;;
status)
    ps -ef | grep -v grep | grep heketi
    ;;
restart)
    kill `pidof heketi`
    nohup /usr/local/bin/heketi --config=/etc/heketi/heketi.json &
    ;;
*)
    echo "require start|stop|status|restart"
    ;;
esac
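On CentOS 7, systemd can manage SysV init scripts, but the script must be executable first:

chmod +x /etc/init.d/heketi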

Enable and start it:

systemctl enable heketi
systemctl start heketi
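A quick liveness check against Heketi's hello endpoint (assuming the port configured above):

curl http://localhost:18080/hello
# should return a short greeting from Heketi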

Define the cluster topology. Create the topology file (topology-sample.json); note that the devices must be raw block devices, as Heketi creates the LVM volumes automatically:


{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.75.175"],
              "storage": ["192.168.75.175"]
            },
            "zone": 1
          },
          "devices": ["/dev/vda2"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.75.176"],
              "storage": ["192.168.75.176"]
            },
            "zone": 1
          },
          "devices": ["/dev/vda2"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.75.177"],
              "storage": ["192.168.75.177"]
            },
            "zone": 1
          },
          "devices": ["/dev/vda2"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.75.178"],
              "storage": ["192.168.75.178"]
            },
            "zone": 1
          },
          "devices": ["/dev/vda2"]
        }
      ]
    }
  ]
}

Load the topology (if authentication is enabled, see the credentials note below):

heketi-cli topology load --json topology-sample.json
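Because use_auth is enabled, heketi-cli needs the server URL and the admin credentials; they can be supplied through environment variables (values assumed from the configuration above):

export HEKETI_CLI_SERVER=http://192.168.75.175:18080
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=adminkey
# Confirm the nodes and devices were registered
heketi-cli topology info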

Configure Kubernetes to use GlusterFS

See https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims

Create the StorageClass

Create storageclass-glusterfs.yaml with the following content:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.75.175:18080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "adminkey"
  volumetype: "replicate:2"

Apply it:

kubectl apply -f storageclass-glusterfs.yaml

This writes the user key into the StorageClass in plain text; the official recommendation is to keep the key in a Secret instead. For example:

glusterfs-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: TFRTTkd6TlZJOEpjUndZNg==
type: kubernetes.io/glusterfs

Modify storageclass-glusterfs.yaml as follows:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.1.61.175:18080"
  clusterid: "dae1ab512dfad0001c3911850cecbd61"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  #restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
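Apply the Secret and the updated StorageClass, then confirm the class exists (assuming the file names used above):

kubectl apply -f glusterfs-secret.yaml
kubectl apply -f storageclass-glusterfs.yaml
kubectl get storageclass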

More detailed usage is covered at: https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs

Create the PVC

glusterfs-pvc.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-mysql1
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi

Apply it:

kubectl create -f glusterfs-pvc.yaml
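The provisioner should now create a volume through Heketi and bind it automatically; this can be verified with:

kubectl get pvc glusterfs-mysql1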

Create a Pod that uses the PVC

mysql-deployment.yaml:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mysql
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root123456
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: gluster-mysql-data
              mountPath: "/var/lib/mysql"
      volumes:
        - name: gluster-mysql-data
          persistentVolumeClaim:
            claimName: glusterfs-mysql1

Apply it:

kubectl create -f /etc/kubernetes/mysql-deployment.yaml
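To confirm the Pod is running and that MySQL's data directory sits on GlusterFS (the pod name below is a placeholder; substitute your own):

kubectl get pods -l name=mysql
kubectl exec -it <mysql-pod> -- df -h /var/lib/mysql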

