Installing a GlusterFS Cluster
Environment
OS: CentOS 7.x
Two GlusterFS nodes: 192.168.22.21, 192.168.22.22
Install GlusterFS
We install directly on the physical machines with yum. If you prefer to run GlusterFS on Kubernetes instead, see: https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md
First install the Gluster repository:
[root@k8s-glusterfs-01 ~]# yum install centos-release-gluster -y
Install the GlusterFS components:
[root@k8s-glusterfs-01 ~]# yum install -y glusterfs glusterfs-server \
    glusterfs-fuse glusterfs-rdma glusterfs-geo-replication glusterfs-devel
Create the GlusterFS directory:
[root@k8s-glusterfs-01 ~]# mkdir /opt/glusterd
Change the glusterd working directory:
[root@k8s-glusterfs-01 ~]# sed -i 's/var\/lib/opt/g' /etc/glusterfs/glusterd.vol
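To confirm the change took effect, check the working-directory option; the output below assumes the stock glusterd.vol shipped with the CentOS packages:
[root@k8s-glusterfs-01 ~]# grep working-directory /etc/glusterfs/glusterd.vol
    option working-directory /opt/glusterd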
Start glusterd:
[root@k8s-glusterfs-01 ~]# systemctl start glusterd.service
Enable it at boot:
[root@k8s-glusterfs-01 ~]# systemctl enable glusterd.service
Check the service status:
[root@k8s-glusterfs-01 ~]# systemctl status glusterd.service
Configure GlusterFS
[root@k8s-glusterfs-01 ~]# vi /etc/hosts
192.168.22.21 k8s-glusterfs-01
192.168.22.22 k8s-glusterfs-02
Create the storage directory:
[root@k8s-glusterfs-01 ~]# mkdir /opt/gfs_data
Add the other node to the cluster (the local host running the command does not need to be probed):
[root@k8s-glusterfs-01 ~]# gluster peer probe k8s-glusterfs-02
Check the cluster status:
[root@k8s-glusterfs-01 ~]# gluster peer status
Number of Peers: 1
Hostname: k8s-glusterfs-02
Uuid: b80f012b-cbb6-469f-b302-0722c058ad45
State: Peer in Cluster (Connected)
Configure a volume
GlusterFS volume types
1) Default mode, i.e. DHT, also called a distributed volume: each file is placed by hash on a single server node. Command format: gluster volume create test-volume server1:/exp1 server2:/exp2
2) Replicated mode, i.e. AFR; the volume is created with a replica x count: each file is replicated to x nodes. Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
3) Striped mode; the volume is created with a stripe x count: files are split into chunks stored across x nodes (similar to RAID 0). Command format: gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
4) Distributed-striped mode (composite), requires at least 4 servers. Created with stripe 2 across 4 server nodes; a combination of DHT and Striped. Command format: gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
5) Distributed-replicated mode (composite), requires at least 4 servers. Created with replica 2 across 4 server nodes; a combination of DHT and AFR (see the brick-layout sketch after this list). Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
6) Striped-replicated mode (composite), requires at least 4 servers. Created with stripe 2 replica 2 across 4 nodes; a combination of Striped and AFR. Command format: gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
7) All three modes combined, requires at least 8 servers. With stripe 2 replica 2, every 4 nodes form one group. Command format: gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
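A quick sketch of how bricks map onto these layouts (server1..server4 and /exp1../exp4 are placeholders, not hosts from this guide): with four bricks and replica 2, consecutive bricks are paired into replica sets, which gives a distributed-replicated volume.
gluster volume create dr-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
gluster volume info dr-volume
# "Number of Bricks: 2 x 2 = 4" in the output means 2 distribute subvolumes of 2 replicas each:
# (server1:/exp1, server2:/exp2) and (server3:/exp3, server4:/exp4)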
Create the GlusterFS volume
[root@k8s-glusterfs-01 ~]# gluster volume create k8s-volume replica 2 k8s-glusterfs-01:/opt/gfs_data k8s-glusterfs-02:/opt/gfs_data force
volume create: k8s-volume: success: please start the volume to access data
Check the volume status:
[root@k8s-glusterfs-01 ~]# gluster volume info
Volume Name: k8s-volume
Type: Replicate
Volume ID: 340d94ee-7c3d-451d-92c9-ad0e19d24b7d
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: k8s-glusterfs-01:/opt/gfs_data
Brick2: k8s-glusterfs-02:/opt/gfs_data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Start the volume:
gluster volume start k8s-volume
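After starting, the bricks can be checked with the volume status command (a sketch; the exact output depends on the environment):
[root@k8s-glusterfs-01 ~]# gluster volume status k8s-volume
# Every brick should report "Online: Y" with a TCP port, and the self-heal daemon should be listed on both nodes.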
GlusterFS tuning
Enable quota on the specified volume:
gluster volume quota k8s-volume enable
Set the quota limit for the volume:
gluster volume quota k8s-volume limit-usage / 1TB
Set the cache size (default 32 MB):
gluster volume set k8s-volume performance.cache-size 4GB
Set the number of IO threads (too many can crash the process):
gluster volume set k8s-volume performance.io-thread-count 16
Set the network ping timeout (default 42 s):
gluster volume set k8s-volume network.ping-timeout 10
Set the write-behind window size (default 1 MB):
gluster volume set k8s-volume performance.write-behind-window-size 1024MB
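The options set above can be verified at any time; they appear under "Options Reconfigured" in the volume info, or can be queried individually (a sketch):
gluster volume info k8s-volume
gluster volume get k8s-volume performance.cache-size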
Using GlusterFS on a client (static provisioning)
Using the Gluster volume on a physical machine:
[root@k8s-master-01 ~]# yum install -y glusterfs glusterfs-fuse
[root@k8s-master-01 ~]# mkdir -p /opt/gfsmnt
[root@k8s-master-01 ~]# mount -t glusterfs k8s-glusterfs-01:k8s-volume /opt/gfsmnt/
Check the mount with df:
[root@k8s-master-01 ~]# df -h | grep k8s-volume
k8s-glusterfs-01:k8s-volume   46G  1.6G   44G   4% /opt/gfsmnt
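To make the mount survive reboots, an fstab entry can be added (a sketch; _netdev delays the mount until the network is up):
[root@k8s-master-01 ~]# echo "k8s-glusterfs-01:/k8s-volume /opt/gfsmnt glusterfs defaults,_netdev 0 0" >> /etc/fstab
[root@k8s-master-01 ~]# mount -a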
Configuring Kubernetes to use GlusterFS
The configuration process is described in the official documentation: https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/README.md
Note: the following steps can be run on any Kubernetes master where kubectl is available.
Step 1: create the GlusterFS endpoints definition in Kubernetes. This is the content of glusterfs-endpoints.json:
"{ "kind": "Endpoints", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "subsets": [ { "addresses": [ { "ip": "192.168.22.21" } ], "ports": [ { "port": 20 } ] }, { "addresses": [ { "ip": "192.168.22.22" } ], "ports": [ { "port": 20 } ] } ] }
Note: the subsets field should list the addresses of the GlusterFS cluster nodes. Any valid value (1 to 65535) can be used in the port field.
Create the endpoints:
[root@k8s-master-01 ~]# kubectl create -f glusterfs-endpoints.json
Verify that the endpoints were created successfully:
[root@k8s-master-01 ~]# kubectl get ep | grep glusterfs-cluster
glusterfs-cluster   192.168.22.21:20,192.168.22.22:20
Configure the service. We also need to create a service for these endpoints so that they persist. The service is added without a selector, telling Kubernetes that we want to manage its endpoints manually:
[root@k8s-master-01 ]# cat glusterfs-service.json
{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "spec": { "ports": [ {"port": 20} ] } }
Create the service:
[root@k8s-master-01 ]# kubectl create -f glusterfs-service.json
View the service:
[root@k8s-master-01 ]# kubectl get service | grep glusterfs-cluster
glusterfs-cluster   ClusterIP   10.68.114.26   <none>   20/TCP   6m
Configure a PersistentVolume (PV). Create glusterfs-pv.yaml, specifying the storage capacity and access modes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s-volume"
    readOnly: false
Then run:
[root@k8s-master-01 ~]# kubectl create -f glusterfs-pv.yaml
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 10Gi RWX Retain Bound
Configure a PersistentVolumeClaim (PVC)
Create glusterfs-pvc.yaml, specifying the requested size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Run:
[root@k8s-master-01 ~]# kubectl create -f glusterfs-pvc.yaml
[root@k8s-master-01 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc001 Bound pv001 10Gi RWX 1h
Deploy an application that mounts the PVC. As an example, create an nginx deployment with the PVC mounted at /usr/share/nginx/html inside the container.
nginx_deployment.yaml is as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-dm
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: storage001
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: storage001
          persistentVolumeClaim:
            claimName: pvc001
Run:
[root@k8s-master-01 ~]# kubectl create -f nginx_deployment.yaml
Check whether nginx was deployed successfully:
[root@k8s-master-01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dm-5fbdb54795-77f7v 1/1 Running 0 1h
nginx-dm-5fbdb54795-rnqwd 1/1 Running 0 1h
Check the mount:
[root@k8s-master-01 ~]# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- df -h |grep k8s-volume
192.168.22.21:k8s-volume 46G 1.6G 44G 4% /usr/share/nginx/html
Create a file:
[root@k8s-master-01 ~]# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- touch /usr/share/nginx/html/123.txt
Check the file attributes:
[root@k8s-master-01 ~]# kubectl exec -it nginx-dm-5fbdb54795-77f7v -- ls -lt /usr/share/nginx/html/123.txt
-rw-r--r-- 1 root root 0 Jul 9 06:25 /usr/share/nginx/html/123.txt
Back on the GlusterFS servers, check whether 123.txt appears in the data directory /opt/gfs_data.
On 192.168.22.21:
[root@k8s-glusterfs-01 ~]# ls -lt /opt/gfs_data/
total 0
-rw-r--r-- 2 root root 0 Jul 9 14:25 123.txt
On 192.168.22.22:
[root@k8s-glusterfs-02 ~]# ls -lt /opt/gfs_data/
total 0
-rw-r--r-- 2 root root 0 Jul 9 14:25 123.txt
This completes the static-provisioning setup.
Using GlusterFS on a client (dynamic provisioning)
Deploying Heketi
Overview
Heketi provides a RESTful management interface for managing the lifecycle of GlusterFS volumes. With Heketi, GlusterFS volumes can be provisioned dynamically, in the same way as with OpenStack Manila, Kubernetes, and OpenShift. Heketi dynamically selects bricks across the cluster to build the requested volumes, ensuring that replicas of the data are spread over different failure domains. Heketi also supports any number of GlusterFS clusters, so the servers it serves are not tied to a single GlusterFS cluster.
Heketi project: https://github.com/heketi/heketi
Download the Heketi packages:
https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-client-v5.0.1.linux.amd64.tar.gz
https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-v5.0.1.linux.amd64.tar.gz
After unpacking, edit the Heketi configuration file (/etc/heketi/heketi.json). The fragments below show the changes to make; the "......" markers stand for parts of the file left at their defaults.
......
# Change the port to avoid conflicts
"port": "18080",
......
# Enable authentication
"use_auth": true,
......
# Change the admin user's key to adminkey
"key": "adminkey"
......
# Change the executor to ssh and configure the SSH credentials. Heketi must be able to log in to every GlusterFS node over SSH without a password; use ssh-copy-id to copy the public key to each GlusterFS server. The keyfile path below must match the key generated in the next section.
"executor": "ssh",
"sshexec": {
  "keyfile": "/etc/heketi/heketi_key",
  "user": "root",
  "port": "22",
  "fstab": "/etc/fstab"
},
......
# Location of the Heketi database file
"db": "/var/lib/heketi/heketi.db"
......
# Adjust the log level
"loglevel" : "warning"
Note that Heketi supports three executors: mock, ssh, and kubernetes. mock is recommended for test environments and ssh for production; kubernetes is only used when GlusterFS itself runs as containers on Kubernetes. Here GlusterFS and Heketi are deployed independently, so we use ssh.
Configure SSH keys
Since Heketi was configured with the ssh executor above, the Heketi server must be able to reach every GlusterFS node over SSH using key authentication in order to manage them, so first generate an SSH key pair:
ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''
chmod 600 /etc/heketi/heketi_key.pub
# Copy the public key to each GlusterFS node; only one node is shown here as an example
ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.75.175
# Verify that the GlusterFS node can be reached over SSH using the key
ssh -i /etc/heketi/heketi_key root@192.168.75.175
Create the startup script:
vim /etc/init.d/heketi
#!/bin/bash
#chkconfig:2345 20 90
#description:heketi
#processname:heketi
case $1 in
start)
nohup /usr/local/bin/heketi --config=/etc/heketi/heketi.json &
;;
stop)
kill `pidof heketi`
;;
status)
ps -ef | grep -v grep | grep heketi
;;
restart)
kill `pidof heketi`
nohup /usr/local/bin/heketi --config=/etc/heketi/heketi.json &
;;
*)
echo "require start|stop|status|restart"
;;
esac
Make the script executable, then enable and start the service (the chmod and daemon-reload are needed so systemd picks up the SysV script):
chmod +x /etc/init.d/heketi
systemctl daemon-reload
systemctl enable heketi
systemctl start heketi
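Heketi exposes a simple health endpoint, so the service can be checked right away (the port matches the value set in heketi.json above):
curl http://localhost:18080/hello
# should return a short greeting such as "Hello from heketi"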
Create the topology file topology-sample.json. Note that the devices must be raw disks (no filesystem on them); Heketi creates the LVM volumes automatically:
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.75.175"],
              "storage": ["192.168.75.175"]
            },
            "zone": 1
          },
          "devices": ["/dev/vda2"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.75.176"],
              "storage": ["192.168.75.176"]
            },
            "zone": 1
          },
          "devices": ["/dev/vda2"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.75.177"],
              "storage": ["192.168.75.177"]
            },
            "zone": 1
          },
          "devices": ["/dev/vda2"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.75.178"],
              "storage": ["192.168.75.178"]
            },
            "zone": 1
          },
          "devices": ["/dev/vda2"]
        }
      ]
    }
  ]
}
Load the topology (authentication is enabled and the port was changed, so pass the server address and admin credentials):
heketi-cli --server http://192.168.75.175:18080 --user admin --secret adminkey topology load --json=topology-sample.json
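Once the topology is loaded, the cluster can be inspected, and provisioning tested, through heketi-cli (a sketch; the --server, --user, and --secret values match the configuration above):
heketi-cli --server http://192.168.75.175:18080 --user admin --secret adminkey cluster list
heketi-cli --server http://192.168.75.175:18080 --user admin --secret adminkey topology info
# Optionally create a small test volume to confirm provisioning works end to end:
heketi-cli --server http://192.168.75.175:18080 --user admin --secret adminkey volume create --size=1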
Configure Kubernetes to use GlusterFS
See https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
Create a StorageClass
Add a storageclass-glusterfs.yaml file with the following content:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.75.175:18080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "adminkey"
  volumetype: "replicate:2"
kubectl apply -f storageclass-glusterfs.yaml
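Verify the StorageClass exists:
kubectl get storageclass glusterfs
# should list the glusterfs class with provisioner kubernetes.io/glusterfs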
The above writes the user key in plain text in the StorageClass. The official recommendation is to store the key in a Secret instead. An example follows.
glusterfs-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: TFRTTkd6TlZJOEpjUndZNg==
type: kubernetes.io/glusterfs
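The key field must contain the base64 encoding of the Heketi admin key. With the admin key set to adminkey earlier in this guide, the value can be generated as follows (the value shown in the sample above is only a placeholder); then create the secret:
echo -n "adminkey" | base64
# YWRtaW5rZXk=
kubectl create -f glusterfs-secret.yaml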
Modify storageclass-glusterfs.yaml as follows:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.75.175:18080"
  clusterid: "dae1ab512dfad0001c3911850cecbd61"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  #restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
For more detailed usage, see https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs
Create a PVC
glusterfs-pvc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-mysql1
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
kubectl create -f glusterfs-pvc.yaml
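If dynamic provisioning succeeds, the claim is bound to an automatically created PV after a short while (a sketch; the PV name is generated by the provisioner):
kubectl get pvc glusterfs-mysql1
# STATUS should change to Bound, with a VOLUME name of the form pvc-<uuid>
kubectl get pv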
Create a pod that uses the PVC
mysql-deployment.yaml:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mysql
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root123456
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: gluster-mysql-data
              mountPath: "/var/lib/mysql"
      volumes:
        - name: gluster-mysql-data
          persistentVolumeClaim:
            claimName: glusterfs-mysql1
kubectl create -f /etc/kubernetes/mysql-deployment.yaml
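To confirm the pod is running and that MySQL's data directory is backed by GlusterFS (the pod name below is a placeholder for the generated name):
kubectl get pods -l name=mysql
kubectl exec -it <mysql-pod-name> -- df -h /var/lib/mysql
# the filesystem should be a GlusterFS mount of the volume Heketi created for this claim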