
Kubernetes 1.15 quick deployment walkthrough

孤岛鱼夫 2019-11-22

Installing Kubernetes 1.15 with kubeadm

Host layout

  • 192.168.235.129: k8s-master

  • 192.168.235.130: k8s-node-1

  • 192.168.235.131: k8s-node-2
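So the nodes can also reach each other by hostname, the layout above can be mirrored into /etc/hosts on every node. A hedged sketch, not in the original post; adjust the IPs and hostnames to your own environment:

```shell
# Append the node hostnames from the layout above to /etc/hosts
# (illustrative entries -- edit to match your environment).
cat >> /etc/hosts << EOF
192.168.235.129 k8s-master
192.168.235.130 k8s-node-1
192.168.235.131 k8s-node-2
EOF
```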

Per-node host setup

Run the following on every node.

  • Disable the firewall

    systemctl stop firewalld
    systemctl disable firewalld

  • Disable SELinux

    sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent, takes effect after reboot
    setenforce 0                                         # immediate

  • Disable swap

    swapoff -a          # temporary
    vim /etc/fstab      # permanent: remove (or comment out) the swap entry from the mount list
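As a sketch of what that fstab edit does, the swap entry can also be commented out non-interactively with sed. Shown here on a sample line rather than the real /etc/fstab; run the same expression with `sed -ri` against /etc/fstab to make the change permanent:

```shell
# A sample fstab swap entry (illustrative only):
line='/dev/mapper/centos-swap swap swap defaults 0 0'
# Comment out any uncommented line that mentions swap.
echo "$line" | sed -r 's/^([^#].*\bswap\b.*)$/#\1/'
# -> #/dev/mapper/centos-swap swap swap defaults 0 0
```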

  • Pass bridged IPv4 traffic to iptables chains

    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system    # apply the settings
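One caveat worth noting (an extra step, not in the original post): the `net.bridge.*` sysctls only exist while the br_netfilter kernel module is loaded, so if `sysctl --system` reports that the keys are missing, load the module first:

```shell
# Load the bridge netfilter module so the net.bridge.* sysctls exist,
# then re-apply the settings (requires root).
modprobe br_netfilter
sysctl --system
```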

  • Install Docker on all nodes

    systemctl enable docker && systemctl start docker
    docker --version
    Docker version 19.03.4
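The post assumes Docker is already installed before enabling it. On CentOS 7 it is typically installed from a docker-ce yum repository; a hedged sketch using the Alibaba Cloud mirror (the mirror URL is an assumption, substitute whichever repo you use):

```shell
# Add a docker-ce repo (Alibaba Cloud mirror assumed) and install Docker.
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker && systemctl start docker
```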

Install kubeadm/kubelet/kubectl on all nodes

  • Add Alibaba Cloud's Kubernetes YUM repository

    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
           https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF

Install kubeadm, kubelet, and kubectl

Since releases come out frequently, pin the version explicitly:

    yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
    systemctl enable kubelet

Deploy the Kubernetes master

Run on the master node (192.168.235.129):

    kubeadm init \
      --apiserver-advertise-address=192.168.235.129 \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.15.0 \
      --service-cidr=10.1.0.0/16 \
      --pod-network-cidr=10.244.0.0/16

Initialization takes a while, since pulling the images takes time.

  • --apiserver-advertise-address: the API server address advertised to the cluster

  • --image-repository: the registry for the cluster's images; by default they are pulled from Google's registry, which is blocked in China, so point this at the Alibaba Cloud mirror

  • --kubernetes-version: the Kubernetes version to deploy

  • --service-cidr=10.1.0.0/16: the Service address range

  • --pod-network-cidr: the Pod IP range

After initialization completes, run the commands it prints:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

The output also contains the generated command for joining the cluster; running it on the other nodes adds them to the cluster:

    kubeadm join 192.168.235.129:6443 --token ua1abl.k7qs6jsj96grfbto \
        --discovery-token-ca-cert-hash sha256:6f8cfd90f464df748d5db56e948860a61b0e727aea1893c72125fea171f84d59

At this point the master node is not fully up yet: it still needs a cluster network add-on.

Deploy the flannel network add-on

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f kube-flannel.yml

Make sure you can reach the quay.io registry, which is often blocked; if the manifest or images cannot be downloaded directly, fetch them locally first.
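When quay.io is unreachable, one hedged workaround is to pull the flannel image through a mirror you can reach, retag it to the name the manifest expects, and then deploy. The mirror path below is a placeholder, not a real registry; substitute one available to you:

```shell
# Pull the flannel image via a reachable mirror (placeholder registry name),
# retag it to the name referenced in kube-flannel.yml, then deploy.
docker pull SOME-REACHABLE-MIRROR/coreos/flannel:v0.11.0-amd64
docker tag  SOME-REACHABLE-MIRROR/coreos/flannel:v0.11.0-amd64 \
            quay.io/coreos/flannel:v0.11.0-amd64
kubectl apply -f kube-flannel.yml
```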

After the network add-on is deployed, check the status of the core components on the master:

    [root@master ~]# kubectl get pods -n kube-system
    NAME                             READY   STATUS    RESTARTS   AGE
    coredns-bccdc95cf-p9frm          1/1     Running   0          114m
    coredns-bccdc95cf-xlhkd          1/1     Running   0          114m
    etcd-master                      1/1     Running   1          113m
    kube-apiserver-master            1/1     Running   1          113m
    kube-controller-manager-master   1/1     Running   1          113m
    kube-flannel-ds-amd64-crrcq      1/1     Running   0          75s
    kube-proxy-2cngx                 1/1     Running   1          114m
    kube-scheduler-master            1/1     Running   1          113m
    [root@master ~]# kubectl get nodes
    NAME     STATUS   ROLES    AGE    VERSION
    master   Ready    master   114m   v1.15.0

Join the worker nodes

The flannel add-on runs on every node in the cluster, so every node needs its image; `kubeadm join` will also try to pull it. If a node cannot download it, pull the image onto each node in advance: docker pull quay.io/coreos/flannel:v0.11.0-amd64

Join each worker node using the command generated by the master during initialization.

Run on each node:

    kubeadm join 192.168.235.129:6443 --token ua1abl.k7qs6jsj96grfbto \
        --discovery-token-ca-cert-hash sha256:6f8cfd90f464df748d5db56e948860a61b0e727aea1893c72125fea171f84d59

When the output ends with "Run 'kubectl get nodes' on the control-plane to see this node join the cluster.", the node has joined.

After a node joins, it still has to pull the network add-on images, so allow some more time; how long depends on your network speed.
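One practical note: the bootstrap token shown above expires (by default after 24 hours). If you add a node later and no longer have the original join command, a fresh one can be printed on the master:

```shell
# Generate a new bootstrap token and print the full kubeadm join command,
# including the discovery token CA cert hash.
kubeadm token create --print-join-command
```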

  • Check the cluster status from the master

    [root@master ~]# kubectl get pods -n kube-system
    NAME                             READY   STATUS    RESTARTS   AGE
    coredns-bccdc95cf-p9frm          1/1     Running   0          147m
    coredns-bccdc95cf-xlhkd          1/1     Running   0          147m
    etcd-master                      1/1     Running   1          146m
    kube-apiserver-master            1/1     Running   1          146m
    kube-controller-manager-master   1/1     Running   1          146m
    kube-flannel-ds-amd64-bj8gn      1/1     Running   0          10m
    kube-flannel-ds-amd64-crrcq      1/1     Running   0          34m
    kube-flannel-ds-amd64-qq2fv      1/1     Running   0          9m59s
    kube-proxy-2cngx                 1/1     Running   1          147m
    kube-proxy-dncr6                 1/1     Running   0          9m59s
    kube-proxy-l2vrt                 1/1     Running   0          10m
    kube-scheduler-master            1/1     Running   1          146m

  • List the cluster nodes

    [root@master ~]# kubectl get nodes
    NAME         STATUS   ROLES    AGE    VERSION
    k8s-node-1   Ready    <none>   14m    v1.15.0
    k8s-node-2   Ready    <none>   13m    v1.15.0
    master       Ready    master   151m   v1.15.0

That completes the k8s cluster deployment.

Test the cluster

Create a pod in the cluster and check that everything works:

    [root@master ~]# kubectl create deployment nginx --image=nginx
    deployment.apps/nginx created
    [root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
    service/nginx exposed

Check the pod and service:

    [root@master ~]# kubectl get pod,svc
    NAME                         READY   STATUS    RESTARTS   AGE
    pod/nginx-554b9c67f9-wvnpr   1/1     Running   0          97s

    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
    service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        159m
    service/nginx        NodePort    10.1.9.142   <none>        80:32632/TCP   48s

The random NodePort the service exposes is 32632, so the page is reachable at http://192.168.235.129:32632/.
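Since the NodePort differs on every run, it is better not to hard-code 32632; the port can be parsed out of the PORT(S) column. A small sketch, using the sample output line above (in a live cluster you would pipe `kubectl get svc nginx --no-headers` instead of the hard-coded string):

```shell
# Sample `kubectl get svc` line, taken from the output above:
svc_line='service/nginx   NodePort    10.1.9.142   <none>        80:32632/TCP   48s'
# PORT(S) is field 5 in the form <port>:<nodePort>/<proto>; extract the nodePort.
node_port=$(echo "$svc_line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "http://192.168.235.129:${node_port}/"
# -> http://192.168.235.129:32632/
```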

Create pod resources with a Deployment

Create the pods via a Deployment in a dedicated Namespace and expose them with a Service; the YAML file is as follows:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: nginx
      labels:
        app: nginx
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-server
      namespace: nginx
      labels:
        deploy: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          service: nginx
      template:
        metadata:
          name: nginx-service
          labels:
            service: nginx
        spec:
          containers:
          - name: nginx-service
            image: nginx:1.15
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 80
              protocol: TCP
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
      namespace: nginx
      labels:
        svc: nginx
    spec:
      selector:
        service: nginx
      ports:
      - port: 80
        targetPort: 80
        nodePort: 30001
      type: NodePort
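Assuming the manifest above is saved as nginx.yaml (the filename is an assumption), it can be applied and verified like this:

```shell
# Create the namespace, deployment, and service from the manifest,
# then list the resulting objects in the nginx namespace.
kubectl apply -f nginx.yaml
kubectl get pods,svc -n nginx
```

Because the Service pins nodePort: 30001, the application should then be reachable on port 30001 of any node, e.g. http://192.168.235.129:30001/.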


Article reposted from 孤岛鱼夫.
