The motivation for this series is to organize and publish my old Youdao notes on k8s, and, in an era when technology explodes this fast, to keep reminding myself to refresh my own skills. Trust me: follow this series through and you will get something out of it.
Prepare three machines running CentOS 7. The official requirement is at least 1500 MB of RAM for a master node and 1024 MB for a worker node; every machine in this article is 4C4G. A quick preparation sketch follows the table below.
| Hostname | Role | IP |
| --- | --- | --- |
| node1 | k8s-master01 / Ansible | 192.168.0.114 |
| node2 | k8s-master02 | 192.168.0.115 |
| node3 | k8s-node01 | 192.168.0.116 |
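Kubespray drives the whole installation over SSH from the Ansible node, so node1 needs passwordless root SSH to all three machines, and swap must be off on every host. A minimal prep sketch using the hostnames and IPs from the table (disabling firewalld is a lab-environment shortcut; tighten this for production):

```
# On every host: set the hostname, turn off swap, stop the firewall
hostnamectl set-hostname node1   # node2 / node3 on the other machines
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
systemctl disable --now firewalld

# On node1 only: create a key pair and push it to all three hosts
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in 192.168.0.114 192.168.0.115 192.168.0.116; do
  ssh-copy-id root@$ip
done
```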
I usually deploy Kubernetes clusters with kubeadm; this article uses the open-source tool kubespray (release 2.14.0), which drives kubeadm under the hood.
GitHub release: https://github.com/kubernetes-sigs/kubespray/releases/tag/v2.14.0
Main component versions:

| Component | Version |
| --- | --- |
| kubernetes | v1.18.8 |
| nginx | 1.19 |
| coredns | 1.6.7 |
| etcd | v3.4.3 |
| calicoctl | v3.15.2 |
| calico/node | v3.15.2 |
| calico/kube-controllers | v3.15.2 |
| k8s-dns-node-cache | 1.15.13 |
First, log in to node1, download kubespray, and set the relevant deployment parameters:
```
[root@node1 ~]# cd /opt
[root@node1 opt]# wget -c https://github.com/kubernetes-sigs/kubespray/archive/v2.14.0.tar.gz
[root@node1 opt]# tar zxf v2.14.0.tar.gz
[root@node1 opt]# cd kubespray-2.14.0/
# Install the dependencies from requirements.txt
[root@node1 kubespray-2.14.0]# pip3 install -r requirements.txt
# Copy inventory/sample as inventory/mycluster
[root@node1 kubespray-2.14.0]# cp -rfp inventory/sample inventory/mycluster
# Generate the Ansible inventory with the inventory builder, then adjust the host
# roles in hosts.yaml: two masters and three etcd members, matching the table
# above (a sketch of the resulting file follows this block)
[root@node1 kubespray-2.14.0]# declare -a IPS=(192.168.0.114 192.168.0.115 192.168.0.116)
[root@node1 kubespray-2.14.0]# CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Review and adjust the parameters under inventory/mycluster/group_vars
[root@node1 kubespray-2.14.0]# cat inventory/mycluster/group_vars/all/all.yml
[root@node1 kubespray-2.14.0]# cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# Deploy the cluster
[root@node1 kubespray-2.14.0]# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```

Deployment time depends on how many servers are in the cluster; mine took about 12 minutes.
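For reference, the hosts.yaml produced by the inventory builder step above could look roughly like this for our topology. The group names are the ones kubespray 2.14 uses, but treat this as a sketch and follow your own generated file:

```
[root@node1 kubespray-2.14.0]# cat inventory/mycluster/hosts.yaml
all:
  hosts:
    node1:
      ansible_host: 192.168.0.114
      ip: 192.168.0.114
      access_ip: 192.168.0.114
    node2:
      ansible_host: 192.168.0.115
      ip: 192.168.0.115
      access_ip: 192.168.0.115
    node3:
      ansible_host: 192.168.0.116
      ip: 192.168.0.116
      access_ip: 192.168.0.116
  children:
    kube-master:        # the two masters from the table above
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:               # three-member etcd quorum
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
```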
Once the playbook has finished, verify the cluster from node1:

```
[root@node1 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.0.114:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@node1 ~]# kubectl get po -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7fbcd4c569-9bfwx      1/1     Running   0          1h
calico-node-2cmrq                             1/1     Running   0          1h
calico-node-fh6wx                             1/1     Running   0          1h
calico-node-mlvth                             1/1     Running   0          1h
coredns-66655b745d-ldr4r                      1/1     Running   0          1h
coredns-66655b745d-t9z98                      1/1     Running   0          1h
dns-autoscaler-647cfd7f5b-sfg4w               1/1     Running   0          1h
kube-apiserver-node1                          1/1     Running   0          1h
kube-apiserver-node2                          1/1     Running   0          1h
kube-controller-manager-node1                 1/1     Running   0          1h
kube-controller-manager-node2                 1/1     Running   0          1h
kube-proxy-98q69                              1/1     Running   0          1h
kube-proxy-bb97m                              1/1     Running   0          1h
kube-proxy-pgqld                              1/1     Running   0          1h
kube-scheduler-node1                          1/1     Running   0          1h
kube-scheduler-node2                          1/1     Running   0          1h
kubernetes-dashboard-5c8754b9f6-mv8km         1/1     Running   0          1h
kubernetes-metrics-scraper-68464b88b5-5hrc5   1/1     Running   0          1h
metrics-server-77c94d4964-ljpz4               1/1     Running   0          1h
nginx-proxy-node3                             1/1     Running   0          1h
nodelocaldns-75nkt                            1/1     Running   0          1h
nodelocaldns-8gbv8                            1/1     Running   0          1h
nodelocaldns-bpp2d                            1/1     Running   0          1h
```
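One pod worth calling out: `nginx-proxy-node3` is kubespray's local load balancer for the apiserver. On each non-master node it runs as a static nginx pod that listens on localhost and forwards to all masters, so node3's kubelet keeps working even if one master goes down. A quick probe through it from node3 (assuming kubespray's default listen address of 127.0.0.1:6443; it should print `ok`):

```
[root@node3 ~]# curl -ks https://127.0.0.1:6443/healthz
```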
Check the nodes:
```
[root@node1 ~]# kubectl get node -o wide
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
node1   Ready    master   1h    v1.18.8   192.168.0.114   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.13
node2   Ready    master   1h    v1.18.8   192.168.0.115   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.13
node3   Ready    <none>   1h    v1.18.8   192.168.0.116   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.13
```
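node3 shows `<none>` under ROLES because only masters get a role label by default; workers carry none. This is purely cosmetic, but if you want the column filled in you can label the node yourself:

```
[root@node1 ~]# kubectl label node node3 node-role.kubernetes.io/worker=
```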
Check the component status:
```
[root@node1 ~]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-1               Healthy     {"health":"true"}
etcd-0               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}
```
Both Unhealthy entries come from the `--port=0` flag, which disables the insecure health-check ports that `kubectl get cs` probes (10252 for controller-manager, 10251 for scheduler). The fix is to edit kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests on each master, delete the `- --port=0` line, and restart kubelet, as sketched below.
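A minimal sketch of that fix (run it on both masters; the kubelet re-creates a static pod whenever its manifest changes, so the restart mainly speeds things up):

```
[root@node1 ~]# cd /etc/kubernetes/manifests
[root@node1 manifests]# sed -i '/- --port=0/d' kube-controller-manager.yaml kube-scheduler.yaml
[root@node1 manifests]# systemctl restart kubelet
# repeat on node2, then re-check:
[root@node1 manifests]# kubectl get cs
```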




