A CRD (Custom Resource Definition) is a cluster-scoped resource.
Once a CRD is created, it is registered with the API server and a GVR (group/version/resource) URL endpoint is generated, so the new type can be used like any built-in resource and instantiated into objects.
The CRD defined in the example below uses the group name auth.ilinux.io.
```
[root@k8s-master-01 crd]# cat custom-resource.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: users.auth.ilinux.io
spec:
  group: auth.ilinux.io
  version: v1beta1
  names:
    kind: User
    plural: users
    singular: user
    shortNames:
    - u
  scope: Namespaced
  # Schema validation for the custom resource
  validation:
    openAPIV3Schema:
      properties:
        userID:
          type: integer
          minimum: 1
          maximum: 65535
        groups:
          type: array
        email:
          type: string
        password:
          type: string
          format: password
      required: ["userID", "groups"]
  # Additional columns shown when the resource is listed
  additionalPrinterColumns:
  - name: userID
    type: integer
    description: The user ID.
    JSONPath: .spec.userID
  - name: groups
    type: string
    description: The groups of the user.
    JSONPath: .spec.groups
  - name: email
    type: string
    description: The email address of the user.
    JSONPath: .spec.email
  - name: password
    type: string
    description: The password of the user account.
    JSONPath: .spec.password
  # Subresource used to expose object status; requires a matching controller
  subresources:
    status: {}
  # Multi-version support
  versions:
  - name: v1beta1
    served: true
    storage: true
  - name: v1beta2
    served: true
    storage: false

[root@k8s-master-01 crd]# kubectl get crd
NAME                   CREATED AT
users.auth.ilinux.io   2020-11-18T11:03:43Z
```
```
[root@k8s-master-01 crd]# cat custom-resource-users.yaml
apiVersion: auth.ilinux.io/v1beta1
kind: User
metadata:
  name: admin
  namespace: default
spec:
  userID: 10
  email: k8s@ilinux.io
  groups:
  - superusers
  - administrator
  password: ikubernets

[root@k8s-master-01 crd]# kubectl describe user admin
Name:         admin
Namespace:    default
Labels:       <none>
Annotations:
API Version:  auth.ilinux.io/v1beta2
Kind:         User
Metadata:
  Creation Timestamp:  2020-11-18T11:22:06Z
  Generation:          1
  Managed Fields:
    API Version:  auth.ilinux.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:email:
        f:groups:
        f:password:
        f:userID:
    Manager:         kubectl
    Operation:       Update
    Time:            2020-11-18T11:22:06Z
  Resource Version:  23634141
  Self Link:         /apis/auth.ilinux.io/v1beta2/namespaces/default/users/admin
  UID:               6cde6d30-6f4e-4a74-8108-5f43e31cf8b5
Spec:
  Email:  k8s@ilinux.io
  Groups:
    superusers
    administrator
  Password:  ikubernets
  User ID:   10
Events:      <none>
```
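Since the CRD registers a GVR URL endpoint on the API server, the new objects can also be reached through the raw REST path. A minimal check, assuming the CRD and the admin User above have been created (these commands are illustrative, not part of the original transcript):

```bash
# List the custom objects through the registered short name "u"
kubectl get u -n default

# Hit the GVR endpoint directly: /apis/<group>/<version>/namespaces/<namespace>/<plural>/<name>
kubectl get --raw /apis/auth.ilinux.io/v1beta1/namespaces/default/users/admin
```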

A Kubernetes cluster should run an odd number of master nodes, so that after a failure more than half of the members remain available to form a quorum.
The components of a highly available Kubernetes cluster achieve HA as follows:
etcd itself provides a distributed storage cluster, giving Kubernetes a reliable storage layer;
the stateless apiserver runs as multiple replicas, with a load balancer in front of them distributing requests;
the multi-replica controller-manager elects a leader through its built-in leader election; when the leader fails, the remaining replicas automatically start a new round of election;
the multi-replica scheduler elects a leader through its built-in leader election; when the leader fails, the remaining replicas automatically start a new round of election.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
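Once such a cluster is running (see the deployment walkthrough below), the etcd layer and the leader-elected components can be verified directly. A hedged sketch, assuming a kubeadm stacked-etcd setup where the etcd static pod is named etcd-k8s-master-01 as in this cluster:

```bash
# Check etcd membership from inside the static etcd pod (certs are mounted at the kubeadm default paths)
kubectl -n kube-system exec etcd-k8s-master-01 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list -w table

# See which replica currently holds the controller-manager leader lock
kubectl -n kube-system describe endpoints kube-controller-manager | grep holderIdentity
```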
2.1 Lab Environment Plan
| Hostname | IP | Packages |
| --- | --- | --- |
| k8s-balance-master/calico-worker01 | 172.17.61.32 | Nginx, Keepalived |
| k8s-balance-backup/calico-worker02 | 172.17.61.33 | Nginx, Keepalived |
| k8s-master-01 | 172.17.61.220 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, Flannel |
| k8s-master-02 | 172.17.61.221 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, Flannel |
| k8s-worker-01 | 172.17.61.222 | kubelet, kube-proxy, Docker, etcd, Flannel |
| k8s-worker-02 | 172.17.61.223 | kubelet, kube-proxy, Docker, etcd, Flannel |
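Hostname resolution between the nodes is assumed throughout (the kubeadm join output later shows the public DNS failing to resolve the node names). If no internal DNS is available, an /etc/hosts sketch matching the plan above can be distributed to every node; the VIP alias name below is made up for illustration:

```bash
cat >> /etc/hosts <<'EOF'
172.17.61.29   k8s-apiserver-vip      # keepalived VIP (hypothetical alias)
172.17.61.32   calico-worker01
172.17.61.33   calico-worker02
172.17.61.220  k8s-master-01
172.17.61.221  k8s-master-02
172.17.61.222  k8s-worker-01
172.17.61.223  k8s-worker-02
EOF
```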
```
[root@calico-worker01 ~]# yum install -y epel-release nginx keepalived
[root@calico-worker02 ~]# yum install -y epel-release nginx keepalived
```
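Note that the stream {} block used below relies on nginx's stream module. With the EPEL build of nginx it is usually shipped as a separate package; this is a hedged note to verify on your own system rather than a step from the original:

```bash
yum install -y nginx-mod-stream          # assumed EPEL package name for the stream module
nginx -V 2>&1 | grep -o with-stream      # confirm stream support is compiled in or loadable
```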
```
[root@calico-worker01 ~]# vim /etc/nginx/nginx.conf
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream kubernetes-apiserver {
        server 172.17.61.220:6443;
        server 172.17.61.221:6443;
    }
    server {
        listen 6443;
        proxy_pass kubernetes-apiserver;
    }
}
[root@calico-worker01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
```
[root@calico-worker02 ~]# vim /etc/nginx/nginx.conf
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream kubernetes-apiserver {
        server 172.17.61.220:6443;
        server 172.17.61.221:6443;
    }
    server {
        listen 6443;
        proxy_pass kubernetes-apiserver;
    }
}
[root@calico-worker02 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
```
[root@calico-worker01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
   vrrp_mcast_group4 224.1.101.33
}
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
    weight -30
    interval 1
    fall 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.17.61.29/24
    }
    track_script {
        check_nginx
    }
}
```
```
[root@calico-worker02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
   vrrp_mcast_group4 224.1.101.33
}
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
    weight -30
    interval 1
    fall 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.17.61.29/24
    }
    track_script {
        check_nginx
    }
}
```
```
[root@calico-worker01 ~]# systemctl start nginx keepalived
[root@calico-worker02 ~]# systemctl start nginx keepalived
[root@calico-worker01 ~]# cat /etc/nginx/check_nginx.sh
#!/bin/bash
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
[root@calico-worker02 ~]# cat /etc/nginx/check_nginx.sh
#!/bin/bash
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
[root@calico-worker01 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@calico-worker02 ~]# chmod +x /etc/nginx/check_nginx.sh
```
```
[root@calico-worker01 ~]# ip a
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:e0:3f:83 brd ff:ff:ff:ff:ff:ff
    inet 172.17.61.32/24 brd 172.17.61.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 172.17.61.29/24 scope global secondary eth1
       valid_lft forever preferred_lft forever
[root@calico-worker01 ~]# for process in `ps -ef | grep nginx | awk -F' ' '{print $2}'`; do kill -9 $process; done
[root@calico-worker01 ~]# systemctl status keepalived
Nov 19 12:56:58 calico-worker01 Keepalived_vrrp[19742]: VRRP_Instance(VI_1) Received advert with higher priority 90, ours 70
Nov 19 12:56:58 calico-worker01 Keepalived_vrrp[19742]: VRRP_Instance(VI_1) Entering BACKUP STATE
Nov 19 12:56:58 calico-worker01 Keepalived_vrrp[19742]: VRRP_Instance(VI_1) removing protocol VIPs.
```
```
[root@k8s-master-01 ~]# cat k8s-image-pull.sh
#!/bin/bash
# Script For Quick Pull K8S Docker Images
KUBE_VERSION=v1.19.4
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.7.0
ETCD_VERSION=3.4.13-0

# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION

# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

# retag to the k8s.gcr.io prefix expected by kubeadm
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION

# untag the original names; the images themselves won't be deleted
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
```
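As a cross-check on the versions hard-coded in the script, kubeadm can print the exact list of images it expects for a given release; for example (not part of the original script):

```bash
kubeadm config images list --kubernetes-version v1.19.4
```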
```
[root@k8s-master-01 ~]# docker image ls
REPOSITORY                           TAG        IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy                v1.19.4    635b36f4d89f   8 days ago     118MB
k8s.gcr.io/kube-controller-manager   v1.19.4    4830ab618586   8 days ago     111MB
k8s.gcr.io/kube-apiserver            v1.19.4    b15c6247777d   8 days ago     119MB
k8s.gcr.io/kube-scheduler            v1.19.4    14cd22f7abe7   8 days ago     45.7MB
quay.io/coreos/flannel               v0.13.0    e708f4bb69e3   5 weeks ago    57.2MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   2 months ago   253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   5 months ago   45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   9 months ago   683kB
```
```
[root@k8s-master-01 ~]# sh k8s-image-pull.sh
[root@k8s-master-01 ~]# yum install -y kubelet kubeadm kubectl docker-ce
[root@k8s-master-01 ~]# systemctl enable kubelet
[root@k8s-master-01 ~]# systemctl start docker && systemctl enable docker
[root@k8s-master-01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
```
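The echo into /proc only lasts until the next reboot. One way to make the bridge settings persistent is a sysctl drop-in; a sketch (the file name is arbitrary):

```bash
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system    # reload all sysctl configuration files
```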
```
[root@k8s-master-01 ~]# kubeadm init --control-plane-endpoint "172.17.61.29:6443" --upload-certs --kubernetes-version=v1.19.4 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=0.0.0.0 --ignore-preflight-errors=all
W1120 16:22:19.818127   24397 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.120.220 172.17.61.29]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [10.1.120.220 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [10.1.120.220 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.031798 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
c72de7b1a6940f34ec65608b4ea8d9324dc7a2567bdb671517fd153264b4b16c
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: v56gku.iwaurdlu8uzzsolq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.17.61.29:6443 --token v56gku.iwaurdlu8uzzsolq \
    --discovery-token-ca-cert-hash sha256:9995b90f733683322c6c81cb33cefd7e8d2f31c5593158a56f3951e3ccd36582 \
    --control-plane --certificate-key c72de7b1a6940f34ec65608b4ea8d9324dc7a2567bdb671517fd153264b4b16c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.61.29:6443 --token v56gku.iwaurdlu8uzzsolq \
    --discovery-token-ca-cert-hash sha256:9995b90f733683322c6c81cb33cefd7e8d2f31c5593158a56f3951e3ccd36582
```
```
[root@k8s-master-01 ~]# mkdir -p $HOME/.kube
[root@k8s-master-01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
```
[root@k8s-master-01 ~]# kubeadm init phase upload-certs --upload-certs
W1120 16:32:27.805797   30237 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a26b32007c89b810d964adbfb6cfab61a886f42f7733cd78d63430b1555e31da
```
Note that re-uploading the certificates generates a new certificate key; the control-plane join command used later passes this new key rather than the one printed by kubeadm init.
```
[root@k8s-master-01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
```
```
[root@k8s-master-02 ~]# sh k8s-image-pull.sh
[root@k8s-master-02 ~]# yum install -y kubelet kubeadm kubectl docker-ce
[root@k8s-master-02 ~]# systemctl enable kubelet
[root@k8s-master-02 ~]# systemctl start docker && systemctl enable docker
[root@k8s-master-02 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
```
```
[root@k8s-master-02 ~]# kubeadm join 172.17.61.29:6443 --token v56gku.iwaurdlu8uzzsolq \
    --discovery-token-ca-cert-hash sha256:9995b90f733683322c6c81cb33cefd7e8d2f31c5593158a56f3951e3ccd36582 \
    --control-plane --certificate-key a26b32007c89b810d964adbfb6cfab61a886f42f7733cd78d63430b1555e31da
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "k8s-master-02" could not be reached
	[WARNING Hostname]: hostname "k8s-master-02": lookup k8s-master-02 on 114.114.114.114:53: no such host
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-02 localhost] and IPs [10.1.120.221 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-02 localhost] and IPs [10.1.120.221 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.120.221 172.17.61.29]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master-02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
```
```
[root@k8s-master-02 ~]# mkdir -p $HOME/.kube
[root@k8s-master-02 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-02 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
```
```
[root@k8s-worker-01 ~]# yum install -y kubelet kubeadm docker-ce
[root@k8s-worker-02 ~]# systemctl enable kubelet
[root@k8s-worker-01 ~]# systemctl start docker && systemctl enable docker
[root@k8s-worker-01 ~]# iptables -t filter -F && iptables -t mangle -F && iptables -t raw -F && iptables -t nat -F && iptables -X
[root@k8s-worker-01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
```
```
[root@k8s-worker-01 ~]# kubeadm join 172.17.61.29:6443 --token v56gku.iwaurdlu8uzzsolq \
    --discovery-token-ca-cert-hash sha256:9995b90f733683322c6c81cb33cefd7e8d2f31c5593158a56f3951e3ccd36582
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
```
[root@k8s-master-02 ~]# kubectl version --short=true
Client Version: v1.19.4
Server Version: v1.19.4
```
```
[root@k8s-master-01 ~]# kubectl get nodes -o wide
NAME            STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master-01   Ready    master   28m   v1.19.4   172.17.61.220   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://19.3.13
k8s-master-02   Ready    master   15m   v1.19.4   10.1.120.221    <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://19.3.13
k8s-worker-01   Ready    <none>   19m   v1.19.4   172.17.61.222   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://19.3.13
k8s-worker-02   Ready    <none>   12m   v1.19.4   172.17.61.223   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://19.3.13
```
```
[root@k8s-master-01 ~]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
coredns-f9fd979d6-22fxq                 1/1     Running   0          27m   10.244.0.3      k8s-master-01   <none>           <none>
coredns-f9fd979d6-f4ltq                 1/1     Running   0          27m   10.244.0.2      k8s-master-01   <none>           <none>
etcd-k8s-master-01                      1/1     Running   0          27m   172.17.61.220   k8s-master-01   <none>           <none>
etcd-k8s-master-02                      1/1     Running   0          14m   10.1.120.221    k8s-master-02   <none>           <none>
kube-apiserver-k8s-master-01            1/1     Running   0          27m   172.17.61.220   k8s-master-01   <none>           <none>
kube-apiserver-k8s-master-02            1/1     Running   0          14m   10.1.120.221    k8s-master-02   <none>           <none>
kube-controller-manager-k8s-master-01   1/1     Running   1          27m   172.17.61.220   k8s-master-01   <none>           <none>
kube-controller-manager-k8s-master-02   1/1     Running   0          14m   10.1.120.221    k8s-master-02   <none>           <none>
kube-flannel-ds-7thhj                   1/1     Running   0          14m   10.1.120.221    k8s-master-02   <none>           <none>
kube-flannel-ds-l667g                   1/1     Running   0          12m   172.17.61.223   k8s-worker-02   <none>           <none>
kube-flannel-ds-px579                   1/1     Running   0          18m   172.17.61.222   k8s-worker-01   <none>           <none>
kube-flannel-ds-rfzlw                   1/1     Running   0          19m   172.17.61.220   k8s-master-01   <none>           <none>
kube-proxy-7hw2j                        1/1     Running   0          14m   10.1.120.221    k8s-master-02   <none>           <none>
kube-proxy-7l4px                        1/1     Running   0          27m   172.17.61.220   k8s-master-01   <none>           <none>
kube-proxy-lt98v                        1/1     Running   0          12m   172.17.61.223   k8s-worker-02   <none>           <none>
kube-proxy-xqpw9                        1/1     Running   0          18m   172.17.61.222   k8s-worker-01   <none>           <none>
kube-scheduler-k8s-master-01            1/1     Running   1          27m   172.17.61.220   k8s-master-01   <none>           <none>
kube-scheduler-k8s-master-02            1/1     Running   0          14m   10.1.120.221    k8s-master-02   <none>           <none>
```
```
[root@k8s-master-01 ~]# kubectl apply -f myapp-deploy.yaml
[root@k8s-master-01 ~]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
mydeployment-7ffb5fd5ff-82jq5   1/1     Running   0          55s
mydeployment-7ffb5fd5ff-cvzs2   1/1     Running   0          55s
mydeployment-7ffb5fd5ff-xsm9r   1/1     Running   0          55s
```
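The myapp-deploy.yaml manifest itself is not reproduced in the original. A minimal sketch that would produce a three-replica Deployment like the one listed above (the image and labels here are placeholders, not the author's actual values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp                      # placeholder label
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1   # placeholder image
        ports:
        - containerPort: 80
```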
```
[root@k8s-master-02 ~]# kubectl get endpoints -n kube-system
NAME                      ENDPOINTS                                               AGE
kube-controller-manager   <none>                                                  164m
kube-dns                  10.244.0.4:53,10.244.0.5:53,10.244.0.4:53 + 3 more...   164m
kube-scheduler            <none>                                                  164m
[root@k8s-master-02 ~]# kubectl describe endpoints kube-controller-manager -n kube-system
Name:         kube-controller-manager
Namespace:    kube-system
Labels:       <none>
Annotations:  control-plane.alpha.kubernetes.io/leader:
                {"holderIdentity":"k8s-master-01_dcab4a77-811e-40ee-b3b0-d02a0c499f10","leaseDurationSeconds":15,"acquireTime":"2020-11-20T11:06:42Z","ren...
Subsets:
Events:
  Type    Reason          Age   From                     Message
  ----    ------          ----  ----                     -------
  Normal  LeaderElection  39s   kube-controller-manager  k8s-master-01_dcab4a77-811e-40ee-b3b0-d02a0c499f10 became leader
```
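To confirm that leader election actually fails over between the two masters, one option (a sketch, not from the original transcript) is to stop the current leader's controller-manager by moving its static pod manifest aside and watching the lock holder change:

```bash
# On k8s-master-01 (the current leader): stop the static pod temporarily
mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/

# On either master: holderIdentity should switch to k8s-master-02 within roughly leaseDurationSeconds
kubectl -n kube-system describe endpoints kube-controller-manager | grep holderIdentity

# Restore the manifest afterwards
mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
```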




