
Kubernetes Configuration, Part 13: Kubernetes Extensions

运维扫盲人 2020-11-20
1. Custom Resources (CRDs)
  • A CRD (CustomResourceDefinition) is a cluster-scoped resource;

  • Once a CRD is created, a GVR-style (group/version/resource) URL endpoint is registered on the API server, and the new kind can be used as a resource type and instantiated into objects;

  • The CRD defined in the example below uses the group name auth.ilinux.io; the registered endpoint can be queried directly, as shown in the sketch after this list.
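  After the CRD from section 1.1 is applied, the API server serves the new resource under its GVR path. A minimal sketch of querying that endpoint directly (assuming the users.auth.ilinux.io CRD below and the default namespace):

    # list User objects through the raw GVR endpoint registered by the CRD
    kubectl get --raw /apis/auth.ilinux.io/v1beta1/namespaces/default/users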

1.1 Defining a Custom Resource Type and Instantiating It as an Object
    [root@k8s-master-01 crd]# cat custom-resource.yaml 
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: users.auth.ilinux.io
    spec:
      group: auth.ilinux.io
      version: v1beta1
      names:
        kind: User
        plural: users
        singular: user
        shortNames:
        - u
      scope: Namespaced
      # Validation schema for the resource
      validation:
        openAPIV3Schema:
          properties:
            spec:
              properties:
                userID:
                  type: integer
                  minimum: 1
                  maximum: 65535
                groups:
                  type: array
                email:
                  type: string
                password:
                  type: string
                  format: password
              required: ["userID","groups"]
      # Columns displayed when listing the resource with kubectl
      additionalPrinterColumns:
      - name: userID
        type: integer
        description: The user ID.
        JSONPath: .spec.userID
      - name: groups
        type: string
        description: The groups of the user.
        JSONPath: .spec.groups
      - name: email
        type: string
        description: The email address of the user.
        JSONPath: .spec.email
      - name: password
        type: string
        description: The password of the user account.
        JSONPath: .spec.password
      # Status subresource; exposing object status requires a matching controller
      subresources:
        status: {}
      # Multi-version support
      versions:
      - name: v1beta1
        served: true
        storage: true
      - name: v1beta2
        served: true
        storage: false
    [root@k8s-master-01 crd]# kubectl get crd
    NAME                   CREATED AT
    users.auth.ilinux.io   2020-11-18T11:03:43Z
      [root@k8s-master-01 crd]# cat custom-resource-users.yaml
      apiVersion: auth.ilinux.io/v1beta1
      kind: User
      metadata:
        name: admin
        namespace: default
      spec:
        userID: 10
        email: k8s@ilinux.io
        groups:
        - superusers
        - administrator
        password: ikubernets
      [root@k8s-master-01 crd]# kubectl describe user admin
      Name:         admin
      Namespace:    default
      Labels:       <none>
      Annotations:
      API Version:  auth.ilinux.io/v1beta2
      Kind:         User
      Metadata:
        Creation Timestamp:  2020-11-18T11:22:06Z
        Generation:          1
        Managed Fields:
          API Version:  auth.ilinux.io/v1beta1
          Fields Type:  FieldsV1
          fieldsV1:
            f:metadata:
              f:annotations:
                .:
                f:kubectl.kubernetes.io/last-applied-configuration:
            f:spec:
              .:
              f:email:
              f:groups:
              f:password:
              f:userID:
          Manager:         kubectl
          Operation:       Update
          Time:            2020-11-18T11:22:06Z
        Resource Version:  23634141
        Self Link:         /apis/auth.ilinux.io/v1beta2/namespaces/default/users/admin
        UID:               6cde6d30-6f4e-4a74-8108-5f43e31cf8b5
      Spec:
        Email:  k8s@ilinux.io
        Groups:
          superusers
          administrator
        Password:  ikubernets
        User ID:   10
      Events:  <none>
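      Because the CRD defines the short name u and a set of additionalPrinterColumns, the object can also be listed in either of the following forms, with the custom columns (userID, groups, email, password) shown in the output. A quick check (a sketch):

        # both commands list the same User objects
        kubectl get users.auth.ilinux.io -n default
        kubectl get u -n default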

      2. Kubernetes Cluster High Availability
      1. A Kubernetes cluster needs an odd number of master nodes, so that a majority (more than half) of them always remains available to form a quorum;

      2. The components of a highly available Kubernetes cluster are implemented as follows:

        • The distributed storage cluster provided by etcd itself gives Kubernetes a reliable storage layer;

        • The stateless apiserver runs as multiple replicas, with a load balancer in front of them to distribute requests (clients simply point at the load balancer, as in the kubeconfig sketch after this list);

        • The controller-manager runs as multiple replicas and elects a leader through its built-in leader election; when the leader fails, the remaining replicas automatically start a new election;

        • The scheduler runs as multiple replicas and elects a leader through its built-in leader election; when the leader fails, the remaining replicas automatically start a new election;

        https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
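        A minimal sketch of how clients reach the apiserver replicas: the cluster entry in each kubeconfig points at the load balancer VIP rather than at an individual master (172.17.61.29:6443 is the VIP used in the lab below; the CA data is elided):

          apiVersion: v1
          kind: Config
          clusters:
          - name: kubernetes
            cluster:
              certificate-authority-data: <base64 CA bundle>   # elided
              server: https://172.17.61.29:6443                # load balancer VIP, not a single apiserver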

        2.1 Lab Environment Plan

        Hostname                              IP              Packages
        k8s-balance-master/calico-worker01    172.17.61.32    nginx, keepalived
        k8s-balance-backup/calico-worker02    172.17.61.33    nginx, keepalived
        k8s-master-01                         172.17.61.220   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, Flannel
        k8s-master-02                         172.17.61.221   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, Flannel
        k8s-worker-01                         172.17.61.222   kubelet, kube-proxy, Docker, etcd, Flannel
        k8s-worker-02                         172.17.61.223   kubelet, kube-proxy, Docker, etcd, Flannel
        2.2 Load Balancer Deployment
          [root@calico-worker01 ~]# yum install -y epel-release nginx keepalived
          [root@calico-worker02 ~]# yum install -y epel-release nginx keepalived
          2.2.1 nginx Configuration
            [root@calico-worker01 ~]# vim /etc/nginx/nginx.conf
            stream {
                log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
                access_log /var/log/nginx/k8s-access.log main;

                upstream kubernetes-apiserver {
                    server 172.17.61.220:6443;
                    server 172.17.61.221:6443;
                }
                server {
                    listen 6443;
                    proxy_pass kubernetes-apiserver;
                }
            }
            [root@calico-worker01 ~]# nginx -t
            nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
            nginx: configuration file /etc/nginx/nginx.conf test is successful
              [root@calico-worker02 ~]# vim /etc/nginx/nginx.conf
              stream {
                  log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
                  access_log /var/log/nginx/k8s-access.log main;

                  upstream kubernetes-apiserver {
                      server 172.17.61.220:6443;
                      server 172.17.61.221:6443;
                  }
                  server {
                      listen 6443;
                      proxy_pass kubernetes-apiserver;
                  }
              }
              [root@calico-worker02 ~]# nginx -t
              nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
              nginx: configuration file /etc/nginx/nginx.conf test is successful
              2.2.2 keepalived Configuration
              [1] master
                [root@calico-worker01 ~]# cat /etc/keepalived/keepalived.conf 
                ! Configuration File for keepalived

                global_defs {
                   notification_email {
                     acassen@firewall.loc
                     failover@firewall.loc
                     sysadmin@firewall.loc
                   }
                   notification_email_from Alexandre.Cassen@firewall.loc
                   smtp_server 127.0.0.1
                   smtp_connect_timeout 30
                   router_id NGINX_MASTER
                   vrrp_mcast_group4 224.1.101.33
                }

                vrrp_script check_nginx {
                    script "/etc/nginx/check_nginx.sh"
                    weight -30
                    interval 1
                    fall 1
                }

                vrrp_instance VI_1 {
                    state MASTER
                    interface eth1
                    virtual_router_id 51
                    priority 100
                    advert_int 1
                    authentication {
                        auth_type PASS
                        auth_pass 1111
                    }
                    virtual_ipaddress {
                        172.17.61.29/24
                    }
                    track_script {
                        check_nginx
                    }
                }
                [2] slave
                  [root@calico-worker02 ~]# cat /etc/keepalived/keepalived.conf 
                  ! Configuration File for keepalived

                  global_defs {
                     notification_email {
                       acassen@firewall.loc
                       failover@firewall.loc
                       sysadmin@firewall.loc
                     }
                     notification_email_from Alexandre.Cassen@firewall.loc
                     smtp_server 127.0.0.1
                     smtp_connect_timeout 30
                     router_id NGINX_MASTER
                     vrrp_mcast_group4 224.1.101.33
                  }

                  vrrp_script check_nginx {
                      script "/etc/nginx/check_nginx.sh"
                      weight -30
                      interval 1
                      fall 1
                  }

                  vrrp_instance VI_1 {
                      state BACKUP
                      interface eth1
                      virtual_router_id 51
                      priority 90
                      advert_int 1
                      authentication {
                          auth_type PASS
                          auth_pass 1111
                      }
                      virtual_ipaddress {
                          172.17.61.29/24
                      }
                      track_script {
                          check_nginx
                      }
                  }
                  2.2.3 Define the nginx Health-Check Script
                    [root@calico-worker01 ~]# systemctl start nginx keepalived
                    [root@calico-worker02 ~]# systemctl start nginx keepalived
                    [root@calico-worker01 ~]# cat /etc/nginx/check_nginx.sh
                    #!/bin/bash
                    # exit non-zero when no nginx process is running so keepalived lowers this node's priority
                    count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
                    if [ "$count" -eq 0 ];then
                        exit 1
                    else
                        exit 0
                    fi
                    [root@calico-worker02 ~]# cat /etc/nginx/check_nginx.sh 
                    #!/bin/bash
                    count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
                    if [ "$count" -eq 0 ];then
                        exit 1
                    else
                        exit 0
                    fi
                    [root@calico-worker01 ~]# chmod +x /etc/nginx/check_nginx.sh
                    [root@calico-worker02 ~]# chmod +x /etc/nginx/check_nginx.sh

                      [root@calico-worker01 ~]# ip a
                      eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
                      link/ether 52:54:00:e0:3f:83 brd ff:ff:ff:ff:ff:ff
                      inet 172.17.61.32/24 brd 172.17.61.255 scope global eth1
                      valid_lft forever preferred_lft forever
                      inet 172.17.61.29/24 scope global secondary eth1
                      valid_lft forever preferred_lft forever
                      [root@calico-worker01 ~]# for process in `ps -ef | grep nginx | awk -F' ' '{print $2}'`;do kill -9 $process; done
                      [root@calico-worker01 ~]# systemctl status keepalived
                      Nov 19 12:56:58 calico-worker01 Keepalived_vrrp[19742]: VRRP_Instance(VI_1) Received advert with higher priority 90, ours 70
                      Nov 19 12:56:58 calico-worker01 Keepalived_vrrp[19742]: VRRP_Instance(VI_1) Entering BACKUP STATE
                      Nov 19 12:56:58 calico-worker01 Keepalived_vrrp[19742]: VRRP_Instance(VI_1) removing protocol VIPs.
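                      The log above shows the check script doing its job: once nginx was killed, check_nginx returned non-zero and keepalived subtracted the configured weight of 30 from the MASTER's priority (100 - 30 = 70), dropping it below the backup's 90 and triggering the failover. A quick verification that the VIP has moved (a sketch, run on the backup node):

                        [root@calico-worker02 ~]# ip addr show eth1 | grep 172.17.61.29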

                      2.3 Deploying the Two Masters
                      2.3.1 Configuring Master-01
                      [1] Pull the required images (master-01, master-02)
                        [root@k8s-master-01 ~]# cat k8s-image-pull.sh 
                        #!/bin/bash
                        # Script For Quick Pull K8S Docker Images
                        KUBE_VERSION=v1.19.4
                        PAUSE_VERSION=3.2
                        CORE_DNS_VERSION=1.7.0
                        ETCD_VERSION=3.4.13-0
                        # pull kubernetes images from hub.docker.com
                        docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
                        docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
                        docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
                        docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
                        # pull aliyuncs mirror docker images
                        docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
                        docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
                        docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
                        # retag to k8s.gcr.io prefix
                        docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION
                        docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
                        docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
                        docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
                        docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
                        docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
                        docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION
                        # untag the origin tags; the images themselves won't be deleted
                        docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
                        docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
                        docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
                        docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
                        docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
                        docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
                        docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
                        [2] Verify the downloaded images
                          [root@k8s-master-01 ~]# docker image ls
                          REPOSITORY                           TAG        IMAGE ID       CREATED        SIZE
                          k8s.gcr.io/kube-proxy                v1.19.4    635b36f4d89f   8 days ago     118MB
                          k8s.gcr.io/kube-controller-manager   v1.19.4    4830ab618586   8 days ago     111MB
                          k8s.gcr.io/kube-apiserver            v1.19.4    b15c6247777d   8 days ago     119MB
                          k8s.gcr.io/kube-scheduler            v1.19.4    14cd22f7abe7   8 days ago     45.7MB
                          quay.io/coreos/flannel               v0.13.0    e708f4bb69e3   5 weeks ago    57.2MB
                          k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   2 months ago   253MB
                          k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   5 months ago   45.2MB
                          k8s.gcr.io/pause                     3.2        80d28bedfe5d   9 months ago   683kB
                          [3] Install the required packages (master-01, master-02)
                            [root@k8s-master-01 ~]# sh k8s-image-pull.sh
                            [root@k8s-master-01 ~]# yum install -y kubelet kubeadm kubectl docker-ce
                            [root@k8s-master-01 ~]# systemctl enable kubelet
                            [root@k8s-master-01 ~]# systemctl start docker && systemctl enable docker
                            [root@k8s-master-01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
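                            The echo above only changes the setting for the running kernel; to make it persistent across reboots, a sysctl drop-in can be added (a sketch; the file name k8s.conf is an arbitrary choice):

                              # /etc/sysctl.d/k8s.conf -- loaded at boot and by `sysctl --system`
                              net.bridge.bridge-nf-call-iptables = 1
                              net.bridge.bridge-nf-call-ip6tables = 1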
                            [4] Initialize master-01
                              [root@k8s-master-01 ~]# kubeadm init --control-plane-endpoint "172.17.61.29:6443" --upload-certs --kubernetes-version=v1.19.4  --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 --apiserver-advertise-address=0.0.0.0 --ignore-preflight-errors=all
                              W1120 16:22:19.818127 24397 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
                              [init] Using Kubernetes version: v1.19.4
                              [preflight] Running pre-flight checks
                              [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
                              [preflight] Pulling images required for setting up a Kubernetes cluster
                              [preflight] This might take a minute or two, depending on the speed of your internet connection
                              [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
                              [certs] Using certificateDir folder "/etc/kubernetes/pki"
                              [certs] Generating "ca" certificate and key
                              [certs] Generating "apiserver" certificate and key
                              [certs] apiserver serving cert is signed for DNS names [k8s-master-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.120.220 172.17.61.29]
                              [certs] Generating "apiserver-kubelet-client" certificate and key
                              [certs] Generating "front-proxy-ca" certificate and key
                              [certs] Generating "front-proxy-client" certificate and key
                              [certs] Generating "etcd/ca" certificate and key
                              [certs] Generating "etcd/server" certificate and key
                              [certs] etcd/server serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [10.1.120.220 127.0.0.1 ::1]
                              [certs] Generating "etcd/peer" certificate and key
                              [certs] etcd/peer serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [10.1.120.220 127.0.0.1 ::1]
                              [certs] Generating "etcd/healthcheck-client" certificate and key
                              [certs] Generating "apiserver-etcd-client" certificate and key
                              [certs] Generating "sa" key and public key
                              [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
                              [kubeconfig] Writing "admin.conf" kubeconfig file
                              [kubeconfig] Writing "kubelet.conf" kubeconfig file
                              [kubeconfig] Writing "controller-manager.conf" kubeconfig file
                              [kubeconfig] Writing "scheduler.conf" kubeconfig file
                              [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
                              [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
                              [kubelet-start] Starting the kubelet
                              [control-plane] Using manifest folder "/etc/kubernetes/manifests"
                              [control-plane] Creating static Pod manifest for "kube-apiserver"
                              [control-plane] Creating static Pod manifest for "kube-controller-manager"
                              [control-plane] Creating static Pod manifest for "kube-scheduler"
                              [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
                              [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
                              [apiclient] All control plane components are healthy after 15.031798 seconds
                              [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
                              [kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
                              [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
                              [upload-certs] Using certificate key:
                              c72de7b1a6940f34ec65608b4ea8d9324dc7a2567bdb671517fd153264b4b16c
                              [mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
                              [mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
                              [bootstrap-token] Using token: v56gku.iwaurdlu8uzzsolq
                              [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
                              [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
                              [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
                              [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
                              [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
                              [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
                              [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
                              [addons] Applied essential addon: CoreDNS
                              [addons] Applied essential addon: kube-proxy


                              Your Kubernetes control-plane has initialized successfully!


                              To start using your cluster, you need to run the following as a regular user:


                              mkdir -p $HOME/.kube
                              sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
                              sudo chown $(id -u):$(id -g) $HOME/.kube/config


                              You should now deploy a pod network to the cluster.
                              Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
                              https://kubernetes.io/docs/concepts/cluster-administration/addons/


                              You can now join any number of the control-plane node running the following command on each as root:


                              kubeadm join 172.17.61.29:6443 --token v56gku.iwaurdlu8uzzsolq \
                              --discovery-token-ca-cert-hash sha256:9995b90f733683322c6c81cb33cefd7e8d2f31c5593158a56f3951e3ccd36582 \
                              --control-plane --certificate-key c72de7b1a6940f34ec65608b4ea8d9324dc7a2567bdb671517fd153264b4b16c


                              Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
                              As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
                              "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.


                              Then you can join any number of worker nodes by running the following on each as root:


                              kubeadm join 172.17.61.29:6443 --token v56gku.iwaurdlu8uzzsolq \
                              --discovery-token-ca-cert-hash sha256:9995b90f733683322c6c81cb33cefd7e8d2f31c5593158a56f3951e3ccd36582
                              Note: when initializing master-01, --control-plane-endpoint is a required option and must point at the load balancer VIP; the value of --pod-network-cidr has to match the Network value in the kube-flannel configuration; --upload-certs tells master-01 to upload the certificate keys to the cluster instead of the certificates being shared and downloaded manually (alternatively, scp can be used to copy the certificate files into the corresponding directories on each master node).
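                              For reference, the matching fragment of kube-flannel.yml (the kube-flannel-cfg ConfigMap) looks roughly like this; its Network value is what --pod-network-cidr must agree with:

                                net-conf.json: |
                                  {
                                    "Network": "10.244.0.0/16",
                                    "Backend": {
                                      "Type": "vxlan"
                                    }
                                  }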
                              [5] Configure the admin kubeconfig (master-01)
                                [root@k8s-master-01 ~]# mkdir -p $HOME/.kube
                                [root@k8s-master-01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
                                [root@k8s-master-01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
                                [6] Generate the certificate key for master-02 to join the cluster (master-01)
                                  [root@k8s-master-01 ~]# kubeadm init phase upload-certs --upload-certs
                                  W1120 16:32:27.805797 30237 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
                                  [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
                                  [upload-certs] Using certificate key:
                                  a26b32007c89b810d964adbfb6cfab61a886f42f7733cd78d63430b1555e31da
                                  Note: since master-01 was initialized with --upload-certs, the certificate key that lets master-02 join as a control-plane node is regenerated with kubeadm init phase upload-certs; the key uploaded during kubeadm init is deleted after two hours, so it is generated again here.
                                   [7] Deploy the flannel network plugin (master-01)
                                    [root@k8s-master-01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
                                    podsecuritypolicy.policy/psp.flannel.unprivileged created
                                    clusterrole.rbac.authorization.k8s.io/flannel created
                                    clusterrolebinding.rbac.authorization.k8s.io/flannel created
                                    serviceaccount/flannel created
                                    configmap/kube-flannel-cfg created
                                    daemonset.apps/kube-flannel-ds created
                                     Note: the flannel plugin only has to be applied on master-01; the DaemonSet it creates rolls the plugin out to every node.
                                     2.3.2 Configuring Master-02
                                     [1] Install the required packages (master-02)
                                      [root@k8s-master-02 ~]# sh k8s-image-pull.sh
                                      [root@k8s-master-02 ~]# yum install -y kubelet kubeadm kubectl docker-ce
                                      [root@k8s-master-02 ~]# systemctl enable kubelet
                                       [root@k8s-master-02 ~]# systemctl start docker && systemctl enable docker
                                      [root@k8s-master-02 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
                                       [2] Join the cluster (master-02)
                                        [root@k8s-master-02 ~]# kubeadm join 172.17.61.29:6443 --token v56gku.iwaurdlu8uzzsolq \
                                             --discovery-token-ca-cert-hash sha256:9995b90f733683322c6c81cb33cefd7e8d2f31c5593158a56f3951e3ccd36582 \
                                             --control-plane --certificate-key a26b32007c89b810d964adbfb6cfab61a886f42f7733cd78d63430b1555e31da
                                        [preflight] Running pre-flight checks
                                        [WARNING Hostname]: hostname "k8s-master-02" could not be reached
                                        [WARNING Hostname]: hostname "k8s-master-02": lookup k8s-master-02 on 114.114.114.114:53: no such host
                                        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
                                        [preflight] Reading configuration from the cluster...
                                        [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
                                        [preflight] Running pre-flight checks before initializing the new control plane instance
                                        [preflight] Pulling images required for setting up a Kubernetes cluster
                                        [preflight] This might take a minute or two, depending on the speed of your internet connection
                                        [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
                                        [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
                                        [certs] Using certificateDir folder "/etc/kubernetes/pki"
                                        [certs] Generating "etcd/peer" certificate and key
                                        [certs] etcd/peer serving cert is signed for DNS names [k8s-master-02 localhost] and IPs [10.1.120.221 127.0.0.1 ::1]
                                        [certs] Generating "etcd/server" certificate and key
                                        [certs] etcd/server serving cert is signed for DNS names [k8s-master-02 localhost] and IPs [10.1.120.221 127.0.0.1 ::1]
                                        [certs] Generating "etcd/healthcheck-client" certificate and key
                                        [certs] Generating "apiserver-etcd-client" certificate and key
                                        [certs] Generating "apiserver" certificate and key
                                        [certs] apiserver serving cert is signed for DNS names [k8s-master-02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.120.221 172.17.61.29]
                                        [certs] Generating "apiserver-kubelet-client" certificate and key
                                        [certs] Generating "front-proxy-client" certificate and key
                                        [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
                                        [certs] Using the existing "sa" key
                                        [kubeconfig] Generating kubeconfig files
                                        [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
                                        [kubeconfig] Writing "admin.conf" kubeconfig file
                                        [kubeconfig] Writing "controller-manager.conf" kubeconfig file
                                        [kubeconfig] Writing "scheduler.conf" kubeconfig file
                                        [control-plane] Using manifest folder "/etc/kubernetes/manifests"
                                        [control-plane] Creating static Pod manifest for "kube-apiserver"
                                        [control-plane] Creating static Pod manifest for "kube-controller-manager"
                                        [control-plane] Creating static Pod manifest for "kube-scheduler"
                                        [check-etcd] Checking that the etcd cluster is healthy
                                        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
                                        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
                                        [kubelet-start] Starting the kubelet
                                        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
                                        [etcd] Announced new etcd member joining to the existing etcd cluster
                                        [etcd] Creating static Pod manifest for "etcd"
                                        [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
                                        [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
                                        [mark-control-plane] Marking the node k8s-master-02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
                                        [mark-control-plane] Marking the node k8s-master-02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]


                                        This node has joined the cluster and a new control plane instance was created:


                                        * Certificate signing request was sent to apiserver and approval was received.
                                        * The Kubelet was informed of the new secure connection details.
                                        * Control plane (master) label and taint were applied to the new node.
                                        * The Kubernetes control plane instances scaled up.
                                        * A new etcd member was added to the local/stacked etcd cluster.


                                        To start administering your cluster from this node, you need to run the following as a regular user:


                                        mkdir -p $HOME/.kube
                                        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
                                        sudo chown $(id -u):$(id -g) $HOME/.kube/config


                                        Run 'kubectl get nodes' to see this node join the cluster.


                                         Note: in the join command, the value passed with --control-plane --certificate-key must be replaced by the key generated on master-01 with kubeadm init phase upload-certs.
                                         [3] Configure the admin kubeconfig (master-02)
                                          [root@k8s-master-02 ~]# mkdir -p $HOME/.kube
                                          [root@k8s-master-02 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
                                          [root@k8s-master-02 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
                                          2.3.3 Configuring the Worker Nodes
                                          [1] Install the required packages (worker-01, worker-02)
                                            [root@k8s-worker-01 ~]# yum  install -y kubelet kubeadm docker-ce
                                            [root@k8s-worker-02 ~]# systemctl enable kubelet
                                            [root@k8s-worker-01 ~]# systemctl start docker && systemctl enable docker
                                            [root@k8s-worker-01 ~]# iptables -t filter -F && iptables -t mangle -F && iptables -t raw -F && iptables -t nat -F && iptables -X
                                            [root@k8s-worker-01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
                                             [2] Join the cluster (worker-01, worker-02)
                                              [root@k8s-worker-01 ~]# kubeadm join 172.17.61.29:6443 --token v56gku.iwaurdlu8uzzsolq \
                                                   --discovery-token-ca-cert-hash sha256:9995b90f733683322c6c81cb33cefd7e8d2f31c5593158a56f3951e3ccd36582
                                              [preflight] Running pre-flight checks
                                              [preflight] Reading configuration from the cluster...
                                              [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
                                              [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
                                              [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
                                              [kubelet-start] Starting the kubelet
                                              [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...


                                              This node has joined the cluster:
                                              * Certificate signing request was sent to apiserver and a response was received.
                                              * The Kubelet was informed of the new secure connection details.


                                              Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


                                               2.3.4 Testing
                                               [1] Check the cluster version (master-01/master-02)
                                                [root@k8s-master-02 ~]# kubectl version --short=true
                                                Client Version: v1.19.4
                                                Server Version: v1.19.4
                                                 [2] Check the node status (master-01/master-02)
                                                  [root@k8s-master-01 ~]# kubectl get nodes -o wide
                                                  NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
                                                  k8s-master-01 Ready master 28m v1.19.4 172.17.61.220 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://19.3.13
                                                  k8s-master-02 Ready master 15m v1.19.4 10.1.120.221 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://19.3.13
                                                  k8s-worker-01 Ready <none> 19m v1.19.4 172.17.61.222 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://19.3.13
                                                  k8s-worker-02 Ready <none> 12m v1.19.4 172.17.61.223 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://19.3.13
                                                   [3] Check the Pods of the cluster system components (master-01/master-02)
                                                    [root@k8s-master-01 ~]# kubectl get pods -n kube-system -o wide
                                                    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
                                                    coredns-f9fd979d6-22fxq 1/1 Running 0 27m 10.244.0.3 k8s-master-01 <none> <none>
                                                    coredns-f9fd979d6-f4ltq 1/1 Running 0 27m 10.244.0.2 k8s-master-01 <none> <none>
                                                    etcd-k8s-master-01 1/1 Running 0 27m 172.17.61.220 k8s-master-01 <none> <none>
                                                    etcd-k8s-master-02 1/1 Running 0 14m 10.1.120.221 k8s-master-02 <none> <none>
                                                    kube-apiserver-k8s-master-01 1/1 Running 0 27m 172.17.61.220 k8s-master-01 <none> <none>
                                                    kube-apiserver-k8s-master-02 1/1 Running 0 14m 10.1.120.221 k8s-master-02 <none> <none>
                                                    kube-controller-manager-k8s-master-01 1/1 Running 1 27m 172.17.61.220 k8s-master-01 <none> <none>
                                                    kube-controller-manager-k8s-master-02 1/1 Running 0 14m 10.1.120.221 k8s-master-02 <none> <none>
                                                    kube-flannel-ds-7thhj 1/1 Running 0 14m 10.1.120.221 k8s-master-02 <none> <none>
                                                    kube-flannel-ds-l667g 1/1 Running 0 12m 172.17.61.223 k8s-worker-02 <none> <none>
                                                    kube-flannel-ds-px579 1/1 Running 0 18m 172.17.61.222 k8s-worker-01 <none> <none>
                                                    kube-flannel-ds-rfzlw 1/1 Running 0 19m 172.17.61.220 k8s-master-01 <none> <none>
                                                    kube-proxy-7hw2j 1/1 Running 0 14m 10.1.120.221 k8s-master-02 <none> <none>
                                                    kube-proxy-7l4px 1/1 Running 0 27m 172.17.61.220 k8s-master-01 <none> <none>
                                                    kube-proxy-lt98v 1/1 Running 0 12m 172.17.61.223 k8s-worker-02 <none> <none>
                                                    kube-proxy-xqpw9 1/1 Running 0 18m 172.17.61.222 k8s-worker-01 <none> <none>
                                                    kube-scheduler-k8s-master-01 1/1 Running 1 27m 172.17.61.220 k8s-master-01 <none> <none>
                                                    kube-scheduler-k8s-master-02 1/1 Running 0 14m 10.1.120.221 k8s-master-02 <none> <none>
                                                     [4] Test creating Pod objects (master-01/master-02)
                                                      [root@k8s-master-01 ~]# kubectl apply -f myapp-deploy.yaml 
                                                      [root@k8s-master-01 ~]# kubectl get pod
                                                      NAME READY STATUS RESTARTS AGE
                                                      mydeployment-7ffb5fd5ff-82jq5 1/1 Running 0 55s
                                                      mydeployment-7ffb5fd5ff-cvzs2 1/1 Running 0 55s
                                                      mydeployment-7ffb5fd5ff-xsm9r 1/1 Running 0 55s
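                                                       The myapp-deploy.yaml manifest itself is not shown above; a minimal sketch consistent with the output (a Deployment named mydeployment with 3 replicas; the image and labels here are assumptions for illustration):

                                                         apiVersion: apps/v1
                                                         kind: Deployment
                                                         metadata:
                                                           name: mydeployment
                                                         spec:
                                                           replicas: 3
                                                           selector:
                                                             matchLabels:
                                                               app: myapp                      # assumed label
                                                           template:
                                                             metadata:
                                                               labels:
                                                                 app: myapp
                                                             spec:
                                                               containers:
                                                               - name: myapp
                                                                 image: ikubernetes/myapp:v1   # assumed image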
                                                       [5] Check the cluster's kube-controller-manager component (master-01/master-02)
                                                        [root@k8s-master-02 ~]# kubectl get endpoints -n kube-system
                                                        NAME ENDPOINTS AGE
                                                        kube-controller-manager <none> 164m
                                                        kube-dns 10.244.0.4:53,10.244.0.5:53,10.244.0.4:53 + 3 more... 164m
                                                        kube-scheduler <none> 164m
                                                        [root@k8s-master-02 ~]# kubectl describe endpoints kube-controller-manager -n kube-system
                                                        Name: kube-controller-manager
                                                        Namespace: kube-system
                                                        Labels: <none>
                                                        Annotations: control-plane.alpha.kubernetes.io/leader:
                                                        {"holderIdentity":"k8s-master-01_dcab4a77-811e-40ee-b3b0-d02a0c499f10","leaseDurationSeconds":15,"acquireTime":"2020-11-20T11:06:42Z","ren...
                                                        Subsets:
                                                        Events:
                                                        Type Reason Age From Message
                                                        ---- ------ ---- ---- -------
                                                          Normal  LeaderElection  39s   kube-controller-manager  k8s-master-01_dcab4a77-811e-40ee-b3b0-d02a0c499f10 became leader
                                                         Tip: the message "kube-controller-manager k8s-master-01_dcab4a77-811e-40ee-b3b0-d02a0c499f10 became leader" indicates that k8s-master-01 currently holds the controller-manager leader role.
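                                                         The scheduler elects its leader in the same way; it can be inspected with the same approach (a sketch):

                                                           kubectl describe endpoints kube-scheduler -n kube-system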