
Installing K8s 1.24.2 on Ubuntu 22.04 -- Three-Node Cluster Setup (One Master, Two Workers)

rundba 2022-07-26

Install K8s 1.24.2 on Ubuntu 22.04, using a one-master, two-worker topology.

 

0. ENV


 

0.1 Software versions

  • Ubuntu Server 22.04 LTS;

  • containerd 1.6.6-1;

  • calico 3.18;

  • kubernetes v1.24.2-0 (kubelet, kubeadm, kubectl).


1 master and 2 workers:

  • master1  192.168.80.125

  • node1    192.168.80.126

  • node2    192.168.80.127


0.2 Cluster type

● Kubernetes clusters fall roughly into two categories: one-master-many-workers and many-masters-many-workers.

● One master, multiple workers: a single Master node plus several Node machines. Simple to set up, but the master is a single point of failure, so it suits test environments (the approach used here).

● Multiple masters, multiple workers: several Master and several Node machines. More involved to set up, but highly available, so it suits production (covered in a later post).


0.3 Installation method

There are several ways to deploy Kubernetes; the mainstream ones are kubeadm, minikube, and binary packages.

  • minikube: a tool for quickly standing up a single-node Kubernetes instance.

  • kubeadm: a tool for quickly bootstrapping a Kubernetes cluster.

  • Binary packages: download each component's binaries from the official site and install them one by one; this route is the most instructive for understanding the Kubernetes components.

Besides the official tools kubespray and kops, many third-party installers are available, such as rancher (SUSE), RKE (SUSE), kubesphere, kuboard, Alauda, MicroK8s (Ubuntu), and Charmed Kubernetes (Ubuntu).

This guide uses kubeadm.


0.4 Host plan

Role     IP address      OS                       Spec
master1  192.168.80.125  Ubuntu Server 22.04 LTS  2 CPU, 2 GB RAM, 50 GB disk
node1    192.168.80.126  Ubuntu Server 22.04 LTS  2 CPU, 2 GB RAM, 50 GB disk
node2    192.168.80.127  Ubuntu Server 22.04 LTS  2 CPU, 2 GB RAM, 50 GB disk


0.5 Environment notes

This is a lab/learning environment with only one master; for production, a multi-master deployment is recommended.

● The build needs three Ubuntu servers (one master, two workers); on each server install containerd 1.6.6, kubeadm (1.24.2), kubectl (1.24.2), and kubelet (1.24.2).

Unless stated otherwise, run every step below on all three machines.


 

1. Environment initialization



1.1 Check and upgrade OS packages

    cat /etc/issue     #currently Ubuntu 22.04 LTS

#Check the kernel version

    uname -r     #currently 5.15.0-25-generic

#Back up sources.list before switching to the Aliyun mirror

    sudo cp /etc/apt/sources.list{,.`date "+%Y%m%d"`}   #backup named after today's date

    ls -lrt /etc/apt/sources.list.`date "+%Y%m%d"` #confirm the backup exists, e.g. sources.list.20220630

#Switch the sources to the Aliyun mirror to speed up upgrades

    sudo sed -i 's#http://archive.ubuntu.com#http://mirrors.aliyun.com#g' /etc/apt/sources.list    #Ubuntu Server 22.04 LTS
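The effect of this substitution can be sanity-checked on a sample line before touching the real sources.list (the line below is illustrative, not copied from a live system):

```shell
# Sample Ubuntu apt source line (illustrative):
line='deb http://archive.ubuntu.com/ubuntu jammy main restricted'

# Same substitution as above: swap the archive host for the Aliyun mirror
echo "$line" | sed 's#http://archive.ubuntu.com#http://mirrors.aliyun.com#g'
# -> deb http://mirrors.aliyun.com/ubuntu jammy main restricted
```

Using `#` as the sed delimiter avoids having to escape the slashes inside the URLs.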


#[Refresh the package index from the new sources]

    sudo apt update -y

#List upgradable packages

    sudo apt list --upgradable

#[Upgrade packages]
#Bring system packages up to the latest stable versions

    sudo apt upgrade -y

#[Reboot]
#If the kernel was updated, the system may suggest a reboot so that the new kernel is used

    sudo systemctl reboot

#Make sure all nodes end up on the same kernel version

    uname -a            #kernel after the upgrade: 5.15.0-40-generic

If you have already installed kubeadm, run apt-get update && apt-get upgrade (or yum update) to get the latest version of kubeadm.

During the upgrade, kubelet restarts every few seconds, sitting in a crashloop while it waits for instructions from kubeadm. This crashloop is expected; once the control plane has been initialized, kubelet runs normally.


1.2 Shell prompt setup [optional]:

    echo "export PS1='\[\033[35m\][\[\033[00m\]\[\033[31m\]\u\[\033[33m\]\[\033[33m\]@\[\033[03m\]\[\033[35m\]\h\[\033[00m\] \[\033[32m\]\w\[\033[00m\]\[\033[35m\]]\[\033[00m\]\[\033[31m\]\\$\[\033[00m\] '" >> ~/.bash_profile
    source ~/.bash_profile

#Overwrite ~/.vimrc [optional]

tee ~/.vimrc << EOF
set nu
set paste
set expandtab
set tabstop=2
EOF

#In vim, a tab character displays as 8 spaces by default; press Tab a few times in a terminal and you quickly overflow the display width. For a comfortable editing experience, shrink the tab display width, and replace tabs with spaces if you do not want literal tab characters inserted.

#expandtab replaces each inserted tab character with a number of spaces; how many depends on the tabstop option.

#tabstop sets how many spaces a tab character displays as (default 8). To set it to 2 interactively:

#:set tabstop=2


1.3 Hostname resolution

#To make it easy for cluster nodes to reach each other later, configure hostname resolution here; in a company environment a DNS server is recommended.

#Set each host's hostname (run the matching command on its own machine); short names or FQDNs both work, FQDNs are used here

    sudo hostnamectl set-hostname master1.rundba.com   #on master1
    sudo hostnamectl set-hostname node1.rundba.com     #on node1
    sudo hostnamectl set-hostname node2.rundba.com     #on node2

#Add hosts entries on every machine

sudo tee -a /etc/hosts << EOF
192.168.80.125 master1.rundba.com master1
192.168.80.126 node1.rundba.com node1
192.168.80.127 node2.rundba.com node2
EOF


1.4 SSH equivalence setup (optional)

#With SSH equivalence in place you can log in between nodes without a password, which makes later maintenance easier

#Generate an RSA key pair on every node

    ssh-keygen

#Copy every node's public key to master1

    ssh-copy-id master1

#Push master1's authorized_keys out to the other nodes

    for i in node1 node2 ; do scp ~/.ssh/authorized_keys $i:~/.ssh/ ; done

#Verify SSH equivalence

    for i in master1 node1 node2
    do
    #echo $i && ssh $i date
    echo -e "\033[31m $i: \033[0m `ssh $i date`"
    done

#A script can also set up equivalence from master1 to the other nodes in one shot:

    sshStepV3.sh

#Available via WeChat: landnow; see:

0179.U One-click SSH equivalence setup script on Ubuntu


1.5 Time synchronization

#K8S requires the clocks of all cluster nodes to agree. Here chronyd (ntpd also works) syncs against Aliyun's NTP servers; in production you can use your own time server.

#Install chrony

    sudo apt install -y chrony

#Comment out every line starting with pool

    sudo sed -i 's/^pool/#pool/g' /etc/chrony/chrony.conf

#Add two Aliyun NTP servers

    sudo sed -i '/iburst maxsources 2/a server ntp1.aliyun.com iburst\nserver ntp2.aliyun.com iburst' /etc/chrony/chrony.conf
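The two edits can be rehearsed against a throwaway file first; the fragment below imitates the pool section of Ubuntu's default chrony.conf (its exact contents are an assumption, only the shape matters):

```shell
# Fragment imitating the default pool section of /etc/chrony/chrony.conf (assumed):
cat > /tmp/chrony-demo.conf <<'EOF'
pool ntp.ubuntu.com        iburst maxsources 4
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
EOF

# Comment out every pool line, then append the two Aliyun servers
sed -i 's/^pool/#pool/g' /tmp/chrony-demo.conf
sed -i '/iburst maxsources 2/a server ntp1.aliyun.com iburst\nserver ntp2.aliyun.com iburst' /tmp/chrony-demo.conf

grep -c '^server' /tmp/chrony-demo.conf   # -> 2
```

Note that the `a` append still fires on the commented line, because `#pool 2... maxsources 2` still matches the `/iburst maxsources 2/` address.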


#Restart the service, enable it at boot, and check it (on Ubuntu the unit is chrony.service; chronyd resolves as an alias)

    sudo systemctl restart chronyd && sudo systemctl enable chronyd && sudo systemctl status chronyd

#If the timezone is wrong, set it to UTC+8

    sudo timedatectl set-timezone Asia/Shanghai

#Check the system time and confirm synchronization

    date && chronyc sources -v


1.6 Disable the firewall service

#K8S and the container runtime generate large numbers of iptables rules as they run; to keep the system's own rules from getting tangled up with them, simply disable the host firewall.

#Install ufw

    sudo apt install -y ufw

#Stop it and disable it at boot

    sudo systemctl stop ufw ; sudo systemctl disable ufw ; sudo ufw status


1.7 Disable selinux

#selinux is a Linux security service; it is recommended to turn it off. (Ubuntu ships with AppArmor rather than SELinux, so on a stock Ubuntu system this step usually just confirms it is already disabled.)

#Install the tooling

    sudo apt install -y policycoreutils
    sestatus #check the status
    setenforce 0 #disable temporarily
    sudo sed -i 's#=permissive#=disabled#g' /etc/selinux/config #edit the config to disable permanently
    sestatus #check again; make sure the status is disabled


1.8 Disable the swap partition

#Turn swap off for the current boot

    sudo swapoff -a      #disable all swap areas listed in /proc/swaps
    sudo swapon -s #show swap usage
    free -h #check swap; a Swap row of all 0B means it is now disabled

#Turn swap off permanently

    sudo sed -ri 's/.*swap.*/#&/' /etc/fstab

#Confirm the swap line is commented out

    grep swap /etc/fstab       #should now read: #/swap.img  none  swap  sw  0  0
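The commenting pattern can be verified on a sample line before touching the real fstab (the sample entry below is illustrative):

```shell
# Sample fstab swap entry (illustrative):
line='/swap.img  none  swap  sw  0  0'

# Same substitution as above: any line mentioning swap is matched whole
# (.*swap.*) and replaced by itself (&) prefixed with '#'
echo "$line" | sed -r 's/.*swap.*/#&/'
# -> #/swap.img  none  swap  sw  0  0
```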


1.9 Allow iptables to see bridged traffic

#Official reference:

#https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

#For iptables on your Linux nodes to see bridged traffic correctly, make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config. For example:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

#Load the modules explicitly

    sudo modprobe overlay ; sudo modprobe br_netfilter

#Set the required sysctl parameters; these persist across reboots

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

#Apply the sysctl parameters without rebooting

    sudo sysctl --system

#Check that the modules are loaded

    [tt@master1 ~]$ lsmod | grep 'br_netfilter\|overlay'
    overlay 147456 0
    br_netfilter 28672 0
    bridge 299008 1 br_netfilter


1.10 Enable ipvs

# Kubernetes services support two proxy modes, one based on iptables and one based on ipvs. ipvs performs better than iptables, but using it requires the ipvs kernel modules to be loaded manually.

# Install ipset and ipvsadm on every node:

    sudo apt install -y ipset ipvsadm


# Write the module list (note: this file is executed as a script in a later step; a standard modules-load.d file would contain bare module names, and boot-time loading is handled by /etc/modules further down)

sudo tee /etc/modules-load.d/ipvs.conf << EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF


# Load the modules for the current boot

    sudo modprobe -- ip_vs
    sudo modprobe -- ip_vs_rr
    sudo modprobe -- ip_vs_wrr
    sudo modprobe -- ip_vs_sh

# For reference, removing a module would look like this (do not run it here)

    sudo rmmod nf_conntrack_ipv4

# Kernels newer than 4.19 use nf_conntrack; earlier kernels use the nf_conntrack_ipv4 module.
# On a new kernel the old name fails with: modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/5.15.0-40-generic

    kernel_version=$(uname -r | cut -d- -f1) && echo $kernel_version

    if [ `expr $kernel_version \> 4.19` -eq 1 ]
    then
    sudo modprobe -- nf_conntrack
    else
    sudo modprobe -- nf_conntrack_ipv4
    fi
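One caveat about the test above: expr compares these dotted versions as strings, which happens to give the right answer for 5.x vs 4.19 but would misorder a hypothetical 10.x kernel. A sketch of a more robust comparison using sort -V (GNU coreutils); the ver_ge helper name is mine, not from the original:

```shell
# ver_ge A B: succeed when version A >= version B, comparing field by field
# via sort -V instead of as plain strings
ver_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

kernel_version=$(uname -r | cut -d- -f1)
if ver_ge "$kernel_version" 4.19; then
  echo "load nf_conntrack"
else
  echo "load nf_conntrack_ipv4"
fi
```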


# Run the file to load the modules, then check them

    sudo bash /etc/modules-load.d/ipvs.conf && lsmod | grep -e ip_vs -e nf_conntrack

# Boot-time loading
# Append the ipvs modules to the config file

cat <<EOF | sudo tee -a /etc/modules
ip_vs_sh
ip_vs_wrr
ip_vs_rr
ip_vs
nf_conntrack
EOF

# Verify the entries were added

    [tt@master1 ~]$ cat -n /etc/modules
    1 # /etc/modules: kernel modules to load at boot time.
    2 #
    3 # This file contains the names of kernel modules that should be loaded
    4 # at boot time, one per line. Lines beginning with "#" are ignored.
    5
    6 ip_vs_sh
    7 ip_vs_wrr
    8 ip_vs_rr
    9 ip_vs
    10 nf_conntrack


1.11 Reboot all three machines to confirm the settings above survive a restart

#Reboot the three Linux machines:

    sudo reboot


                                                                                               

2. Install containerd, kubeadm, kubelet, and kubectl on every node


                                                                                               

2.1 Install containerd

1) Add the Aliyun containerd repository

#Add the key

    [tt@master1 ~]$ sudo curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
    Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
    OK


#Set the Aliyun mirror source

    [tt@master1 ~]$ sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
    Repository: 'deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu jammy stable'
    Description:
    Archive for codename: jammy components: stable
    More info: https://mirrors.aliyun.com/docker-ce/linux/ubuntu
    Adding repository.
    Press [ENTER] to continue or Ctrl-c to cancel. #press Enter to continue
    Adding deb entry to /etc/apt/sources.list.d/archive_uri-https_mirrors_aliyun_com_docker-ce_linux_ubuntu-jammy.list
    Adding disabled deb-src entry to /etc/apt/sources.list.d/archive_uri-https_mirrors_aliyun_com_docker-ce_linux_ubuntu-jammy.list
    Hit:1 http://mirrors.aliyun.com/ubuntu jammy InRelease
    Get:2 https://mirrors.aliyun.com/docker-ce/linux/ubuntu jammy InRelease [48.9 kB]
    Get:3 http://mirrors.aliyun.com/ubuntu jammy-updates InRelease [109 kB]
    Get:4 http://mirrors.aliyun.com/ubuntu jammy-backports InRelease [99.8 kB]
    Get:5 https://mirrors.aliyun.com/docker-ce/linux/ubuntu jammy/stable amd64 Packages [6121 B]
    Get:6 http://mirrors.aliyun.com/ubuntu jammy-security InRelease [110 kB]
    Get:7 http://mirrors.aliyun.com/ubuntu jammy-updates/main amd64 Packages [330 kB]
    Get:8 http://mirrors.aliyun.com/ubuntu jammy-updates/universe amd64 Packages [134 kB]
    Fetched 839 kB in 2s (491 kB/s)
    Reading package lists... Done
    W: https://mirrors.aliyun.com/docker-ce/linux/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.

#Confirm the repository was added

    [tt@master1 ~]$ cat /etc/apt/sources.list.d/archive_uri-https_mirrors_aliyun_com_docker-ce_linux_ubuntu-jammy.list
    deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu jammy stable
    # deb-src [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu jammy stable


#Refresh the package index

    sudo apt update -y


2) Install containerd (all nodes)

Install the latest containerd

    [tt@master1 ~]$ sudo apt install -y containerd.io
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    The following NEW packages will be installed:
    containerd.io
    0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
    Need to get 28.1 MB of archives.
    After this operation, 127 MB of additional disk space will be used.
    Get:1 https://mirrors.aliyun.com/docker-ce/linux/ubuntu jammy/stable amd64 containerd.io amd64 1.6.6-1 [28.1 MB]
    Fetched 28.1 MB in 45s (630 kB/s)
    debconf: delaying package configuration, since apt-utils is not installed
    Selecting previously unselected package containerd.io.
    (Reading database ... 66270 files and directories currently installed.)
    Preparing to unpack .../containerd.io_1.6.6-1_amd64.deb ...
    Unpacking containerd.io (1.6.6-1) ...
    Setting up containerd.io (1.6.6-1) ...
    Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
    debconf: unable to initialize frontend: Dialog
    debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 78.)
    debconf: falling back to frontend: Readline
    Scanning processes...
    Scanning linux images...

    Running kernel seems to be up-to-date.

    No services need to be restarted.

    No containers need to be restarted.

    No user sessions are running outdated binaries.

    No VM guests are running outdated hypervisor (qemu) binaries on this host.

#A specific version can be installed instead (optional)

    #sudo apt install containerd.io=1.6.4-1


3) Generate containerd's default config file

    sudo cp /etc/containerd/config.toml /etc/containerd/config.toml.orig
    containerd config default | sudo tee /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd.service
    systemctl status containerd #the service should now be running


4) Edit config.toml

vi /etc/containerd/config.toml

① Path settings - keep the defaults, no change needed

If a dedicated disk has been set up for containerd, these two parameters can be pointed at it

    6 root = "/var/lib/containerd"        #default; containerd's persistent data directory
    7 state = "/run/containerd" #default; containerd's runtime state

② Point sandbox_image at the Aliyun google_containers mirror

    61     sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"   #was: sandbox_image = "k8s.gcr.io/pause:3.6"

    sudo sed -i "s#k8s.gcr.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.7#g" /etc/containerd/config.toml


③ Set SystemdCgroup

    114           [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    ...
    123 Root = ""
    124 ShimCgroup = ""
    125 SystemdCgroup = false #line 125; note the parameter's exact capitalization
    #or
    sudo sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml


                                                                                                                  ④ endpoint加速器

                                                                                                                  docker.io镜像拉取速度,同样我们需要为其设置阿里镜像加速

153       [plugins."io.containerd.grpc.v1.cri".registry.mirrors]      #add the two lines below
154         [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
155           endpoint = ["https://ul2pzi84.mirror.aliyuncs.com"]
#or
sudo sed -i '/registry.mirrors]/a\ \ \ \ \ \ \ \ [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]' /etc/containerd/config.toml
sudo sed -i '/registry.mirrors."docker.io"]/a\ \ \ \ \ \ \ \ \ \ endpoint = ["https://ul2pzi84.mirror.aliyuncs.com"]' /etc/containerd/config.toml
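The two `a\` (append) commands can also be dry-run on a sample fragment first to see exactly what lines they insert (the mirror URL here is the author's personal Aliyun accelerator endpoint; the real target is /etc/containerd/config.toml):

```shell
# Dry-run the two append commands on a temp copy of the config fragment
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
EOF
sed -i '/registry.mirrors]/a\ \ \ \ \ \ \ \ [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]' "$tmp"
sed -i '/registry.mirrors."docker.io"]/a\ \ \ \ \ \ \ \ \ \ endpoint = ["https://ul2pzi84.mirror.aliyuncs.com"]' "$tmp"
cat "$tmp"    # the docker.io mirror block should now appear under registry.mirrors
```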


5) Restart containerd (all nodes)

#reload systemd units and restart containerd

                                                                                                                      [tt@master1 ~]$ sudo systemctl daemon-reload && sudo systemctl restart containerd


#check the containerd status and make sure there are no errors

                                                                                                                        [tt@master1 ~]$ systemctl status containerd
                                                                                                                        ● containerd.service - containerd container runtime
                                                                                                                        Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
                                                                                                                        Active: active (running) since Sat 2022-07-02 11:47:05 CST; 1min 17s ago
                                                                                                                        Docs: https://containerd.io
                                                                                                                        Process: 3432 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
                                                                                                                        Main PID: 3433 (containerd)
                                                                                                                        Tasks: 16
                                                                                                                        Memory: 21.0M
                                                                                                                        CPU: 606ms
                                                                                                                        CGroup: system.slice/containerd.service
└─3433 /usr/bin/containerd


                                                                                                                        Jul 02 11:47:05 master1.rundba.com containerd[3433]: time="2022-07-02T11:47:05.390593124+08:00" level=info msg="Start subscribing containerd event"
                                                                                                                        Jul 02 11:47:05 master1.rundba.com containerd[3433]: time="2022-07-02T11:47:05.390733954+08:00" level=info msg="Start recovering state"
                                                                                                                        Jul 02 11:47:05 master1.rundba.com containerd[3433]: time="2022-07-02T11:47:05.390879015+08:00" level=info msg="Start event monitor"
                                                                                                                        Jul 02 11:47:05 master1.rundba.com containerd[3433]: time="2022-07-02T11:47:05.390655259+08:00" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
                                                                                                                        Jul 02 11:47:05 master1.rundba.com containerd[3433]: time="2022-07-02T11:47:05.390909520+08:00" level=info msg="Start snapshots syncer"
                                                                                                                        Jul 02 11:47:05 master1.rundba.com containerd[3433]: time="2022-07-02T11:47:05.391106326+08:00" level=info msg="Start cni network conf syncer for default"
                                                                                                                        Jul 02 11:47:05 master1.rundba.com containerd[3433]: time="2022-07-02T11:47:05.391142321+08:00" level=info msg="Start streaming server"
                                                                                                                        Jul 02 11:47:05 master1.rundba.com containerd[3433]: time="2022-07-02T11:47:05.391017037+08:00" level=info msg=serving... address=/run/containerd/containerd.sock
                                                                                                                        Jul 02 11:47:05 master1.rundba.com containerd[3433]: time="2022-07-02T11:47:05.391410997+08:00" level=info msg="containerd successfully booted in 0.030277s"
                                                                                                                        Jul 02 11:47:05 master1.rundba.com systemd[1]: Started containerd container runtime.


2.2 Add the Aliyun kubernetes apt repository

#to keep installation fast, use a domestic kubernetes package source; here we choose Aliyun's

#install curl and related packages

sudo apt-get install -y apt-transport-https ca-certificates curl


#download the repository signing key

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg


#add the Aliyun kubernetes repository

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list


#refresh the package index

sudo apt update -y


2.3 Install kubeadm, kubelet and kubectl

#install the latest versions of kubelet, kubeadm and kubectl

sudo apt install -y kubelet kubeadm kubectl


#a specific version can also be installed (not done this time)

#sudo apt-get install -y kubelet=1.23.8-00 kubeadm=1.23.8-00 kubectl=1.23.8-00


#hold the packages to block automatic upgrades

                                                                                                                                      sudo apt-mark hold kubelet kubeadm kubectl


#check which versions are currently available:

[tt@master1 ~]$ apt list kubelet -a  | head          #on CentOS: yum list kubelet --showduplicates | sort -r


                                                                                                                                        WARNING: apt does not have a stable CLI interface. Use with caution in scripts.


                                                                                                                                        Listing...
                                                                                                                                        kubelet/kubernetes-xenial,now 1.24.2-00 amd64 [installed]
                                                                                                                                        kubelet/kubernetes-xenial 1.24.1-00 amd64
                                                                                                                                        kubelet/kubernetes-xenial 1.24.0-00 amd64
                                                                                                                                        kubelet/kubernetes-xenial 1.23.8-00 amd64
                                                                                                                                        kubelet/kubernetes-xenial 1.23.7-00 amd64
                                                                                                                                        kubelet/kubernetes-xenial 1.23.6-00 amd64
                                                                                                                                        kubelet/kubernetes-xenial 1.23.5-00 amd64
                                                                                                                                        kubelet/kubernetes-xenial 1.23.4-00 amd64
                                                                                                                                        kubelet/kubernetes-xenial 1.23.3-00 amd64


#check the kubelet service and make sure it is enabled at boot. For now kubelet fails to start because no config file has been generated yet; it will start automatically once the cluster is initialized

                                                                                                                                          systemctl status kubelet


# Configure crictl

cat << EOF | sudo tee -a /etc/crictl.yaml
                                                                                                                                            runtime-endpoint: unix:///var/run/containerd/containerd.sock
                                                                                                                                            image-endpoint: unix:///var/run/containerd/containerd.sock
                                                                                                                                            timeout: 10
                                                                                                                                            debug: false
                                                                                                                                            EOF
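To preview what the here-document produces, the same config can first be written to a temp file and inspected (the real command above appends to /etc/crictl.yaml via sudo tee); both endpoints should point at the containerd socket:

```shell
# Write the crictl config to a temp file and inspect it
tmp=$(mktemp)
cat << EOF > "$tmp"
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF
cat "$tmp"
```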


2.4 List the images k8s requires

The image versions for the current release, v1.24.2:

                                                                                                                                              [tt@master1 ~]$ kubeadm config images list
                                                                                                                                              k8s.gcr.io/kube-apiserver:v1.24.2
                                                                                                                                              k8s.gcr.io/kube-controller-manager:v1.24.2
                                                                                                                                              k8s.gcr.io/kube-scheduler:v1.24.2
                                                                                                                                              k8s.gcr.io/kube-proxy:v1.24.2
                                                                                                                                              k8s.gcr.io/pause:3.7
                                                                                                                                              k8s.gcr.io/etcd:3.5.3-0
                                                                                                                                              k8s.gcr.io/coredns/coredns:v1.8.6
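As a sketch (not an official mapping table), the k8s.gcr.io names above translate into the Aliyun mirror namespace used later by --image-repository; note that coredns is published flat there, without the extra coredns/ path component, as the kubeadm init log further below confirms:

```shell
# Map k8s.gcr.io image names onto the Aliyun google_containers namespace;
# coredns/coredns collapses to plain coredns in the mirror
mapped=$(for img in \
  k8s.gcr.io/kube-apiserver:v1.24.2 \
  k8s.gcr.io/pause:3.7 \
  k8s.gcr.io/coredns/coredns:v1.8.6
do
  echo "$img" | sed -e 's#^k8s.gcr.io/coredns/#k8s.gcr.io/#' \
                    -e 's#^k8s.gcr.io#registry.aliyuncs.com/google_containers#'
done)
echo "$mapped"
```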


2.5 Deploy the k8s master node (choose one of two methods)

The first k8s master node can be deployed either on the command line or with a configuration file; pick one of the two.

2.5.1 Deploy the first k8s master node on the command line (192.168.80.125)

#1) Official reference

                                                                                                                                                https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node


#2) Pull the images in advance

When the kubernetes registry k8s.gcr.io cannot be reached directly for network reasons, the images can be downloaded locally beforehand.

Download the images:

                                                                                                                                                  sudo kubeadm config images list --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
                                                                                                                                                  sudo kubeadm config images pull --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers


Initialize:

                                                                                                                                                    sudo kubeadm init \
                                                                                                                                                    --apiserver-advertise-address=192.168.80.125 \
                                                                                                                                                    --image-repository=registry.aliyuncs.com/google_containers \
                                                                                                                                                    --kubernetes-version v1.24.2 \
                                                                                                                                                    --service-cidr=10.96.0.0/12 \
                                                                                                                                                    --pod-network-cidr=10.244.0.0/16 \
                                                                                                                                                    --cri-socket=/run/containerd/containerd.sock \
                                                                                                                                                    -v=5


For example:

                                                                                                                                                      [tt@master1 ~]$ sudo kubeadm init \
                                                                                                                                                      > --apiserver-advertise-address=192.168.80.125 \
                                                                                                                                                      > --image-repository=registry.aliyuncs.com/google_containers \
                                                                                                                                                      > --kubernetes-version v1.24.2 \
                                                                                                                                                      > --service-cidr=10.96.0.0/12 \
                                                                                                                                                      > --pod-network-cidr=10.244.0.0/16 \
                                                                                                                                                      > --cri-socket=/run/containerd/containerd.sock \
                                                                                                                                                      > -v=5
                                                                                                                                                      W0702 13:50:30.352878 13010 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
                                                                                                                                                      I0702 13:50:30.353005 13010 kubelet.go:214] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
                                                                                                                                                      [init] Using Kubernetes version: v1.24.2
                                                                                                                                                      [preflight] Running pre-flight checks
                                                                                                                                                      I0702 13:50:30.360822 13010 checks.go:570] validating Kubernetes and kubeadm version
                                                                                                                                                      I0702 13:50:30.360868 13010 checks.go:170] validating if the firewall is enabled and active
                                                                                                                                                      I0702 13:50:30.374613 13010 checks.go:205] validating availability of port 6443
                                                                                                                                                      I0702 13:50:30.375061 13010 checks.go:205] validating availability of port 10259
                                                                                                                                                      I0702 13:50:30.375199 13010 checks.go:205] validating availability of port 10257
I0702 13:50:30.375261 13010 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0702 13:50:30.375374 13010 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0702 13:50:30.375403 13010 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0702 13:50:30.375420 13010 checks.go:282] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
                                                                                                                                                      I0702 13:50:30.375446 13010 checks.go:432] validating if the connectivity type is via proxy or direct
                                                                                                                                                      I0702 13:50:30.375504 13010 checks.go:471] validating http connectivity to first IP address in the CIDR
                                                                                                                                                      I0702 13:50:30.375550 13010 checks.go:471] validating http connectivity to first IP address in the CIDR
                                                                                                                                                      I0702 13:50:30.375588 13010 checks.go:106] validating the container runtime
I0702 13:50:30.404422 13010 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0702 13:50:30.405574 13010 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
                                                                                                                                                      I0702 13:50:30.405712 13010 checks.go:646] validating whether swap is enabled or not
                                                                                                                                                      I0702 13:50:30.405841 13010 checks.go:372] validating the presence of executable crictl
                                                                                                                                                      I0702 13:50:30.405888 13010 checks.go:372] validating the presence of executable conntrack
                                                                                                                                                      I0702 13:50:30.405931 13010 checks.go:372] validating the presence of executable ip
                                                                                                                                                      I0702 13:50:30.406008 13010 checks.go:372] validating the presence of executable iptables
                                                                                                                                                      I0702 13:50:30.406098 13010 checks.go:372] validating the presence of executable mount
                                                                                                                                                      I0702 13:50:30.406136 13010 checks.go:372] validating the presence of executable nsenter
                                                                                                                                                      I0702 13:50:30.406205 13010 checks.go:372] validating the presence of executable ebtables
                                                                                                                                                      I0702 13:50:30.406276 13010 checks.go:372] validating the presence of executable ethtool
                                                                                                                                                      I0702 13:50:30.406311 13010 checks.go:372] validating the presence of executable socat
                                                                                                                                                      I0702 13:50:30.406378 13010 checks.go:372] validating the presence of executable tc
                                                                                                                                                      I0702 13:50:30.406431 13010 checks.go:372] validating the presence of executable touch
                                                                                                                                                      I0702 13:50:30.406461 13010 checks.go:518] running all checks
                                                                                                                                                      [WARNING SystemVerification]: missing optional cgroups: blkio
                                                                                                                                                      I0702 13:50:30.433178 13010 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
                                                                                                                                                      I0702 13:50:30.433655 13010 checks.go:612] validating kubelet version
                                                                                                                                                      I0702 13:50:30.569849 13010 checks.go:132] validating if the "kubelet" service is enabled and active
                                                                                                                                                      I0702 13:50:30.585039 13010 checks.go:205] validating availability of port 10250
                                                                                                                                                      I0702 13:50:30.585143 13010 checks.go:205] validating availability of port 2379
                                                                                                                                                      I0702 13:50:30.585203 13010 checks.go:205] validating availability of port 2380
I0702 13:50:30.585240 13010 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd
                                                                                                                                                      [preflight] Pulling images required for setting up a Kubernetes cluster
                                                                                                                                                      [preflight] This might take a minute or two, depending on the speed of your internet connection
                                                                                                                                                      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
                                                                                                                                                      I0702 13:50:30.585382 13010 checks.go:834] using image pull policy: IfNotPresent
                                                                                                                                                      I0702 13:50:30.609848 13010 checks.go:851] pulling: registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.2
                                                                                                                                                      I0702 13:50:30.882045 13010 checks.go:851] pulling: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.2
                                                                                                                                                      I0702 13:50:31.084508 13010 checks.go:851] pulling: registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.2
                                                                                                                                                      I0702 13:51:01.342084 13010 checks.go:851] pulling: registry.aliyuncs.com/google_containers/kube-proxy:v1.24.2
                                                                                                                                                      I0702 13:51:01.526476 13010 checks.go:851] pulling: registry.aliyuncs.com/google_containers/pause:3.7
                                                                                                                                                      I0702 13:51:01.717803 13010 checks.go:851] pulling: registry.aliyuncs.com/google_containers/etcd:3.5.3-0
                                                                                                                                                      I0702 13:51:01.924041 13010 checks.go:851] pulling: registry.aliyuncs.com/google_containers/coredns:v1.8.6
                                                                                                                                                      [certs] Using certificateDir folder "/etc/kubernetes/pki"
                                                                                                                                                      I0702 13:51:02.110272 13010 certs.go:112] creating a new certificate authority for ca
                                                                                                                                                      [certs] Generating "ca" certificate and key
                                                                                                                                                      I0702 13:51:02.336125 13010 certs.go:522] validating certificate period for ca certificate
                                                                                                                                                      [certs] Generating "apiserver" certificate and key
                                                                                                                                                      [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1.rundba.com] and IPs [10.96.0.1 192.168.80.125]
                                                                                                                                                      [certs] Generating "apiserver-kubelet-client" certificate and key
                                                                                                                                                      I0702 13:51:02.874627 13010 certs.go:112] creating a new certificate authority for front-proxy-ca
                                                                                                                                                      [certs] Generating "front-proxy-ca" certificate and key
                                                                                                                                                      I0702 13:51:03.057974 13010 certs.go:522] validating certificate period for front-proxy-ca certificate
                                                                                                                                                      [certs] Generating "front-proxy-client" certificate and key
                                                                                                                                                      I0702 13:51:03.190117 13010 certs.go:112] creating a new certificate authority for etcd-ca
                                                                                                                                                      [certs] Generating "etcd/ca" certificate and key
                                                                                                                                                      I0702 13:51:03.338003 13010 certs.go:522] validating certificate period for etcd/ca certificate
                                                                                                                                                      [certs] Generating "etcd/server" certificate and key
                                                                                                                                                      [certs] etcd/server serving cert is signed for DNS names [localhost master1.rundba.com] and IPs [192.168.80.125 127.0.0.1 ::1]
                                                                                                                                                      [certs] Generating "etcd/peer" certificate and key
                                                                                                                                                      [certs] etcd/peer serving cert is signed for DNS names [localhost master1.rundba.com] and IPs [192.168.80.125 127.0.0.1 ::1]
                                                                                                                                                      [certs] Generating "etcd/healthcheck-client" certificate and key
                                                                                                                                                      [certs] Generating "apiserver-etcd-client" certificate and key
                                                                                                                                                      I0702 13:51:04.589533 13010 certs.go:78] creating new public/private key files for signing service account users
                                                                                                                                                      [certs] Generating "sa" key and public key
                                                                                                                                                      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
                                                                                                                                                      I0702 13:51:04.774040 13010 kubeconfig.go:103] creating kubeconfig file for admin.conf
                                                                                                                                                      [kubeconfig] Writing "admin.conf" kubeconfig file
                                                                                                                                                      I0702 13:51:05.068300 13010 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
                                                                                                                                                      [kubeconfig] Writing "kubelet.conf" kubeconfig file
                                                                                                                                                      I0702 13:51:05.494652 13010 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
                                                                                                                                                      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
                                                                                                                                                      I0702 13:51:05.684487 13010 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
                                                                                                                                                      [kubeconfig] Writing "scheduler.conf" kubeconfig file
                                                                                                                                                      I0702 13:51:05.925050 13010 kubelet.go:65] Stopping the kubelet
                                                                                                                                                      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
                                                                                                                                                      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
                                                                                                                                                      [kubelet-start] Starting the kubelet
                                                                                                                                                      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
                                                                                                                                                      [control-plane] Creating static Pod manifest for "kube-apiserver"
                                                                                                                                                      I0702 13:51:06.350193 13010 manifests.go:99] [control-plane] getting StaticPodSpecs
                                                                                                                                                      I0702 13:51:06.350615 13010 certs.go:522] validating certificate period for CA certificate
                                                                                                                                                      I0702 13:51:06.350759 13010 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
                                                                                                                                                      I0702 13:51:06.350780 13010 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
                                                                                                                                                      I0702 13:51:06.350799 13010 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
                                                                                                                                                      I0702 13:51:06.350816 13010 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
                                                                                                                                                      I0702 13:51:06.350876 13010 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
                                                                                                                                                      I0702 13:51:06.356269 13010 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
                                                                                                                                                      [control-plane] Creating static Pod manifest for "kube-controller-manager"
                                                                                                                                                      I0702 13:51:06.356422 13010 manifests.go:99] [control-plane] getting StaticPodSpecs
                                                                                                                                                      I0702 13:51:06.356834 13010 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
                                                                                                                                                      I0702 13:51:06.356890 13010 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
                                                                                                                                                      I0702 13:51:06.356961 13010 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
                                                                                                                                                      I0702 13:51:06.357021 13010 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
                                                                                                                                                      I0702 13:51:06.357081 13010 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
                                                                                                                                                      I0702 13:51:06.357111 13010 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
                                                                                                                                                      I0702 13:51:06.357139 13010 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
                                                                                                                                                      I0702 13:51:06.358495 13010 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
                                                                                                                                                      [control-plane] Creating static Pod manifest for "kube-scheduler"
                                                                                                                                                      I0702 13:51:06.358608 13010 manifests.go:99] [control-plane] getting StaticPodSpecs
                                                                                                                                                      I0702 13:51:06.359191 13010 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
                                                                                                                                                      I0702 13:51:06.360056 13010 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
                                                                                                                                                      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
                                                                                                                                                      I0702 13:51:06.361622 13010 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
                                                                                                                                                      I0702 13:51:06.361681 13010 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
                                                                                                                                                      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
                                                                                                                                                      [apiclient] All control plane components are healthy after 7.506070 seconds
                                                                                                                                                      I0702 13:51:13.869853 13010 uploadconfig.go:110] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
                                                                                                                                                      [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
                                                                                                                                                      I0702 13:51:13.882187 13010 uploadconfig.go:124] [upload-config] Uploading the kubelet component config to a ConfigMap
                                                                                                                                                      [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
                                                                                                                                                      I0702 13:51:13.893058 13010 uploadconfig.go:129] [upload-config] Preserving the CRISocket information for the control-plane node
                                                                                                                                                      I0702 13:51:13.893117 13010 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "master1.rundba.com" as an annotation
                                                                                                                                                      [upload-certs] Skipping phase. Please see --upload-certs
                                                                                                                                                      [mark-control-plane] Marking the node master1.rundba.com as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
                                                                                                                                                      [mark-control-plane] Marking the node master1.rundba.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
                                                                                                                                                      [bootstrap-token] Using token: oo9zli.nbk9av0cx8agd8t7
                                                                                                                                                      [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
                                                                                                                                                      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
                                                                                                                                                      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
                                                                                                                                                      [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
                                                                                                                                                      [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
                                                                                                                                                      [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
                                                                                                                                                      I0702 13:51:14.941125 13010 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
                                                                                                                                                      I0702 13:51:14.941714 13010 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
                                                                                                                                                      I0702 13:51:14.942044 13010 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
                                                                                                                                                      I0702 13:51:14.945555 13010 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
                                                                                                                                                      I0702 13:51:14.951041 13010 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
                                                                                                                                                      [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
                                                                                                                                                      I0702 13:51:14.952369 13010 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
                                                                                                                                                      [addons] Applied essential addon: CoreDNS
                                                                                                                                                      [addons] Applied essential addon: kube-proxy


Your Kubernetes control-plane has initialized successfully!      # initialization succeeded

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.80.125:6443 --token oo9zli.nbk9av0cx8agd8t7 \
        --discovery-token-ca-cert-hash sha256:e564e8df99820d84d468024d6ca09ee9e4fc14585ce4e8597ce1db9f8d0fa0bb

Initialization of the single master node is now complete.

If kubeadm init fails, reset the node and try again:

  sudo kubeadm reset
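The `--discovery-token-ca-cert-hash` value in the join command above is not a secret: it is the SHA-256 of the cluster CA's public key, which worker nodes use to pin the control plane's identity. If you lose the `kubeadm init` output, it can be recomputed from `/etc/kubernetes/pki/ca.crt`. The sketch below is self-contained for illustration: it generates a throwaway CA in a temp directory instead of touching the real cluster files.

```shell
# Recompute a kubeadm-style discovery hash: SHA-256 over the DER-encoded
# public key of a CA certificate. A throwaway CA is created here so the
# example runs anywhere; on a real master, read /etc/kubernetes/pki/ca.crt.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -in "$tmpdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"        # same format as --discovery-token-ca-cert-hash
rm -rf "$tmpdir"
```

Run against the master's real `/etc/kubernetes/pki/ca.crt`, the same pipeline reproduces exactly the `sha256:...` value printed by `kubeadm init`.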


2.5.2 Deploy the first k8s master node using a configuration file (192.168.80.125)

1) Generate the default configuration file (not used here)

  sudo kubeadm config print init-defaults > kubeadm.yaml
  # vim kubeadm.yaml -- edits for reference (numbers refer to lines inside the generated file):
  # 12 advertiseAddress: 192.168.80.125    # replace the placeholder 1.2.3.4 with the current master IP
  # 15 criSocket: unix:///var/run/containerd/containerd.sock    # was /run/containerd/containerd.sock
  # 17 name: master1                       # default is "node"; set to this master's hostname (as shown by hostname)
  # 30 imageRepository: registry.aliyuncs.com/google_containers # was k8s.gcr.io
  # 32 kubernetesVersion: 1.24.2           # pin the version to install
  # 35 serviceSubnet: 10.96.0.0/12         # add podSubnet on the next line
  # 36 podSubnet: 10.244.0.0/16            # same level as serviceSubnet (the config field is podSubnet, not the CLI flag name pod-network-cidr)
  #    also append the ipvs and systemd sections
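The "ipvs and systemd sections" referred to above are two additional YAML documents appended to kubeadm.yaml. A minimal sketch of what they typically contain, using the kubeproxy.config.k8s.io/v1alpha1 and kubelet.config.k8s.io/v1beta1 APIs (IPVS mode additionally requires the ip_vs kernel modules to be loaded):

```yaml
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs              # switch kube-proxy from the default iptables mode to IPVS
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # match containerd's SystemdCgroup = true setting
```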


2) Directly generate the pre-edited configuration file

Note: the heredoc body must stay flush-left, so that the `---` document separators sit in column 0 of the resulting YAML.

cat > kubeadm.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: $(hostname -I | awk '{print $1}')  # 1. auto-fills the master IP (first address only)
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master1  # 2. change to your master server hostname
  taints:
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
                                                                                                                                                            imageRepository: registry.aliyuncs.com/google_containers #3.change registry
                                                                                                                                                            kind: ClusterConfiguration
                                                                                                                                                            kubernetesVersion: v1.24.2 #4. change version
                                                                                                                                                            networking:
                                                                                                                                                            dnsDomain: cluster.local
                                                                                                                                                            serviceSubnet: 10.96.0.0/12
                                                                                                                                                            podSubnet: 10.244.0.0/16 #5. add section
                                                                                                                                                            scheduler: {}
                                                                                                                                                            --- #6. add section,use ipvs
                                                                                                                                                            apiVersion: kubeproxy.config.k8s.io/v1alpha1
                                                                                                                                                            kind: KubeProxyConfiguration
                                                                                                                                                            mode: ipvs
                                                                                                                                                            --- #7. add section,use systemd
                                                                                                                                                            apiVersion: kubelet.config.k8s.io/v1beta1
                                                                                                                                                            kind: KubeletConfiguration
                                                                                                                                                            cgroupDriver: systemd
                                                                                                                                                            EOF
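One caveat with the `hostname -I` substitution in the heredoc above: it prints every assigned address separated by spaces, so on a host with more than one interface the generated advertiseAddress would be invalid. A minimal sketch of extracting just the first address (the sample address string below is illustrative):

```shell
# `hostname -I` prints ALL assigned addresses separated by spaces (plus a
# trailing space), so on a multi-homed host the backtick substitution above
# yields an invalid advertiseAddress. Picking the first field is safer:
first_ip() { awk '{print $1}'; }

# In practice: MASTER_IP=$(hostname -I | first_ip)
printf '192.168.80.125 fe80::d422:baff:fe4b:f61b\n' | first_ip   # -> 192.168.80.125
```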


3) Pre-pull the images

sudo kubeadm config images pull --config kubeadm.yaml


4) Initialize the first master

sudo kubeadm init --config=kubeadm.yaml -v=5 | tee kubeadm`date "+%Y%m%d-%H%M%S"`.log


5) Configure environment variables (192.168.80.125)

#As the init output suggests, to use kubectl as a regular user on the Master node, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


#Alternatively, as the root user, set the environment variable (effective only in the current session):

export KUBECONFIG=/etc/kubernetes/admin.conf

#Or set the root user's environment variable permanently:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /root/.bash_profile
source /root/.bash_profile
echo $KUBECONFIG
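For reference, kubectl resolves its config file in a fixed order: the path named by $KUBECONFIG wins when the variable is set, otherwise the per-user default ~/.kube/config is used. A one-line helper (illustrative sketch, not part of kubectl) that shows which file is in effect:

```shell
# Print the kubeconfig path kubectl will use: $KUBECONFIG takes precedence
# over the per-user default ~/.kube/config.
active_kubeconfig() { echo "${KUBECONFIG:-$HOME/.kube/config}"; }

active_kubeconfig                                            # default: ~/.kube/config
(KUBECONFIG=/etc/kubernetes/admin.conf; active_kubeconfig)   # -> /etc/kubernetes/admin.conf
```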


2.6 Deploy the k8s Node nodes

#Following the join command printed when the first master was initialized, add the nodes on 192.168.80.126 and 192.168.80.127.

#When joining, add the -v=5 flag to show detailed output. Note that pasting the multi-line command can break the backslash line continuation, producing an error like this:

[tt@node1 ~]$ sudo kubeadm join cluster-endpoint:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:b685553dd5ec6bfcded92c3f8ee5e05dac1260b23a7f5c37ba149b0b8ed9820d
accepts at most 1 arg(s), received 3
To see the stack trace of this error execute with --v=5 or higher


For example, running the same command on a single line:

[tt@node1 ~]$ sudo kubeadm join 192.168.80.125:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:b685553dd5ec6bfcded92c3f8ee5e05dac1260b23a7f5c37ba149b0b8ed9820d -v=5
I0702 14:02:59.806558 13732 join.go:413] [preflight] found NodeName empty; using OS hostname as NodeName
I0702 14:02:59.806972 13732 initconfiguration.go:117] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
[preflight] Running pre-flight checks
I0702 14:02:59.807117 13732 preflight.go:92] [preflight] Running general checks
I0702 14:02:59.807192 13732 checks.go:282] validating the existence of file /etc/kubernetes/kubelet.conf
I0702 14:02:59.807238 13732 checks.go:282] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0702 14:02:59.807261 13732 checks.go:106] validating the container runtime
I0702 14:02:59.829323 13732 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0702 14:02:59.829444 13732 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0702 14:02:59.829520 13732 checks.go:646] validating whether swap is enabled or not
I0702 14:02:59.829616 13732 checks.go:372] validating the presence of executable crictl
I0702 14:02:59.829649 13732 checks.go:372] validating the presence of executable conntrack
I0702 14:02:59.829682 13732 checks.go:372] validating the presence of executable ip
I0702 14:02:59.829703 13732 checks.go:372] validating the presence of executable iptables
I0702 14:02:59.829756 13732 checks.go:372] validating the presence of executable mount
I0702 14:02:59.829784 13732 checks.go:372] validating the presence of executable nsenter
I0702 14:02:59.829829 13732 checks.go:372] validating the presence of executable ebtables
I0702 14:02:59.829875 13732 checks.go:372] validating the presence of executable ethtool
I0702 14:02:59.829901 13732 checks.go:372] validating the presence of executable socat
I0702 14:02:59.829945 13732 checks.go:372] validating the presence of executable tc
I0702 14:02:59.829979 13732 checks.go:372] validating the presence of executable touch
I0702 14:02:59.830051 13732 checks.go:518] running all checks
[WARNING SystemVerification]: missing optional cgroups: blkio
I0702 14:02:59.856819 13732 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
I0702 14:02:59.857165 13732 checks.go:612] validating kubelet version
I0702 14:03:00.011934 13732 checks.go:132] validating if the "kubelet" service is enabled and active
I0702 14:03:00.027108 13732 checks.go:205] validating availability of port 10250
I0702 14:03:00.027538 13732 checks.go:282] validating the existence of file /etc/kubernetes/pki/ca.crt
I0702 14:03:00.027611 13732 checks.go:432] validating if the connectivity type is via proxy or direct
I0702 14:03:00.027699 13732 join.go:530] [preflight] Discovering cluster-info
I0702 14:03:00.027773 13732 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "192.168.80.125:6443"
I0702 14:03:00.050484 13732 token.go:118] [discovery] Requesting info from "192.168.80.125:6443" again to validate TLS against the pinned public key
I0702 14:03:00.061151 13732 token.go:135] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.80.125:6443"
I0702 14:03:00.061210 13732 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0702 14:03:00.061335 13732 join.go:544] [preflight] Fetching init configuration
I0702 14:03:00.061366 13732 join.go:590] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0702 14:03:00.072542 13732 kubelet.go:88] attempting to download the KubeletConfiguration from the new format location (UnversionedKubeletConfigMap=true)
I0702 14:03:00.077817 13732 interface.go:432] Looking for default routes with IPv4 addresses
I0702 14:03:00.077852 13732 interface.go:437] Default route transits interface "bond0"
I0702 14:03:00.078183 13732 interface.go:209] Interface bond0 is up
I0702 14:03:00.078261 13732 interface.go:257] Interface "bond0" has 2 addresses :[192.168.80.126/24 fe80::d422:baff:fe4b:f61b/64].
I0702 14:03:00.078296 13732 interface.go:224] Checking addr 192.168.80.126/24.
I0702 14:03:00.078308 13732 interface.go:231] IP found 192.168.80.126
I0702 14:03:00.078317 13732 interface.go:263] Found valid IPv4 address 192.168.80.126 for interface "bond0".
I0702 14:03:00.078325 13732 interface.go:443] Found active IP 192.168.80.126
I0702 14:03:00.082566 13732 preflight.go:103] [preflight] Running configuration dependant checks
I0702 14:03:00.082671 13732 controlplaneprepare.go:220] [download-certs] Skipping certs download
I0702 14:03:00.082751 13732 kubelet.go:119] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0702 14:03:00.083719 13732 kubelet.go:134] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0702 14:03:00.084825 13732 kubelet.go:155] [kubelet-start] Checking for an existing Node in the cluster with name "node1.rundba.com" and status "Ready"
I0702 14:03:00.088187 13732 kubelet.go:170] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0702 14:03:05.581654 13732 cert_rotation.go:137] Starting client certificate rotation controller
I0702 14:03:05.583013 13732 kubelet.go:218] [kubelet-start] preserving the crisocket information for the node
I0702 14:03:05.583105 13732 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "node1.rundba.com" as an annotation


This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


The default token is valid for 24 hours; once it expires it can no longer be used for joins. Create a fresh join command with:

kubeadm token create --print-join-command

# Generate a token that never expires (convenient for labs, a security risk in production)

kubeadm token create --ttl 0 --print-join-command
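The value passed to --discovery-token-ca-cert-hash is just the SHA-256 digest of the cluster CA's public key in DER encoding, so if you have a valid token but lost the hash, it can be recomputed with openssl (this is the pipeline the kubeadm docs describe; the CA path in the usage comment assumes kubeadm's default location):

```shell
# Recompute the --discovery-token-ca-cert-hash value for a CA certificate:
# extract the public key, DER-encode it, and take its SHA-256 digest.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master, with kubeadm's default CA location:
#   echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"
```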


#If the join failed, clean up the partial configuration before retrying (sudo kubeadm reset is the more thorough option):

sudo rm -rf /etc/kubernetes/


2.7 Deploy the CNI network plugin

#On the Master node, check node status with kubectl:

[tt@master1 ~]$ kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
master1   NotReady   control-plane   15m     v1.24.2
node1     NotReady   <none>          14m     v1.24.2
node2     NotReady   <none>          5m49s   v1.24.2
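All three nodes report NotReady here because no CNI plugin is installed yet. If you want to script the check rather than read the table by eye, a small filter over `kubectl get nodes --no-headers` output does the job (illustrative sketch; the demo feeds captured text instead of live kubectl output):

```shell
# Print the names of nodes whose STATUS column is not "Ready".
# Intended to be fed from: kubectl get nodes --no-headers
not_ready_nodes() { awk '$2 != "Ready" {print $1}'; }

# Demo with captured output (normally piped from kubectl):
printf 'master1 NotReady control-plane 15m v1.24.2\nnode1 Ready <none> 14m v1.24.2\n' \
  | not_ready_nodes   # -> master1
```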


#kubernetes supports multiple network plugins, such as flannel, calico and canal; any one of them will do. This walkthrough uses flannel (download kube-flannel.yml locally if the network is unreliable). You can also install calico.yaml instead; calico is recommended, and for large networks consider cilium.

#Using flannel [optional]

● On the Master node, fetch the flannel manifest (the download may fail; if it does, download the file locally first and then apply it):

[tt@master1 ~]$ wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


For example, start flannel from the locally downloaded kube-flannel.yml:

[tt@master1 ~]$ kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created


#Alternatively, use calico (recommended)

    wget -c https://docs.projectcalico.org/manifests/calico.yaml
    kubectl apply -f calico.yaml
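If the cluster was initialized with a pod network CIDR different from calico's default (192.168.0.0/16), the manifest should be edited before applying. A minimal sketch, assuming the commented `CALICO_IPV4POOL_CIDR` block looks as it does in stock calico.yaml manifests, and that the cluster uses 10.244.0.0/16 (adjust to whatever was passed to `kubeadm init --pod-network-cidr`); the fragment file here is a stand-in for the real calico.yaml:

```shell
# Stand-in fragment resembling the relevant lines of calico.yaml
cat > /tmp/calico-frag.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
POD_CIDR="10.244.0.0/16"
# Uncomment the env var and point it at the cluster's pod CIDR
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e "s|#   value: \"192.168.0.0/16\"|  value: \"$POD_CIDR\"|" /tmp/calico-frag.yaml
cat /tmp/calico-frag.yaml
```

Run the same two `sed` expressions against the real calico.yaml before `kubectl apply -f calico.yaml`.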


#Check the CNI plugin rollout

#The network plugin is working once every pod in the kube-system namespace is Running:

    [tt@master1 ~]$ kubectl get pods -n kube-system
    NAME                              READY   STATUS    RESTARTS   AGE
    coredns-6d8c4cb4d-k5drx           1/1     Running   0          55m
    coredns-6d8c4cb4d-vmz46           1/1     Running   0          55m
    etcd-master1                      1/1     Running   0          56m
    kube-apiserver-master1            1/1     Running   0          56m
    kube-controller-manager-master1   1/1     Running   0          56m
    kube-flannel-ds-c8jg7             1/1     Running   0          4m8s
    kube-flannel-ds-s2q6d             1/1     Running   0          4m8s
    kube-flannel-ds-t98m6             1/1     Running   0          4m8s
    kube-proxy-4dw2j                  1/1     Running   0          39m
    kube-proxy-q6ljp                  1/1     Running   0          42m
    kube-proxy-rqgkf                  1/1     Running   0          55m
    kube-scheduler-master1            1/1     Running   0          56m
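Instead of eyeballing the STATUS column, it can be scanned for anything that is not Running. A minimal sketch that works on saved output (the file name and sample rows are illustrative):

```shell
# Sample rows in the same shape as `kubectl get pods -n kube-system`
# (illustrative; in practice pipe the live command into awk instead)
cat > /tmp/pods.txt <<'EOF'
NAME                      READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-k5drx   1/1     Running   0          55m
etcd-master1              1/1     Running   0          56m
kube-proxy-rqgkf          1/1     Running   0          55m
EOF
# Count pods whose STATUS column is not Running (NR>1 skips the header)
not_running=$(awk 'NR>1 && $3 != "Running"' /tmp/pods.txt | wc -l)
echo "pods not Running: $not_running"
```

On a live cluster the equivalent filter is `kubectl get pods -n kube-system --no-headers | awk '$3 != "Running"'`.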


#On the master node, check node status again with kubectl:

    [tt@master1 ~]$ kubectl get nodes
    NAME      STATUS   ROLES                  AGE   VERSION
    master1   Ready    control-plane,master   57m   v1.24.2
    node1     Ready    <none>                 43m   v1.24.2
    node2     Ready    <none>                 41m   v1.24.2


#Check cluster health (note that ComponentStatus has been deprecated since v1.19)

    [tt@master1 ~]$ kubectl get cs
    Warning: v1 ComponentStatus is deprecated in v1.19+
    NAME                 STATUS    MESSAGE                         ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health":"true","reason":""}


#View cluster info

    [tt@master1 ~]$ kubectl cluster-info
    Kubernetes control plane is running at https://192.168.80.125:6443
    CoreDNS is running at https://192.168.80.125:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


                                                                                                                                                                                                 

3. Deploying a Service

3.1 Overview

Deploy an Nginx application in the Kubernetes cluster to verify that the cluster works.

3.2 Steps

#Deploy Nginx

    [tt@master1 ~]$ kubectl create deployment nginx --image=nginx:1.14-alpine
    deployment.apps/nginx created


#Expose the port

    [tt@master1 ~]$ kubectl expose deployment nginx --port=80 --type=NodePort
    service/nginx exposed
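For reference, `kubectl expose` creates a Service object roughly equivalent to the manifest below (a sketch: `kubectl create deployment nginx` labels its pods `app=nginx`, which the selector relies on; the node port is allocated by the cluster at apply time unless a `nodePort` field is set explicitly):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```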


#Check service status

    [tt@master1 ~]$ kubectl get pods,svc
    NAME                        READY   STATUS    RESTARTS   AGE
    pod/nginx-7cbb8cd5d-6p9rr   1/1     Running   0          69s

    NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        61m
    service/nginx        NodePort    10.108.254.134   <none>        80:31950/TCP   46s
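The node port (31950 here) is the part of the PORT(S) column between `:` and `/`. A small sketch extracting it with shell parameter expansion (the sample value is copied from the listing above):

```shell
# PORT(S) value as printed by `kubectl get svc` for a NodePort service
ports='80:31950/TCP'
node_port=${ports#*:}       # strip the service port and colon -> 31950/TCP
node_port=${node_port%%/*}  # strip the protocol suffix        -> 31950
echo "$node_port"
```

On a live cluster, `kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'` returns the same value directly.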


#Access nginx (any of the master/node1/node2 IP addresses works; use 31950, the node port mapped to 80):

    http://192.168.80.125:31950        #the nginx welcome page should now open


                                                                                                                                                                                                         

4. kubectl Command Autocompletion

kubectl provides autocompletion support for Bash, Zsh, Fish, and PowerShell, which saves a lot of typing.

kubectl can generate its own Bash completion script; the script depends on the bash-completion package. The steps below set up autocompletion for Bash.

For Zsh and Fish setup, see the links at the end of this article.

4.1 Install bash-completion

    sudo apt install -y bash-completion


4.2 Configure Bash completion

1) Per-user setup (choose either this or the system-wide method):

    echo 'source /usr/share/bash-completion/bash_completion' >> ~/.bashrc
    echo 'source <(kubectl completion bash)' >> ~/.bashrc

2) System-wide setup (the other option):

    kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null


4.3 Set a kubectl alias

If you want an alias for kubectl, extend the shell completion to cover it:

    echo 'alias k=kubectl' >> ~/.bashrc
    echo 'complete -F __start_kubectl k' >> ~/.bashrc
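Plain `echo >> ~/.bashrc` appends a duplicate line every time it is re-run. A guarded variant (sketched here against a throwaway rc file rather than the real ~/.bashrc) only appends when the line is missing:

```shell
# Throwaway stand-in for ~/.bashrc, so the sketch is safe to run anywhere
RC=/tmp/demo_bashrc
: > "$RC"
append_once() {
    # Append $1 to $RC only if an identical line is not already there
    grep -qxF "$1" "$RC" || echo "$1" >> "$RC"
}
append_once 'alias k=kubectl'
append_once 'complete -F __start_kubectl k'
append_once 'alias k=kubectl'   # re-running adds nothing
cat "$RC"
```

Point `RC` at `$HOME/.bashrc` to use this for real.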


4.4 Notes

bash-completion places all completion scripts in /etc/bash_completion.d.

The two methods are equivalent. After reloading the shell, kubectl autocompletion should work.


4.5 Verify kubectl autocompletion

    [tt@master1 ~]$ . .bashrc    #reload the configuration so it takes effect
    [tt@master1 ~]$ k v          #type "k v" and press Tab; it completes to "version"
    [tt@master1 ~]$ k version


                                                                                                                                                                                                                   

5. References

    https://github.com/containerd/containerd/blob/main/docs/getting-started.md
    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node
    https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/
    https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd

kubectl autocompletion:
    https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

Kubernetes network plugins:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/



Article reposted from rundba.