
0147.k Kubernetes 3-Node Lab Environment Installation

rundba 2022-03-10

This setup requires three CentOS servers (one master, two workers); on each server we install Docker (20.10.12), kubeadm (1.23.4), kubectl (1.23.4), and kubelet (1.23.4).

0. ENV

0.1 Software Versions

CentOS 7.6;

docker 20.10.12;

kubernetes 1.23.4-0 (kubelet, kubeadm, kubectl).

These Docker and K8s versions are the latest available as of 2022-03-08.

0.2 Cluster Types

● Kubernetes clusters fall roughly into two types: single-master multi-node and multi-master multi-node.

● Single master, multiple workers: one Master node plus several Node machines. Simple to set up, but the master is a single point of failure; suitable for test environments (the approach used here).

● Multi-master: several Master nodes plus several Node machines. More involved to set up, but highly available; suitable for production.

0.3 Installation Methods

● Kubernetes can be deployed in several ways; the mainstream options are kubeadm, minikube, and binary packages.

① minikube: a tool for quickly standing up a single-node Kubernetes.

② kubeadm: a tool for quickly bootstrapping a Kubernetes cluster.

③ Binary packages: download each component's binaries from the official site and install them one by one; this route is the most instructive for understanding the Kubernetes components.

● We want a real cluster without too much hassle, so we choose kubeadm.

0.4 Host Plan

Role          IP address       OS                                    Configuration
k8s3_master   192.168.80.125   CentOS 7.6 (Infrastructure Server)    2-core CPU, 2 GB RAM, 50 GB disk
k8s3_node1    192.168.80.126   CentOS 7.6 (Infrastructure Server)    2-core CPU, 2 GB RAM, 50 GB disk
k8s3_node2    192.168.80.127   CentOS 7.6 (Infrastructure Server)    2-core CPU, 2 GB RAM, 50 GB disk

0.5 Environment Notes

This is a lab/learning environment with a single master; for production, a multi-master deployment is recommended.


Unless otherwise noted, every command must be run on all three machines.

1. Environment Initialization

1) Check the OS version

Installing a Kubernetes cluster this way requires CentOS 7.5 or later:

    [root@k8s3_master ~]# cat /etc/redhat-release
    CentOS Linux release 7.6.1810 (Core)

2) Hostname resolution

To let the cluster nodes reach each other by name, configure hostname resolution here; in enterprise environments a DNS server is recommended.

    vi /etc/hosts
    192.168.80.125 k8s3_master.rundba.com k8s3_master
    192.168.80.126 k8s3_node1.rundba.com k8s3_node1
    192.168.80.127 k8s3_node2.rundba.com k8s3_node2

3) Time synchronization

Kubernetes requires the clocks of all cluster nodes to be consistent. Here we use chronyd; in production, running your own time server is recommended.

    systemctl start chronyd
    systemctl enable chronyd
    date

If the time zone is wrong, set it to UTC+8:

    timedatectl set-timezone Asia/Shanghai
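
Optionally (my addition, not part of the original walkthrough), confirm that chrony is actually synchronizing:

    # Show overall sync status and the currently selected time sources
    chronyc tracking
    chronyc sources -v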

4) Disable the iptables and firewalld services

Kubernetes and Docker generate large numbers of iptables rules at runtime. To keep the system's own firewall rules from getting tangled up with them, disable the system firewall services outright.

    systemctl stop firewalld
    systemctl disable firewalld

    # The iptables service is usually not running:
    # systemctl stop iptables
    # systemctl disable iptables

5) Disable SELinux

SELinux is a Linux security service; disabling it here is recommended.

Edit the config file to disable SELinux permanently (takes effect after a reboot):

    [root@k8s3_node1 ~]# sed -i 's/=enforcing/=disabled/g' /etc/selinux/config

Disable SELinux temporarily:

    [root@k8s3_node1 ~]# setenforce 0
    [root@k8s3_node1 ~]# getenforce
    Permissive

Check the config file to confirm the change:

    [root@k8s3_node1 ~]# grep SELINUX= /etc/selinux/config
    # SELINUX= can take one of these three values:
    SELINUX=disabled

6) Disable the swap partition

Turn swap off immediately (effective at once, but lost after a reboot):

    [root@dorisdb2 ~]# swapoff -a
    [root@dorisdb2 ~]# swapon -s
    [root@dorisdb2 ~]# free -h
                  total        used        free      shared  buff/cache   available
    Mem:            31G        346M         30G        8.8M        373M         30G
    Swap:            0B          0B          0B

Disable swap permanently:

    echo vm.swappiness=0 >> /etc/sysctl.conf
    sysctl -p

Also comment out the swap line in /etc/fstab:

    #/dev/mapper/centos-swap swap                    swap    defaults        0 0
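
If you prefer a one-liner (my addition; it assumes the fstab swap entry contains the word "swap"), you can comment the entry with sed:

    # Comment out every line in /etc/fstab that mentions swap
    sed -ri 's/.*swap.*/#&/' /etc/fstab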

7) Pass bridged IPv4 traffic to iptables chains

On every node, let bridged IPv4 traffic be passed to the iptables chains:

    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF

Load the br_netfilter module:

    modprobe br_netfilter

Check that it is loaded:

    [root@k8s3-master ~]# lsmod | grep br_netfilter
    br_netfilter           22256  0
    bridge                151336  1 br_netfilter

Apply the settings:

    sysctl --system
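
As a quick sanity check (my addition), read the values back; each should print "= 1":

    sysctl net.bridge.bridge-nf-call-iptables
    sysctl net.bridge.bridge-nf-call-ip6tables
    sysctl net.ipv4.ip_forward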

8) Enable IPVS

● A Kubernetes Service can be proxied by one of two models: iptables-based or IPVS-based. IPVS performs better than iptables, but using it requires loading the IPVS kernel modules by hand.

Install ipset and ipvsadm on every node:

    yum -y install ipset ipvsadm

Create the module-loading script:

    cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF

Make it executable, run it, and check that the modules loaded:

    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

To re-check at any time:

    lsmod | grep -e ip_vs -e nf_conntrack_ipv4

9) Reboot all three machines

Reboot the three Linux machines:

    reboot

2. Install Docker, kubeadm, kubelet, and kubectl on Every Node

2.1 Install Docker

1) Install Docker

Use the Alibaba Cloud Docker repository mirror:

    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

List the Docker versions available from the repository:

    yum list docker-ce --showduplicates

Install the latest version:

    yum -y install docker-ce

    # To install a specific version instead, give the version number explicitly.
    # --setopt=obsoletes=0 is required, otherwise yum automatically installs a newer version:
    # yum -y install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7

Start the Docker service and enable it at boot:

    systemctl enable docker && systemctl start docker

Check the Docker version:

    [root@k8s3_master ~]# docker --version
    Docker version 20.10.12, build e91ed57

2) Configure a Docker registry mirror

Create the configuration directory:

    sudo mkdir -p /etc/docker

Create the configuration file:

    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "registry-mirrors": ["https://du3ia00u.mirror.aliyuncs.com"],
      "live-restore": true,
      "log-driver": "json-file",
      "log-opts": {"max-size": "500m", "max-file": "3"},
      "storage-driver": "overlay2"
    }
    EOF

Reload the service configuration:

    sudo systemctl daemon-reload

Restart the Docker service:

    sudo systemctl restart docker

2.2 Add the Alibaba Cloud YUM Repository

The Kubernetes package sources are hosted abroad and are very slow to reach, so switch to the domestic Alibaba Cloud mirror:

    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
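
Optionally (my addition), rebuild the yum metadata cache and confirm the new repository is active:

    yum clean all && yum makecache fast
    yum repolist enabled | grep -i kubernetes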

2.3 Install kubeadm, kubelet, and kubectl

Install the latest versions (1.23.4 as of 2022-03-07):

    yum install -y kubelet kubeadm kubectl

List the versions available:

    yum list kubelet --showduplicates | sort -r

    # You can also pin specific versions:
    # yum -y install --setopt=obsoletes=0 kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0

To keep the cgroup driver used by kubelet consistent with the one Docker uses, edit "/etc/sysconfig/kubelet" as follows:

    vim /etc/sysconfig/kubelet    # edit the file so it contains:
    KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
    KUBE_PROXY_MODE="ipvs"

Just enable kubelet at boot; since no configuration has been generated yet, it will start automatically once the cluster is initialized:

    systemctl enable kubelet

2.4 List the Images Kubernetes Needs

2.4.1 List the required images (the method used here)

    [root@k8s3_master ~]# kubeadm config images list
    k8s.gcr.io/kube-apiserver:v1.23.4
    k8s.gcr.io/kube-controller-manager:v1.23.4
    k8s.gcr.io/kube-scheduler:v1.23.4
    k8s.gcr.io/kube-proxy:v1.23.4
    k8s.gcr.io/pause:3.6
    k8s.gcr.io/etcd:3.5.1-0
    k8s.gcr.io/coredns/coredns:v1.8.6

2.4.2 Handling an error when listing images

1) The error

Listing the images may fail with "could not convert cfg to an internal cfg: nodeRegistration.name ...":

    [root@k8s3_master ~]# kubeadm config images list
    could not convert cfg to an internal cfg: nodeRegistration.name: Invalid value: "k8s3_master": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
    To see the stack trace of this error execute with --v=5 or higher

2) Cause

Kubernetes follows RFC 1123 label naming: hostnames must not contain underscores. Replace them with hyphens.

3) Fix

Change every hostname in the cluster that contains an underscore to its hyphenated form:

    [root@k8s3_master ~]# hostnamectl set-hostname k8s3-master
    # update /etc/hosts with the hyphenated hostnames
    [root@k8s3_master ~]# vim /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.80.125 k8s3-master.rundba.com k8s3-master
    192.168.80.126 k8s3-node1.rundba.com k8s3-node1
    192.168.80.127 k8s3-node2.rundba.com k8s3-node2

Listing the images again now works; the current image versions are v1.23.4:

    [root@k8s3_master ~]# kubeadm config images list
    k8s.gcr.io/kube-apiserver:v1.23.4
    k8s.gcr.io/kube-controller-manager:v1.23.4
    k8s.gcr.io/kube-scheduler:v1.23.4
    k8s.gcr.io/kube-proxy:v1.23.4
    k8s.gcr.io/pause:3.6
    k8s.gcr.io/etcd:3.5.1-0
    k8s.gcr.io/coredns/coredns:v1.8.6

2.4.3 Pre-pull the images (not used this time)

If network restrictions keep you from reaching the Kubernetes registry directly, you can pull the seven images ahead of time from a mirror.

Set a variable in the shell:

    images=(
    kube-apiserver:v1.23.4
    kube-controller-manager:v1.23.4
    kube-scheduler:v1.23.4
    kube-proxy:v1.23.4
    pause:3.6
    etcd:3.5.1-0
    coredns/coredns:v1.8.6
    )

Run the script:

    for imageName in ${images[@]} ; do
        docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
        docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
        docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    done
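
One caveat worth hedging (my addition): on some Alibaba Cloud mirrors CoreDNS is published flat as google_containers/coredns:v1.8.6 rather than nested under coredns/coredns, so that entry may fail inside the loop and need special handling, for example:

    # Pull the flat-named coredns image and re-tag it to the nested name kubeadm expects
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6

Afterwards, docker images should list all seven k8s.gcr.io images locally.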

2.5 Deploy the Kubernetes Master Node

Deploy the master node (192.168.80.125).

Since the default image registry k8s.gcr.io is unreachable from mainland China, point kubeadm at the Alibaba Cloud mirror. --apiserver-advertise-address is the API server's advertised address, --image-repository is the image registry, and --kubernetes-version pins the release to v1.23.4:

    kubeadm init \
    --apiserver-advertise-address=192.168.80.125 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.23.4 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16

For example:

    [root@k8s3-master ~]# kubeadm init \
    > --apiserver-advertise-address=192.168.80.125 \
    > --image-repository registry.aliyuncs.com/google_containers \
    > --kubernetes-version v1.23.4 \
    > --service-cidr=10.96.0.0/12 \
    > --pod-network-cidr=10.244.0.0/16
    [init] Using Kubernetes version: v1.23.4
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'   # this step can take quite a while; be patient
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s3-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.80.125]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s3-master localhost] and IPs [192.168.80.125 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s3-master localhost] and IPs [192.168.80.125 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 7.503806 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
    NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s3-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node k8s3-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: kf6lhi.a63jikpwc7lz8p2q
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    Your Kubernetes control-plane has initialized successfully!   # initialization succeeded

    To start using your cluster, you need to run the following as a regular user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

      export KUBECONFIG=/etc/kubernetes/admin.conf

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 192.168.80.125:6443 --token kf6lhi.a63jikpwc7lz8p2q \
            --discovery-token-ca-cert-hash sha256:fc1aa661090d0912d98d029bd8205dd433ae3ac7b297ed97fb733a3238111021

As the output above indicates, to use kubectl as a regular user on the master node, run:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, since we are working as root, set the environment variable (valid only for the current session):

    export KUBECONFIG=/etc/kubernetes/admin.conf

To make it permanent for root, append it to the profile and source it:

    [root@k8s3-master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /root/.bash_profile
    [root@k8s3-master ~]# source /root/.bash_profile
    [root@k8s3-master ~]# echo $KUBECONFIG
    /etc/kubernetes/admin.conf

2.6 Join the Kubernetes Worker Nodes

Following the output above, run the join command on 192.168.80.126 and 192.168.80.127:

    kubeadm join 192.168.80.125:6443 --token kf6lhi.a63jikpwc7lz8p2q \
    --discovery-token-ca-cert-hash sha256:fc1aa661090d0912d98d029bd8205dd433ae3ac7b297ed97fb733a3238111021

For example:

    [root@k8s3-node1 ~]# kubeadm join 192.168.80.125:6443 --token kf6lhi.a63jikpwc7lz8p2q \
    > --discovery-token-ca-cert-hash sha256:fc1aa661090d0912d98d029bd8205dd433ae3ac7b297ed97fb733a3238111021
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.

    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The default token is valid for 24 hours; once it expires it can no longer be used. Create a fresh one with:

    kubeadm token create --print-join-command

Or generate a token that never expires:

    kubeadm token create --ttl 0 --print-join-command
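
If you later have a token but not the CA hash, the discovery hash can be recomputed on the master (the standard recipe from the kubeadm documentation; not part of the original text):

    # Derive the --discovery-token-ca-cert-hash value from the cluster CA certificate;
    # prepend "sha256:" to the printed hex digest when passing it to kubeadm join
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'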

2.7 Deploy a CNI Network Plugin

● As prompted, check node status on the master with kubectl:

    [root@k8s3-master ~]# kubectl get nodes
    NAME          STATUS     ROLES                  AGE    VERSION
    k8s3-master   NotReady   control-plane,master   17m    v1.23.4
    k8s3-node1    NotReady   <none>                 4m6s   v1.23.4
    k8s3-node2    NotReady   <none>                 100s   v1.23.4

● Kubernetes supports several network plugins, such as flannel, calico, and canal; pick any one. Here we use flannel. If your network cannot fetch it, you can use the kube-flannel.yml provided by 许大仙 (the yuque author); alternatively you can install calico via calico.yaml, which is in fact the recommended option.

Note:

For kube-flannel.yml, see the source link in the references section:

    https://www.yuque.com/raw?filekey=yuque%2F0%2F2021%2Fyml%2F513185%2F1609860138490-0ef90b45-9b0e-47e2-acfa-0c041f083bf9.yml&from=https%3A%2F%2Fwww.yuque.com%2Ffairy-era%2Fyg511q%2Fhg3u04

For calico.yaml, see the source link in the references section:

    https://www.yuque.com/raw?filekey=yuque%2F0%2F2021%2Fyaml%2F513185%2F1612184315393-f2d1b11a-d9fa-481e-ba77-06cf1ab526f0.yaml&from=https%3A%2F%2Fwww.yuque.com%2Ffairy-era%2Fyg511q%2Fhg3u04

● On the master node, fetch the flannel manifest (this may fail; if it does, download it locally first and apply from there):

    [root@k8s3-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    --2022-03-07 16:17:07-- https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.108.133, ...
    Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 5692 (5.6K) [text/plain]
    Saving to: ‘kube-flannel.yml’

    100%[=============================================================================================================================>] 5,692 --.-K/s in 0s

    2022-03-07 16:17:08 (12.7 MB/s) - ‘kube-flannel.yml’ saved [5692/5692]

Start flannel by applying the manifest remotely:

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Or start flannel from the locally downloaded kube-flannel.yml:

    [root@k8s3-master ~]# kubectl apply -f kube-flannel.yml
    Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds created

Watch the CNI plugin roll out:

    [root@k8s3-master ~]# kubectl get pods -n kube-system
    NAME                                  READY   STATUS    RESTARTS   AGE
    coredns-6d8c4cb4d-k5drx               1/1     Running   0          55m
    coredns-6d8c4cb4d-vmz46               1/1     Running   0          55m
    etcd-k8s3-master                      1/1     Running   0          56m
    kube-apiserver-k8s3-master            1/1     Running   0          56m
    kube-controller-manager-k8s3-master   1/1     Running   0          56m
    kube-flannel-ds-c8jg7                 1/1     Running   0          4m8s
    kube-flannel-ds-s2q6d                 1/1     Running   0          4m8s
    kube-flannel-ds-t98m6                 1/1     Running   0          4m8s
    kube-proxy-4dw2j                      1/1     Running   0          39m
    kube-proxy-q6ljp                      1/1     Running   0          42m
    kube-proxy-rqgkf                      1/1     Running   0          55m
    kube-scheduler-k8s3-master            1/1     Running   0          56m

Check node status again from the master:

    [root@k8s3-master ~]# kubectl get nodes
    NAME          STATUS   ROLES                  AGE   VERSION
    k8s3-master   Ready    control-plane,master   57m   v1.23.4
    k8s3-node1    Ready    <none>                 43m   v1.23.4
    k8s3-node2    Ready    <none>                 41m   v1.23.4

Check cluster component health:

    [root@k8s3-master ~]# kubectl get cs
    Warning: v1 ComponentStatus is deprecated in v1.19+
    NAME                 STATUS    MESSAGE                         ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health":"true","reason":""}

View cluster info:

    [root@k8s3-master ~]# kubectl cluster-info
    Kubernetes control plane is running at https://192.168.80.125:6443
    CoreDNS is running at https://192.168.80.125:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

3. Service Deployment

3.1 Overview

● Deploy an Nginx application in the Kubernetes cluster to test that the cluster is working properly.

3.2 Steps

● Deploy Nginx:

    [root@k8s3-master ~]# kubectl create deployment nginx --image=nginx:1.14-alpine
    deployment.apps/nginx created

● Expose the port:

    [root@k8s3-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
    service/nginx exposed

● Check pod and service status:

    [root@k8s3-master ~]# kubectl get pods,svc
    NAME                         READY   STATUS    RESTARTS   AGE
    pod/nginx-7cbb8cd5d8-6p9rr   1/1     Running   0          69s

    NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        61m
    service/nginx        NodePort    10.108.254.134   <none>        80:31950/TCP   46s

● Access nginx (any of the master/node1/node2 IPs works; use NodePort 31950, which maps to port 80):

    http://192.168.80.125:31950        # the nginx welcome page should load
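
For a terminal-only check (my addition, not in the original), curl the NodePort, and optionally scale the deployment to see pods spread across nodes:

    # Expect an HTTP/1.1 200 OK response from any node IP on the NodePort
    curl -I http://192.168.80.125:31950

    # Scale the test deployment to 3 replicas and watch placement
    kubectl scale deployment nginx --replicas=3
    kubectl get pods -o wide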



4. kubectl Command Auto-Completion

kubectl provides auto-completion support for Bash, Zsh, Fish, and PowerShell, which saves a lot of typing.

You can generate the kubectl completion script for Bash with a single command; the script depends on the bash-completion package. The steps below demonstrate the Bash setup.

For the Fish and Zsh setup, see the link at the end of this article.


4.1 Install bash-completion

    yum install -y bash-completion


4.2 Configure Bash completion

1) Per-user method (choose this or the system-wide method below):

    echo 'source /usr/share/bash-completion/bash_completion' >> ~/.bashrc
    echo 'source <(kubectl completion bash)' >> ~/.bashrc

2) System-wide method (choose this or the per-user method above):

    kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null


4.3 Set a kubectl alias

If you use an alias for kubectl, extend the shell completion to cover it:

    echo 'alias k=kubectl' >> ~/.bashrc
    echo 'complete -F __start_kubectl k' >> ~/.bashrc


4.4 Notes

bash-completion puts all completion scripts in /etc/bash_completion.d.

The two methods are equivalent. After reloading the shell, kubectl auto-completion should work.


4.5 Verify kubectl auto-completion

    [root@k8s3-master ~]# . .bashrc    # reload the configuration so it takes effect
    [root@k8s3-master ~]# k v          # type "k v" and press Tab; it completes to "version"
    [root@k8s3-master ~]# k version



5. References

Yuque Kubernetes installation guide:
https://www.yuque.com/fairy-era/yg511q/hg3u04

JAVA Heima Kubernetes installation tutorial (Bilibili):
https://www.bilibili.com/video/BV1Qv41167ck?p=7&spm_id_from=pageDriver

kubectl auto-completion reference:
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

Kubernetes network plugin reference:
https://kubernetes.io/docs/concepts/cluster-administration/addons/





- End -



Written for discussion and exchange; corrections to any shortcomings are welcome.

Author: Wang Kun. WeChat public account: rundba. Reposting is welcome; please credit the source.

For public-account forwarding, please contact wx: landnow.




                                                                                                                                                   







