
Kubernetes Configuration, Part 11: The Kubernetes Network Model

运维扫盲人 2020-10-31

In the Docker bridge network model, all traffic leaving the Docker host, and all traffic arriving from outside the Docker host, must undergo source or destination address translation (SNAT/DNAT).

  • CNI (Container Network Interface) is only a container networking specification; production use requires a concrete implementation. Popular implementations include Flannel, Calico, Canal, and Weave Net.

  • CNI provides three classes of plugins: main, meta, and ipam. A main plugin implements a specific network function, such as loopback, bridge, macvlan, or ipvlan; a meta plugin invokes other plugins and provides no network implementation of its own; an ipam plugin handles IP address allocation and likewise provides no network implementation.
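To make the plugin classes concrete, here is a hypothetical CNI configuration list (a `*.conflist` file, e.g. placed under /etc/cni/net.d/) that chains a main plugin (bridge) with an ipam plugin (host-local) and the portmap meta plugin; all names and addresses are illustrative:

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.1.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

The bridge plugin creates the interface, host-local assigns addresses from the node's subnet, and portmap only delegates port mappings to the runtime; this mirrors the main/ipam/meta split described above.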

1. The Flannel Plugin

  • Flannel is a network plugin that provides an overlay network for Kubernetes. Based on Linux TUN/TAP, it creates the overlay by encapsulating IP packets in UDP, and it relies on etcd to maintain the network allocation state.

  • To avoid address conflicts, flannel reserves a network such as 10.244.0.0/16 and assigns each node's container engine a subnet from it, e.g. 10.244.1.0/24 and 10.244.2.0/24.

  • To route traffic between these per-node networks, flannel offers three backends: VxLAN, host-gw, and a legacy UDP mode; the first two are described below.

    1. VxLAN: forwards traffic between Pods using tunnel encapsulation and is flannel's default backend. VxLAN packets are never fragmented, so the physical link layer must provide a sufficiently large MTU. VxLAN also supports a DirectRouting mode: routes are added so that Pods on the same layer-2 network communicate directly, and the VxLAN tunnel encapsulation is used only between nodes on different IP networks.

    2. host-gw: forwards packets by creating routes on each node toward the destination container subnets, using the nodes themselves as gateways.

# flannel configuration for the VxLAN backend
# https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }

# flannel configuration for VxLAN with DirectRouting
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan",
      "DirectRouting": true
    }
  }

# flannel configuration for the host-gw backend
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "host-gw"
    }
  }

2. The Canal Plugin

  • Canal is a unified network plugin jointly released by the flannel and Calico projects; it provides a CNI network plugin and also supports network policy.

  • Kubernetes provides the standard NetworkPolicy resource so that administrators can define network access control policies on demand. NetworkPolicy is a namespace-scoped resource, and its effect is limited to the namespace it belongs to.

  • The Canal plugin deploys Calico and flannel together to provide a rich network policy solution; Canal currently uses the Calico and flannel projects directly, with no modifications to their code.

  • CNI must be enabled: Calico is installed as a CNI plugin, and kubelet must be configured to use CNI networking by passing --network-plugin=cni. (This is the default with kubeadm.)

  • Here we install using Kubernetes's etcd as the datastore. First make sure the cluster is configured with --cluster-cidr=10.244.0.0/16 and --allocate-node-cidrs=true. (kubeadm sets these by default.)
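As a sketch (assuming a kubeadm-based cluster and the v1beta2 config API), the equivalent kubeadm configuration sets the Pod CIDR via networking.podSubnet, which kubeadm translates into --cluster-cidr and --allocate-node-cidrs=true on the controller manager:

```yaml
# Hypothetical fragment for `kubeadm init --config=kubeadm-config.yaml`.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"   # must match flannel's "Network" setting
```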

2.1 Creating the RBAC Resources

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
[root@k8s-master-01 manifests]# cat rbac-canal.yaml
# Calico Roles
# Reference {{site.url}}/{{page.version}}/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - serviceaccounts
  verbs:
  - get
  - list
  - watch
- apiGroups: [""]
  resources:
  - pods/status
  verbs:
  - patch
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups: [""]
  resources:
  - services
  verbs:
  - get
- apiGroups: [""]
  resources:
  - endpoints
  verbs:
  - get
- apiGroups: [""]
  resources:
  - nodes
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups: ["networking.k8s.io"]
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - watch
- apiGroups: ["crd.projectcalico.org"]
  resources:
  - globalfelixconfigs
  - felixconfigurations
  - bgppeers
  - globalbgpconfigs
  - globalnetworksets
  - hostendpoints
  - bgpconfigurations
  - ippools
  - globalnetworkpolicies
  - networkpolicies
  - clusterinformations
  verbs:
  - create
  - get
  - list
  - update
  - watch

---

# Flannel roles
# Pulled from https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel-rbac.yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch

---

# Bind the flannel ClusterRole to the canal ServiceAccount.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: canal-flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: canal
  namespace: kube-system

---

# Bind the Calico ClusterRole to the canal ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: canal-calico
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico
subjects:
- kind: ServiceAccount
  name: canal
  namespace: kube-system

2.2 Installing Canal

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/canal.yaml
[root@k8s-master-01 manifests]# cat canal.yaml
# Canal Version v3.3.7
# https://docs.projectcalico.org/v3.3/releases#v3.3.7
# This manifest includes the following component versions:
#   calico/node:v3.3.7
#   calico/cni:v3.3.7
#   coreos/flannel:v0.9.1

# This ConfigMap is used to configure a self-hosted Canal installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: canal-config
  namespace: kube-system
data:
  # The interface used by canal for host <-> host communication.
  # If left blank, then the interface is chosen using the node's
  # default route.
  canal_iface: ""

  # Whether or not to masquerade traffic to destinations not within
  # the pod network.
  masquerade: "true"

  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "ipam": {
            "type": "host-local",
            "subnet": "usePodCidr"
          },
          "policy": {
            "type": "k8s"
          },
          "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

  # Flannel network configuration. Mounted into the flannel container.
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: canal
  namespace: kube-system
  labels:
    k8s-app: canal
spec:
  selector:
    matchLabels:
      k8s-app: canal
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: canal
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure canal gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: canal
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.3.7
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Don't enable BGP.
            - name: CALICO_NETWORKING_BACKEND
              value: "none"
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,canal"
            # Period, in seconds, at which felix re-applies all iptables state
            - name: FELIX_IPTABLESREFRESHINTERVAL
              value: "60"
            # No IP address needed.
            - name: IP
              value: ""
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
              host: localhost
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.3.7
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-canal.conflist"
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: canal-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
        # This container runs flannel using the kube-subnet-mgr backend
        # for allocating subnets.
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.9.1
          command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
          securityContext:
            privileged: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: FLANNELD_IFACE
              valueFrom:
                configMapKeyRef:
                  name: canal-config
                  key: canal_iface
            - name: FLANNELD_IP_MASQ
              valueFrom:
                configMapKeyRef:
                  name: canal-config
                  key: masquerade
          volumeMounts:
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used by flannel.
        - name: flannel-cfg
          configMap:
            name: canal-config
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: canal
  namespace: kube-system

---

# Create all the CustomResourceDefinitions needed for
# Calico policy and networking mode.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy

2.3 Controlling Ingress Traffic with Canal

  • Setting a default ingress policy

[root@k8s-master-01 cni]# cat ingress-deny.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: networkpolicy-test
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: networkpolicy-test
spec:
  podSelector: {}
  policyTypes: [ "Ingress" ]
[root@k8s-master-01 cni]# kubectl get networkpolicy -n networkpolicy-test
NAME               POD-SELECTOR   AGE
deny-all-ingress   <none>         3m44s
# The policyTypes field declares that ingress rules apply, but no ingress rule is defined, so no source endpoint can match and all inbound traffic is denied.
[root@k8s-master-01 cni]# cat ingress-allow.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-allow-ingress
  namespace: networkpolicy-test
spec:
  podSelector: {}
  policyTypes: [ "Ingress" ]
  ingress:
  - {}
# An empty ingress rule ({}) matches every source endpoint, so all inbound traffic is allowed; since allow-all is already the default behavior, defining this policy is normally unnecessary.
  • Allowing specific inbound traffic

    (A) Defining only the from field implies all ports of all selected local Pod resources;

    (B) Defining only the ports field implies all source endpoints (addresses);

    (C) Defining both from and ports expresses a logical AND between them;

    (D) The from field is a list of peer objects (ipBlock, namespaceSelector, podSelector); multiple entries in the list are ORed with one another, while the selectors inside a single entry are ANDed.
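Point (D) is a common pitfall, so a hedged sketch (the labels env=prod and app=client are illustrative): combining namespaceSelector and podSelector in one from entry ANDs them, while listing them as two entries ORs them:

```yaml
# Variant 1 -- ONE from entry: only Pods labeled app=client that live in
# namespaces labeled env=prod may connect (AND).
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: prod
    podSelector:
      matchLabels:
        app: client

# Variant 2 -- TWO from entries: any Pod in an env=prod namespace, OR any
# app=client Pod in the policy's own namespace, may connect (OR).
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: prod
  - podSelector:
      matchLabels:
        app: client
```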

[root@k8s-master-01 cni]# cat allow-myapp-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-myapp-ingress
  namespace: networkpolicy-test
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes: [ "Ingress" ]
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.0.0/16
        except:
        - 10.244.3.0/24
    - podSelector:
        matchLabels:
          app: myapp
    ports:
    - protocol: TCP
      port: 80
# Opens port 80/TCP of Pods labeled app=myapp to all endpoints in the 10.244.0.0/16 network except the 10.244.3.0/24 subnet, and to all Pods in this namespace labeled app=myapp.


2.4 Controlling Egress Traffic with Canal

  • Setting a default egress policy

[root@k8s-master-01 cni]# cat egress-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: networkpolicy-test
spec:
  podSelector: {}
  policyTypes: [ "Egress" ]
# The policyTypes field declares that egress rules apply, but no egress rule is defined, so no destination endpoint can match and all outbound traffic is denied.
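Be aware that such a default-deny egress policy also blocks DNS queries to kube-dns, so the selected Pods lose name resolution. A sketch of a companion policy (the name is hypothetical) that re-allows DNS on the standard ports:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: networkpolicy-test
spec:
  podSelector: {}
  policyTypes: [ "Egress" ]
  egress:
  - ports:            # no "to" field: any destination, but only port 53
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```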
  • Allowing specific outbound traffic

[root@k8s-master-01 cni]# cat allow-myapp-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-tomcat-egress
  namespace: networkpolicy-test
spec:
  podSelector:
    matchLabels:
      app: tomcat
  policyTypes: [ "Egress" ]
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: nginx
    ports:
    - protocol: TCP
      port: 80
  - to:
    - podSelector:
        matchLabels:
          app: mysql
    ports:
    - protocol: TCP
      port: 3306
# Permits traffic from Pods labeled app=tomcat to port 80 of Pods labeled app=nginx, and to port 3306 of Pods labeled app=mysql.

2.5 Isolating Namespaces with Canal

[root@k8s-master-01 cni]# cat namespace-isolated.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes: [ "Ingress","Egress" ]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolated
  namespace: default
spec:
  podSelector: {}
  policyTypes: [ "Ingress","Egress" ]
  ingress:
  - from:
    - namespaceSelector:
        matchExpressions:
        - key: name
          operator: In
          values: [ "default","kube-system" ]
  egress:
  - to:
    - namespaceSelector:
        matchExpressions:
        - key: name
          operator: In
          values: [ "default","kube-system" ]
[root@k8s-master-01 cni]# kubectl get networkpolicy
NAME                 POD-SELECTOR   AGE
namespace-deny-all   <none>         43s
namespace-isolated   <none>         43s
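Note that namespaces carry no "name" label by default, so the namespaceSelector with key: name above matches nothing until the namespaces are labeled accordingly, e.g. with kubectl label namespace default name=default. A hypothetical manifest form of the same label:

```yaml
# Adds the "name" label that the policy's namespaceSelector expects.
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    name: default
```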

2.6 A Canal Network Policy Use Case

1) The myapp Pod accepts traffic to its 80/TCP port only from the nginx Pod, but may send outbound traffic to any port of the nginx Pod.

2) The nginx Pod allows any source endpoint to access its 80/TCP port, and may send outbound traffic to any endpoint.

3) Both the myapp Pod and the nginx Pod may communicate freely with any Pod in the kube-system namespace, e.g. to use the name resolution service provided by kube-dns.

  • Creating a test client

[root@k8s-master-01 ~]# kubectl run cirrors-$RANDOM --namespace=default --rm -it --image=cirros -- sh
  • Creating the nginx and myapp Pod and Service objects

[root@k8s-master-01 cni]# kubectl run nginx --image=nginx:alpine --replicas=1 --namespace=test-networkpolicy --port=80 --expose --labels app=nginx
service/nginx created
pod/nginx created
[root@k8s-master-01 cni]# kubectl get svc -n test-networkpolicy -o wide -L app
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE     SELECTOR    APP
nginx   ClusterIP   10.108.53.166   <none>        80/TCP    5m33s   app=nginx   nginx
# Exposing a Service is not strictly required for this example; below we also test against the Pods directly.
/ # curl -I nginx.test-networkpolicy
HTTP/1.1 200 OK
Server: nginx/1.19.2
Date: Fri, 30 Oct 2020 09:08:01 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Aug 2020 15:16:45 GMT
Connection: keep-alive
ETag: "5f32b65d-264"
Accept-Ranges: bytes
# Once the Service exists, it can be reached by its DNS name.
  • Creating the nginx and myapp Pod objects

[root@k8s-master-01 cni]# kubectl get pod -n test-networkpolicy -o wide -L app
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE            NOMINATED NODE   READINESS GATES   APP
myapp-5cbd66595b-ls856   1/1     Running   0          6m12s   10.244.3.7   k8s-worker-02   <none>           <none>            myapp
nginx                    1/1     Running   0          4m36s   10.244.3.8   k8s-worker-02   <none>           <none>            nginx
nginx-96dfd7479-g99z7    1/1     Running   0          6m12s   10.244.1.9   k8s-worker-01   <none>           <none>            nginx
                                      [root@k8s-master-01 cni]# cat test.yaml 
                                      apiVersion: v1
                                      kind: Namespace
                                      metadata:
                                      name: test-networkpolicy
                                      ---
                                      apiVersion: apps/v1
                                      kind: Deployment
metadata:
  name: nginx
  namespace: test-networkpolicy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
          name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: test-networkpolicy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: http
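Later tests in this section curl the name nginx.test-networkpolicy, which implies a ClusterIP Service named nginx fronting the nginx Deployment; the Service manifest is not shown in the original, so the sketch below is a hypothetical minimal version:

```yaml
# Hypothetical Service assumed by the name nginx.test-networkpolicy
# used in the connectivity tests; not part of the original manifests.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: test-networkpolicy
spec:
  selector:
    app: nginx            # matches the Pod label set by the nginx Deployment
  ports:
  - port: 80
    targetPort: 80
```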


                                        / # curl  -I 10.244.1.9
                                        HTTP/1.1 200 OK
                                        Server: nginx/1.12.2
                                        Date: Fri, 30 Oct 2020 09:04:22 GMT
                                        Content-Type: text/html
                                        Content-Length: 65
                                        Last-Modified: Fri, 02 Mar 2018 03:39:12 GMT
                                        Connection: keep-alive
                                        ETag: "5a98c760-41"
                                        Accept-Ranges: bytes
• Set the default policy to deny all traffic

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-traffic
  namespace: test-networkpolicy
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
                                            / # curl  -I nginx.test-networkpolicy
                                            curl: (7) Failed to connect to nginx.test-networkpolicy port 80: Connection timed out
                                            / # curl  -I 10.244.3.7
curl: (7) Failed to connect to 10.244.3.7 port 80: Connection timed out
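Note that the default-deny policy above also blocks DNS egress, which is why the lookup of nginx.test-networkpolicy times out. A hedged sketch of a companion policy that restores DNS, assuming the cluster DNS Pods run in kube-system and that namespace carries the ns=kube-system label (applied with kubectl label later in this section):

```yaml
# Sketch only: re-allow DNS egress under the default-deny policy.
# Assumes cluster DNS runs in kube-system and that namespace is
# labeled ns=kube-system.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: test-networkpolicy
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          ns: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```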
• Allow 80/TCP to the nginx Pod, and all traffic between the nginx Pod and kube-system

[root@k8s-master-01 cni]# kubectl label namespace kube-system ns=kube-system
#Label the kube-system namespace so the policies below can reference it via a namespaceSelector;
[root@k8s-master-01 cni]# cat test-allow-nginx.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-allow-all
  namespace: test-networkpolicy
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - ports:
    - port: 80
  - from:
    - namespaceSelector:
        matchLabels:
          ns: kube-system
  egress:
  - to:
  policyTypes:
  - Ingress
  - Egress
#Testing shows the "- ports" and "- from" entries are OR'ed (two independent rules), since Pod cirrors-28721 in the default namespace can still reach the nginx Pod
                                                  / # curl -I 10.244.1.9
                                                  HTTP/1.1 200 OK
                                                  Server: nginx/1.19.2
                                                  Date: Sat, 31 Oct 2020 01:00:45 GMT
                                                  Content-Type: text/html
                                                  Content-Length: 612
                                                  Last-Modified: Tue, 11 Aug 2020 15:16:45 GMT
                                                  Connection: keep-alive
                                                  ETag: "5f32b65d-264"
                                                  Accept-Ranges: bytes


                                                  / # curl -I 10.244.3.7
                                                  curl: (7) Failed to connect to 10.244.3.7 port 80: Connection timed out
#The test confirms that only nginx's 80/TCP accepts connections; myapp's 80/TCP rejected the request;
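The OR behavior just observed follows from NetworkPolicy semantics: each "-" entry under ingress is an independent rule, while fields inside one entry are ANDed. To require both conditions at once (source in kube-system AND destination port 80), the selector and the ports must share a single rule, sketched here:

```yaml
# Single ingress rule: from AND ports must both match.
# Contrast with two separate "-" entries, which are OR'ed.
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        ns: kube-system
  ports:
  - port: 80
```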
• Allow requests from the nginx Pod to the myapp Pod's 80/TCP within the test-networkpolicy namespace, as well as any traffic between myapp and kube-system

[root@k8s-master-01 cni]# cat test-allow-nginx-between-myapp.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx-between-myapp
  namespace: test-networkpolicy
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns: kube-system
  - from:
    - podSelector:
        matchLabels:
          app: nginx
    ports:
    - port: 80
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          ns: kube-system
  - to:
    - podSelector:
        matchLabels:
          app: nginx
  policyTypes:
  - Ingress
  - Egress
                                                      / # curl -I 10.244.3.7
                                                      curl: (7) Failed to connect to 10.244.3.7 port 80: Connection timed out
#Testing from a Pod in the default namespace fails to connect;
                                                        [root@k8s-master-01 ~]# kubectl exec -it nginx -n test-networkpolicy -- bin/sh
                                                        / # curl -I 10.244.3.7
                                                        HTTP/1.1 200 OK
                                                        Server: nginx/1.12.2
                                                        Date: Sat, 31 Oct 2020 01:24:39 GMT
                                                        Content-Type: text/html
                                                        Content-Length: 65
                                                        Last-Modified: Fri, 02 Mar 2018 03:39:12 GMT
                                                        Connection: keep-alive
                                                        ETag: "5a98c760-41"
                                                        Accept-Ranges: bytes
#Testing from the backend nginx Pod of the app=nginx Service in the test-networkpolicy namespace succeeded;
#this time, however, the test from nginx-96dfd7479-g99z7 in the test-networkpolicy namespace did not succeed;

3. Calico Plugin

• Calico is a Layer 3 network plugin based on BGP that also supports network policy for traffic control. It runs a vRouter on each host, assigns it an IP address, forwards packets using the Linux kernel, and implements firewalling with iptables.

• Note that the address pool Calico allocates from must match the cluster's --pod-network-cidr value.

• Because every node holds routes to each Pod subnet, a workload can be given a public IP and reach the Internet directly;

• Calico's IP-in-IP header is small, making it more efficient than VxLAN; large clusters should use a distributed BGP route-reflector topology;
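Calico's encapsulation behavior is configured per address pool. A hedged sketch of an IPPool resource (projectcalico.org/v3 API, typically applied with calicoctl) that uses plain routing within a subnet and IP-in-IP only across subnet boundaries, similar in spirit to flannel's DirectRouting:

```yaml
# Sketch of a Calico IPPool; field values are illustrative.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.244.0.0/16     # must match the cluster's --pod-network-cidr
  ipipMode: CrossSubnet   # Always | CrossSubnet | Never
  natOutgoing: true       # SNAT traffic leaving the pool
```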

3.1 Deploying Calico for pod networking and network policy

Note: no other network plugin is deployed in this example cluster; be sure NOT to deploy the flannel plugin when initializing the cluster;

• Deploy the RBAC resources

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
# Calico Version v3.3.7
# https://docs.projectcalico.org/v3.3/releases#v3.3.7
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - serviceaccounts
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - services
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups: ["extensions"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
    verbs:
      - create
      - get
      - list
      - update
      - watch

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
• Deploy the Calico plugin

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

                                                                # Calico Version v3.3.7
                                                                # https://docs.projectcalico.org/v3.3/releases#v3.3.7
                                                                # This manifest includes the following component versions:
                                                                # calico/node:v3.3.7
                                                                # calico/cni:v3.3.7


                                                                # This ConfigMap is used to configure a self-hosted Calico installation.
                                                                kind: ConfigMap
                                                                apiVersion: v1
                                                                metadata:
                                                                name: calico-config
                                                                namespace: kube-system
                                                                data:
                                                                # To enable Typha, set this to "calico-typha" *and* set a non-zero value for Typha replicas
                                                                # below. We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is
                                                                # essential.
                                                                typha_service_name: "none"
                                                                # Configure the Calico backend to use.
                                                                calico_backend: "bird"


                                                                # Configure the MTU to use
                                                                veth_mtu: "1440"


                                                                # The CNI network configuration to install on each node. The special
                                                                # values in this config will be automatically populated.
                                                                cni_network_config: |-
                                                                {
                                                                "name": "k8s-pod-network",
                                                                "cniVersion": "0.3.0",
                                                                "plugins": [
                                                                {
                                                                "type": "calico",
                                                                "log_level": "info",
                                                                "datastore_type": "kubernetes",
                                                                "nodename": "__KUBERNETES_NODE_NAME__",
                                                                "mtu": __CNI_MTU__,
                                                                "ipam": {
                                                                "type": "host-local",
                                                                "subnet": "usePodCidr"
                                                                },
                                                                "policy": {
                                                                "type": "k8s"
                                                                },
                                                                "kubernetes": {
                                                                "kubeconfig": "__KUBECONFIG_FILEPATH__"
                                                                }
                                                                },
                                                                {
                                                                "type": "portmap",
                                                                "snat": true,
                                                                "capabilities": {"portMappings": true}
                                                                }
                                                                ]
                                                                }


                                                                ---




                                                                # This manifest creates a Service, which will be backed by Calico's Typha daemon.
                                                                # Typha sits in between Felix and the API server, reducing Calico's load on the API server.


                                                                apiVersion: v1
                                                                kind: Service
                                                                metadata:
                                                                name: calico-typha
                                                                namespace: kube-system
                                                                labels:
                                                                k8s-app: calico-typha
                                                                spec:
                                                                ports:
                                                                - port: 5473
                                                                protocol: TCP
                                                                targetPort: calico-typha
                                                                name: calico-typha
                                                                selector:
                                                                k8s-app: calico-typha


                                                                ---


                                                                # This manifest creates a Deployment of Typha to back the above service.


                                                                apiVersion: apps/v1beta1
                                                                kind: Deployment
                                                                metadata:
                                                                name: calico-typha
                                                                namespace: kube-system
                                                                labels:
                                                                k8s-app: calico-typha
                                                                spec:
                                                                # Number of Typha replicas. To enable Typha, set this to a non-zero value *and* set the
                                                                # typha_service_name variable in the calico-config ConfigMap above.
                                                                #
                                                                # We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is essential
                                                                # (when using the Kubernetes datastore). Use one replica for every 100-200 nodes. In
                                                                # production, we recommend running at least 3 replicas to reduce the impact of rolling upgrade.
                                                                replicas: 0
                                                                revisionHistoryLimit: 2
                                                                template:
                                                                metadata:
                                                                labels:
                                                                k8s-app: calico-typha
                                                                annotations:
                                                                # This, along with the CriticalAddonsOnly toleration below, marks the pod as a critical
                                                                # add-on, ensuring it gets priority scheduling and that its resources are reserved
                                                                # if it ever gets evicted.
                                                                scheduler.alpha.kubernetes.io/critical-pod: ''
                                                                cluster-autoscaler.kubernetes.io/safe-to-evict: 'true'
                                                                spec:
                                                                nodeSelector:
                                                                beta.kubernetes.io/os: linux
                                                                hostNetwork: true
                                                                tolerations:
                                                                # Mark the pod as a critical add-on for rescheduling.
                                                                - key: CriticalAddonsOnly
                                                                operator: Exists
                                                                # Since Calico can't network a pod until Typha is up, we need to run Typha itself
                                                                # as a host-networked pod.
                                                                serviceAccountName: calico-node
                                                                containers:
                                                                - image: calico/typha:v3.3.7
                                                                name: calico-typha
                                                                ports:
                                                                - containerPort: 5473
                                                                name: calico-typha
                                                                protocol: TCP
                                                                env:
                                                                # Enable "info" logging by default. Can be set to "debug" to increase verbosity.
                                                                - name: TYPHA_LOGSEVERITYSCREEN
                                                                value: "info"
                                                                # Disable logging to file and syslog since those don't make sense in Kubernetes.
                                                                - name: TYPHA_LOGFILEPATH
                                                                value: "none"
                                                                - name: TYPHA_LOGSEVERITYSYS
                                                                value: "none"
                                                                # Monitor the Kubernetes API to find the number of running instances and rebalance
                                                                # connections.
                                                                - name: TYPHA_CONNECTIONREBALANCINGMODE
                                                                value: "kubernetes"
                                                                - name: TYPHA_DATASTORETYPE
                                                                value: "kubernetes"
                                                                - name: TYPHA_HEALTHENABLED
                                                                value: "true"
                                                                # Uncomment these lines to enable prometheus metrics. Since Typha is host-networked,
                                                                # this opens a port on the host, which may need to be secured.
                                                                #- name: TYPHA_PROMETHEUSMETRICSENABLED
                                                                # value: "true"
                                                                #- name: TYPHA_PROMETHEUSMETRICSPORT
                                                                # value: "9093"
                                                                livenessProbe:
                                                                exec:
                                                                command:
                                                                - calico-typha
                                                                - check
                                                                - liveness
                                                                periodSeconds: 30
                                                                initialDelaySeconds: 30
                                                                readinessProbe:
                                                                exec:
                                                                command:
                                                                - calico-typha
                                                                - check
                                                                - readiness
                                                                periodSeconds: 10


                                                                ---


                                                                # This manifest creates a Pod Disruption Budget for Typha to allow K8s Cluster Autoscaler to evict


                                                                apiVersion: policy/v1beta1
                                                                kind: PodDisruptionBudget
                                                                metadata:
                                                                name: calico-typha
                                                                namespace: kube-system
                                                                labels:
                                                                k8s-app: calico-typha
                                                                spec:
                                                                maxUnavailable: 1
                                                                selector:
                                                                matchLabels:
                                                                k8s-app: calico-typha


---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.3.7
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Typha support: controlled by the ConfigMap.
            - name: FELIX_TYPHAK8SSERVICENAME
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: typha_service_name
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/calico-node
                - -bird-ready
                - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.3.7
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---


# Create all the CustomResourceDefinitions needed for
# Calico policy and networking mode.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy
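Before applying the manifest, it is worth checking that the `CALICO_IPV4POOL_CIDR` set in the DaemonSet above actually falls within the cluster's `--cluster-cidr`, as the inline comment requires; the pool cannot be changed after installation. A minimal sketch using Python's stdlib `ipaddress` (the `--cluster-cidr` value below is an assumption; substitute the one you passed to kubeadm or kube-controller-manager):

```python
import ipaddress

# CALICO_IPV4POOL_CIDR from the manifest above.
pool_cidr = ipaddress.ip_network("192.168.0.0/16")
# Assumed --cluster-cidr; replace with your cluster's actual value.
cluster_cidr = ipaddress.ip_network("192.168.0.0/16")

# The pool must be contained in (or equal to) the cluster CIDR, or Calico
# will hand out Pod IPs outside the range Kubernetes expects.
assert pool_cidr.subnet_of(cluster_cidr), "edit CALICO_IPV4POOL_CIDR to match --cluster-cidr"
print("pool fits inside cluster CIDR")
```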


• Check the status of the Calico services

[root@calico-master ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
calico-master     Ready    master   13m     v1.18.5
calico-worker01   Ready    <none>   8m41s   v1.18.5
calico-worker02   Ready    <none>   10m     v1.18.5
[root@calico-master ~]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-dtxw7                       2/2     Running   0          7m15s
calico-node-plmcr                       2/2     Running   0          7m15s
calico-node-qt6xm                       2/2     Running   0          7m15s
coredns-6f5c7bbdfb-8zf9l                1/1     Running   0          13m
coredns-6f5c7bbdfb-pqqs4                1/1     Running   0          13m
etcd-calico-master                      1/1     Running   0          13m
kube-apiserver-calico-master            1/1     Running   0          13m
kube-controller-manager-calico-master   1/1     Running   0          13m
kube-proxy-gblzj                        1/1     Running   0          10m
kube-proxy-pgbwm                        1/1     Running   0          13m
kube-proxy-tvw9z                        1/1     Running   0          8m54s
kube-scheduler-calico-master            1/1     Running   0          13m
[root@calico-master ~]# kubectl get pods -l k8s-app=calico-node -o wide -n kube-system
NAME                READY   STATUS    RESTARTS   AGE    IP             NODE              NOMINATED NODE   READINESS GATES
calico-node-dtxw7   2/2     Running   0          8m4s   172.17.61.32   calico-worker01   <none>           <none>
calico-node-plmcr   2/2     Running   0          8m4s   172.17.61.33   calico-worker02   <none>           <none>
calico-node-qt6xm   2/2     Running   0          8m4s   172.17.61.31   calico-master     <none>           <none>
• The Calico network model

• Calico installs routes on every node pointing to the Pod subnets of every other node in the Kubernetes cluster;

• Each time a Pod is created on a node, Calico automatically creates a pair of virtual Ethernet interfaces (a veth pair): one end stays in the Pod's network namespace, the other stays in the node's root network namespace, named with the prefix cali followed by random characters;

                                                                    [root@calico-master ~]# ifconfig tunl0
                                                                    tunl0: flags=193<UP,RUNNING,NOARP> mtu 1440
                                                                    inet 192.168.0.1 netmask 255.255.255.255
                                                                    tunnel txqueuelen 1000 (IPIP Tunnel)
                                                                    RX packets 0 bytes 0 (0.0 B)
                                                                    RX errors 0 dropped 0 overruns 0 frame 0
                                                                    TX packets 0 bytes 0 (0.0 B)
                                                                    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


[root@calico-master ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 *
192.168.0.2     0.0.0.0         255.255.255.255 UH    0      0        0 cali7607cae07ed
192.168.0.3     0.0.0.0         255.255.255.255 UH    0      0        0 cali12205d7848a
192.168.1.0     172.17.61.33    255.255.255.0   UG    0      0        0 tunl0
192.168.2.0     172.17.61.32    255.255.255.0   UG    0      0        0 tunl0
                                                                    [root@calico-master ~]# ip route show
                                                                    blackhole 192.168.0.0/24 proto bird
                                                                    192.168.0.2 dev cali7607cae07ed scope link
                                                                    192.168.0.3 dev cali12205d7848a scope link
                                                                    192.168.1.0/24 via 172.17.61.33 dev tunl0 proto bird onlink
                                                                    192.168.2.0/24 via 172.17.61.32 dev tunl0 proto bird onlink


                                                                    [root@calico-worker01 ~]# ifconfig tunl0
                                                                    tunl0: flags=193<UP,RUNNING,NOARP> mtu 1440
                                                                    inet 192.168.2.1 netmask 255.255.255.255
                                                                    tunnel txqueuelen 1000 (IPIP Tunnel)
                                                                    RX packets 313 bytes 26292 (25.6 KiB)
                                                                    RX errors 0 dropped 0 overruns 0 frame 0
                                                                    TX packets 313 bytes 26292 (25.6 KiB)
                                                                    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
                                                                    [root@calico-worker01 ~]# ifconfig calia5c8a550e8e@if4
                                                                    calia5c8a550e8e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1440
                                                                    ether ee:ee:ee:ee:ee:ee txqueuelen 0 (Ethernet)
                                                                    RX packets 0 bytes 0 (0.0 B)
                                                                    RX errors 0 dropped 0 overruns 0 frame 0
                                                                    TX packets 0 bytes 0 (0.0 B)
                                                                    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
                                                                    [root@calico-worker01 ~]# route -n
                                                                    Kernel IP routing table
                                                                    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     172.17.61.31    255.255.255.0   UG    0      0        0 tunl0
192.168.1.0     172.17.61.33    255.255.255.0   UG    0      0        0 tunl0
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 *
192.168.2.2     0.0.0.0         255.255.255.255 UH    0      0        0 calia5c8a550e8e


                                                                    [root@calico-worker02 ~]# ifconfig tunl0
                                                                    tunl0: flags=193<UP,RUNNING,NOARP> mtu 1440
                                                                    inet 192.168.1.1 netmask 255.255.255.255
                                                                    tunnel txqueuelen 1000 (IPIP Tunnel)
                                                                    RX packets 313 bytes 26292 (25.6 KiB)
                                                                    RX errors 0 dropped 0 overruns 0 frame 0
                                                                    TX packets 313 bytes 26292 (25.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
                                                                    [root@calico-worker02 ~]# ifconfig cali6b60c0e802b@if4
                                                                    cali6b60c0e802b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1440
                                                                    ether ee:ee:ee:ee:ee:ee txqueuelen 0 (Ethernet)
                                                                    RX packets 0 bytes 0 (0.0 B)
                                                                    RX errors 0 dropped 0 overruns 0 frame 0
                                                                    TX packets 0 bytes 0 (0.0 B)
                                                                    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
                                                                    [root@calico-worker02 ~]# route -n
                                                                    Kernel IP routing table
                                                                    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     172.17.61.31    255.255.255.0   UG    0      0        0 tunl0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 *
192.168.1.2     0.0.0.0         255.255.255.255 UH    0      0        0 cali6b60c0e802b
192.168.2.0     172.17.61.32    255.255.255.0   UG    0      0        0 tunl0
#This confirms the point above: Calico generates, on every node, routes to the Pod subnets hosted on every other kubernetes cluster node;
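The forwarding decision behind these tables is an ordinary longest-prefix match: traffic to a Pod on another node matches that node's /24 route and is steered into tunl0, while traffic to a local Pod matches its /32 route pointing at a cali veth interface. A minimal Python sketch of that lookup against calico-master's table (addresses copied from the output above; the `lookup` helper is illustrative, not Calico code):

```python
import ipaddress

# Simplified route table taken from `ip route show` on calico-master above:
# (destination prefix, gateway, outgoing interface)
routes = [
    ("192.168.0.2/32", None, "cali7607cae07ed"),
    ("192.168.0.3/32", None, "cali12205d7848a"),
    ("192.168.1.0/24", "172.17.61.33", "tunl0"),
    ("192.168.2.0/24", "172.17.61.32", "tunl0"),
]

def lookup(dst):
    """Longest-prefix match, the same rule the kernel applies when forwarding."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(p), gw, dev)
               for p, gw, dev in routes
               if addr in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)

# A packet to the Pod on calico-worker02 is steered into the IPIP tunnel:
net, gw, dev = lookup("192.168.1.2")
print(dev, gw)                    # tunl0 172.17.61.33
# A packet to a local Pod goes straight out its cali veth interface:
print(lookup("192.168.0.2")[2])   # cali7607cae07ed
```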
[root@calico-master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
mydeployment-5cbd66595b-qlqgd   1/1     Running   0          14m   192.168.2.2   calico-worker01   <none>           <none>
mydeployment-5cbd66595b-wjfgp   1/1     Running   0          14m   192.168.1.2   calico-worker02   <none>           <none>
[root@calico-master ~]# kubectl exec -it mydeployment-5cbd66595b-qlqgd -- /bin/sh
                                                                        / # ping 192.168.1.2
                                                                        PING 192.168.1.2 (192.168.1.2): 56 data bytes
                                                                        64 bytes from 192.168.1.2: seq=0 ttl=62 time=0.786 ms
                                                                        64 bytes from 192.168.1.2: seq=1 ttl=62 time=0.508 ms
                                                                        64 bytes from 192.168.1.2: seq=2 ttl=62 time=0.510 ms
[root@calico-worker01 ~]# tcpdump -i eth1 -nn host 172.17.61.32 and host 172.17.61.33
                                                                        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
                                                                        listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
                                                                        14:38:02.599748 IP 172.17.61.32 > 172.17.61.33: IP 192.168.2.2 > 192.168.1.2: ICMP echo request, id 3072, seq 80, length 64 (ipip-proto-4)
                                                                        14:38:02.600099 IP 172.17.61.33 > 172.17.61.32: IP 192.168.1.2 > 192.168.2.2: ICMP echo reply, id 3072, seq 80, length 64 (ipip-proto-4)
                                                                        14:38:03.600130 IP 172.17.61.32 > 172.17.61.33: IP 192.168.2.2 > 192.168.1.2: ICMP echo request, id 3072, seq 81, length 64 (ipip-proto-4)
                                                                        14:38:03.600559 IP 172.17.61.33 > 172.17.61.32: IP 192.168.1.2 > 192.168.2.2: ICMP echo reply, id 3072, seq 81, length 64 (ipip-proto-4)
14:38:04.600384 IP 172.17.61.32 > 172.17.61.33: IP 192.168.2.2 > 192.168.1.2: ICMP echo request, id 3072, seq 82, length 64 (ipip-proto-4)
                                                                          / # ping 114.114.114.114
                                                                          PING 114.114.114.114 (114.114.114.114): 56 data bytes
                                                                          64 bytes from 114.114.114.114: seq=0 ttl=63 time=18.576 ms
                                                                          64 bytes from 114.114.114.114: seq=1 ttl=69 time=19.023 ms
                                                                          64 bytes from 114.114.114.114: seq=2 ttl=87 time=17.733 ms
                                                                          [root@calico-worker01 ~]# tcpdump -i eth0 -nn host 114.114.114.114
                                                                          tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
                                                                          listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
                                                                          16:11:16.295633 IP 10.1.112.114 > 114.114.114.114: ICMP echo request, id 7168, seq 164, length 64
                                                                          16:11:16.313125 IP 114.114.114.114 > 10.1.112.114: ICMP echo reply, id 7168, seq 164, length 64
                                                                          16:11:17.295839 IP 10.1.112.114 > 114.114.114.114: ICMP echo request, id 7168, seq 165, length 64
16:11:17.313148 IP 114.114.114.114 > 10.1.112.114: ICMP echo reply, id 7168, seq 165, length 64
#In the Calico model Pods can communicate with the internet directly: the Pod's source address is translated to 10.1.112.114, the NIC the worker01 node uses to reach the internet (per natOutgoing: true in the IPPool).

tips: capturing packets in the node network namespace shows that Pod-to-Pod traffic is forwarded through an IPIP layer-3 tunnel: the outer IP header carries the IPs of the two nodes, while the inner IP header carries the IP addresses of the two communicating Pods.
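The "(ipip-proto-4)" tag in the tcpdump output reflects exactly this layering: the outer IPv4 header uses protocol number 4 (IP-in-IP) and carries the node addresses, and its payload is a complete inner IPv4 packet between the Pod addresses. A minimal sketch of the encapsulation with the Python standard library (headers simplified, checksums left at zero; addresses copied from the capture above):

```python
import socket
import struct

def ipv4_header(src, dst, proto, payload_len):
    """Build a minimal 20-byte IPv4 header (no options, checksum 0 for brevity)."""
    ver_ihl = (4 << 4) | 5                      # IPv4, 5 * 4 = 20-byte header
    total_len = 20 + payload_len
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, total_len, 0, 0,
                       64, proto, 0,
                       socket.inet_aton(src), socket.inet_aton(dst))

# Inner packet: ICMP (protocol 1) between the two Pod addresses
inner = ipv4_header("192.168.2.2", "192.168.1.2", 1, 64)
# Outer packet: IPIP (protocol 4) between the two node addresses
outer = ipv4_header("172.17.61.32", "172.17.61.33", 4, len(inner) + 64)
packet = outer + inner

# Outer protocol field is 4, matching "(ipip-proto-4)" in the tcpdump output
print(packet[9])                          # 4
print(socket.inet_ntoa(packet[32:36]))    # inner src: 192.168.2.2
```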

3.2 The Calico client tool (calicoctl)

• Download the calicoctl binary

[root@calico-master ~]# wget https://github.com/projectcalico/calicoctl/releases/download/v3.16.4/calicoctl -O /usr/bin/calicoctl
[root@calico-master ~]# chmod a+x /usr/bin/calicoctl
[root@calico-master ~]# mkdir -pv /etc/calico/
• Configure the datastore access credentials

                                                                              [root@calico-master ~]# DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get nodes
                                                                              NAME
                                                                              calico-master
                                                                              calico-worker01
                                                                              calico-worker02
[root@calico-master ~]# DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get ipPool -o yaml
apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    creationTimestamp: "2020-10-31T06:35:02Z"
    name: default-ipv4-ippool
    resourceVersion: "1433"
    uid: 600ad0bf-76bb-462d-b484-79ab8871c26d
  spec:
    blockSize: 26
    cidr: 192.168.0.0/16
    ipipMode: Always
    natOutgoing: true
    nodeSelector: all()
kind: IPPoolList
metadata:
  resourceVersion: "16639"
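The blockSize: 26 field in the IPPool above controls Calico IPAM's allocation granularity: nodes are handed /26 blocks (64 addresses each) carved out of the 192.168.0.0/16 pool. The arithmetic can be sketched with Python's ipaddress module:

```python
import ipaddress

pool = ipaddress.ip_network("192.168.0.0/16")   # cidr from the IPPool above
blocks = list(pool.subnets(new_prefix=26))      # blockSize: 26

print(len(blocks))               # 1024 blocks available in the pool
print(blocks[0])                 # 192.168.0.0/26
print(blocks[0].num_addresses)   # 64 addresses per block
```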

tips: calicoctl works by reading Calico's datastore, so it must be given access credentials; these can be supplied either through environment variables (as above) or in a configuration file.

[root@calico-master ~]# cat /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"

                                                                                  [root@calico-master ~]# calicoctl node  status
                                                                                  Calico process is running.


                                                                                  IPv4 BGP status
                                                                                  +--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 172.17.61.32 | node-to-node mesh | up    | 06:35:13 | Established |
| 172.17.61.33 | node-to-node mesh | up    | 06:35:19 | Established |
                                                                                  +--------------+-------------------+-------+----------+-------------+


                                                                                  IPv6 BGP status
                                                                                  No IPv6 peers found.

