In the Docker bridge network model, all traffic leaving the Docker host, and all traffic arriving from outside the Docker host, must pass through source NAT (SNAT) or destination NAT (DNAT).
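For illustration, the rules Docker typically programs for this look roughly like the following iptables entries; the subnet, container address and published port below are hypothetical:

# SNAT: traffic from the docker0 subnet leaving the host is masqueraded to the host address
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
# DNAT: a port published with -p 8080:80 is translated to the container address
iptables -t nat -A DOCKER ! -i docker0 -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80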
CNI (Container Network Interface) is only a specification for container networking; production use requires a concrete implementation. Popular implementations include Flannel, Calico, Canal and Weave Net.
CNI defines three classes of plugins: main, meta and ipam. A main plugin implements a specific kind of network, such as loopback, bridge, macvlan or ipvlan; a meta plugin invokes other plugins and provides no network implementation of its own; an ipam plugin handles IP address allocation and likewise provides no network implementation.
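To see how the three classes fit together, here is a minimal sketch of a CNI configuration list; the file name, network name and subnet are made up for this example, and real plugins accept more options:

/etc/cni/net.d/10-example.conflist
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.1.0/24"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}

Here bridge is a main plugin, host-local is the ipam plugin it delegates address allocation to, and portmap is a meta plugin that only adds port-mapping rules on top of the network the main plugin created.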
1. The Flannel plugin
Flannel is a network plugin that provides an overlay network for Kubernetes. It is based on the Linux TUN/TAP mechanism, creates the overlay by encapsulating IP packets in UDP, and uses etcd to keep track of how the network has been allocated.
To avoid address conflicts, flannel reserves a network such as 10.244.0.0/16 and assigns each Docker container engine (node) its own subnet from it, for example 10.244.1.0/24 and 10.244.2.0/24.
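On each node, the subnet flannel has assigned to it can be seen in flannel's environment file, roughly like the following (values differ per node):

# /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true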
These per-node subnets have no routes to one another by default; flannel solves this with its backend types, the two most commonly used being VxLAN and host-gw (a legacy UDP backend also exists but is rarely used).
VxLAN: forwards Pod-to-Pod traffic through tunnel encapsulation and is flannel's default backend. VxLAN packets are never fragmented, so the physical links must offer a large enough MTU. VxLAN also supports a DirectRouting mode: when Pods sit on the same layer-2 network, flannel simply adds the necessary routes so they communicate directly, and the VxLAN tunnel encapsulation is only used between nodes on different IP networks.
host-gw: forwards packets by creating routes on each node that point at the destination container subnets via the other nodes, without any encapsulation (this only works when the nodes themselves share a layer-2 network).
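The difference between the two backends is easiest to see in a node's routing table; the node and subnet addresses below are hypothetical:

# host-gw: the next hop is the other node's own address, no tunnel device is involved
10.244.2.0/24 via 192.168.10.12 dev eth0

# VxLAN: the route points at the flannel.1 VXLAN device, which encapsulates the packet
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink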
# flannel configuration for the VxLAN backend
# (from https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml)
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
# flannel configuration for the VxLAN DirectRouting mode
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan",
      "DirectRouting": true
    }
  }
# flannel configuration for the host-gw backend
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "host-gw"
    }
  }
2. The Canal plugin
Canal is a unified network plugin published jointly by the flannel and Calico projects; it provides CNI networking and also supports network policies.
Kubernetes offers the standard NetworkPolicy resource so that administrators can define network access-control policies as needed. NetworkPolicy is a namespaced resource, and a policy only applies within the namespace it belongs to.
Canal deploys Calico together with flannel to provide a rich network-policy solution; Canal currently uses the Calico and flannel projects as-is, without modifying their code.
CNI support must be enabled: Calico is installed as a CNI plugin, so the kubelet must be configured to use CNI networking by passing the --network-plugin=cni flag (on kubeadm clusters this is the default).
Here we install Canal using the Kubernetes API datastore (backed by the cluster's etcd). First make sure the cluster runs with --cluster-cidr=10.244.0.0/16 and --allocate-node-cidrs=true (kubeadm sets these when the cluster is initialized with a matching Pod network CIDR).
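On a kubeadm cluster that usually means initializing the control plane roughly as follows; the CIDR only has to match the flannel Network configured above:

kubeadm init --pod-network-cidr=10.244.0.0/16
# --pod-network-cidr makes kubeadm configure the controller manager with
# --allocate-node-cidrs=true and --cluster-cidr=10.244.0.0/16,
# so every node receives its own PodCIDR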
2.1 Create the RBAC resources
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
[root@k8s-master-01 manifests]# cat rbac-canal.yaml
# Calico roles
# Reference: https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico
rules:
  - apiGroups: [""]
    resources: [namespaces, serviceaccounts]
    verbs: [get, list, watch]
  - apiGroups: [""]
    resources: [pods/status]
    verbs: [patch]
  - apiGroups: [""]
    resources: [pods]
    verbs: [get, list, watch]
  - apiGroups: [""]
    resources: [services]
    verbs: [get]
  - apiGroups: [""]
    resources: [endpoints]
    verbs: [get]
  - apiGroups: [""]
    resources: [nodes]
    verbs: [get, list, update, watch]
  - apiGroups: ["networking.k8s.io"]
    resources: [networkpolicies]
    verbs: [get, list, watch]
  - apiGroups: ["crd.projectcalico.org"]
    resources: [globalfelixconfigs, felixconfigurations, bgppeers, globalbgpconfigs, globalnetworksets, hostendpoints, bgpconfigurations, ippools, globalnetworkpolicies, networkpolicies, clusterinformations]
    verbs: [create, get, list, update, watch]
---
# Flannel roles
# Pulled from https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel-rbac.yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: [""]
    resources: [pods]
    verbs: [get]
  - apiGroups: [""]
    resources: [nodes]
    verbs: [list, watch]
  - apiGroups: [""]
    resources: [nodes/status]
    verbs: [patch]
---
# Bind the flannel ClusterRole to the canal ServiceAccount.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: canal-flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: canal
    namespace: kube-system
---
# Bind the Calico ClusterRole to the canal ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: canal-calico
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico
subjects:
  - kind: ServiceAccount
    name: canal
    namespace: kube-system
2.2 Install Canal
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/canal.yaml
[root@k8s-master-01 manifests]# cat canal.yaml# Canal Version v3.3.7# https://docs.projectcalico.org/v3.3/releases#v3.3.7# This manifest includes the following component versions:# calico/node:v3.3.7# calico/cni:v3.3.7# coreos/flannel:v0.9.1# This ConfigMap is used to configure a self-hosted Canal installation.kind: ConfigMapapiVersion: v1metadata:name: canal-confignamespace: kube-systemdata:# The interface used by canal for host <-> host communication.# If left blank, then the interface is chosen using the node's# default route.canal_iface: ""# Whether or not to masquerade traffic to destinations not within# the pod network.masquerade: "true"# The CNI network configuration to install on each node. The special# values in this config will be automatically populated.cni_network_config: |-{"name": "k8s-pod-network","cniVersion": "0.3.0","plugins": [{"type": "calico","log_level": "info","datastore_type": "kubernetes","nodename": "__KUBERNETES_NODE_NAME__","ipam": {"type": "host-local","subnet": "usePodCidr"},"policy": {"type": "k8s"},"kubernetes": {"kubeconfig": "__KUBECONFIG_FILEPATH__"}},{"type": "portmap","snat": true,"capabilities": {"portMappings": true}}]}# Flannel network configuration. Mounted into the flannel container.net-conf.json: |{"Network": "10.244.0.0/16","Backend": {"Type": "vxlan"}}---# This manifest installs the calico/node container, as well# as the Calico CNI plugins and network config on# each master and worker node in a Kubernetes cluster.kind: DaemonSetapiVersion: apps/v1metadata:name: canalnamespace: kube-systemlabels:k8s-app: canalspec:selector:matchLabels:k8s-app: canalupdateStrategy:type: RollingUpdaterollingUpdate:maxUnavailable: 1template:metadata:labels:k8s-app: canalannotations:# This, along with the CriticalAddonsOnly toleration below,# marks the pod as a critical add-on, ensuring it gets# priority scheduling and that its resources are reserved# if it ever gets evicted.scheduler.alpha.kubernetes.io/critical-pod: ''spec:nodeSelector:beta.kubernetes.io/os: linuxhostNetwork: truetolerations:# Make sure canal gets scheduled on all nodes.- effect: NoScheduleoperator: Exists# Mark the pod as a critical add-on for rescheduling.- key: CriticalAddonsOnlyoperator: Exists- effect: NoExecuteoperator: ExistsserviceAccountName: canal# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.terminationGracePeriodSeconds: 0containers:# Runs calico/node container on each Kubernetes node. This# container programs network policy and routes on each# host.- name: calico-nodeimage: calico/node:v3.3.7env:# Use Kubernetes API as the backing datastore.- name: DATASTORE_TYPEvalue: "kubernetes"# Wait for the datastore.- name: WAIT_FOR_DATASTOREvalue: "true"# Set based on the k8s node name.- name: NODENAMEvalueFrom:fieldRef:fieldPath: spec.nodeName# Don't enable BGP.- name: CALICO_NETWORKING_BACKENDvalue: "none"# Cluster type to identify the deployment type- name: CLUSTER_TYPEvalue: "k8s,canal"# Period, in seconds, at which felix re-applies all iptables state- name: FELIX_IPTABLESREFRESHINTERVALvalue: "60"# No IP address needed.- name: IPvalue: ""# The default IPv4 pool to create on startup if none exists. Pod IPs will be# chosen from this range. Changing this value after installation will have# no effect. 
This should fall within `--cluster-cidr`.- name: CALICO_IPV4POOL_CIDRvalue: "192.168.0.0/16"# Disable file logging so `kubectl logs` works.- name: CALICO_DISABLE_FILE_LOGGINGvalue: "true"# Set Felix endpoint to host default action to ACCEPT.- name: FELIX_DEFAULTENDPOINTTOHOSTACTIONvalue: "ACCEPT"# Disable IPv6 on Kubernetes.- name: FELIX_IPV6SUPPORTvalue: "false"# Set Felix logging to "info"- name: FELIX_LOGSEVERITYSCREENvalue: "info"- name: FELIX_HEALTHENABLEDvalue: "true"securityContext:privileged: trueresources:requests:cpu: 250mlivenessProbe:httpGet:path: livenessport: 9099host: localhostperiodSeconds: 10initialDelaySeconds: 10failureThreshold: 6readinessProbe:httpGet:path: readinessport: 9099host: localhostperiodSeconds: 10volumeMounts:- mountPath: lib/modulesname: lib-modulesreadOnly: true- mountPath: run/xtables.lockname: xtables-lockreadOnly: false- mountPath: var/run/caliconame: var-run-calicoreadOnly: false- mountPath: var/lib/caliconame: var-lib-calicoreadOnly: false# This container installs the Calico CNI binaries# and CNI network config file on each node.- name: install-cniimage: calico/cni:v3.3.7command: ["/install-cni.sh"]env:# Name of the CNI config file to create.- name: CNI_CONF_NAMEvalue: "10-canal.conflist"# Set the hostname based on the k8s node name.- name: KUBERNETES_NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName# The CNI network config to install on each node.- name: CNI_NETWORK_CONFIGvalueFrom:configMapKeyRef:name: canal-configkey: cni_network_configvolumeMounts:- mountPath: host/opt/cni/binname: cni-bin-dir- mountPath: host/etc/cni/net.dname: cni-net-dir# This container runs flannel using the kube-subnet-mgr backend# for allocating subnets.- name: kube-flannelimage: quay.io/coreos/flannel:v0.9.1command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]securityContext:privileged: trueenv:- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: FLANNELD_IFACEvalueFrom:configMapKeyRef:name: canal-configkey: canal_iface- name: FLANNELD_IP_MASQvalueFrom:configMapKeyRef:name: canal-configkey: masqueradevolumeMounts:- mountPath: run/xtables.lockname: xtables-lockreadOnly: false- name: flannel-cfgmountPath: etc/kube-flannel/volumes:# Used by calico/node.- name: lib-moduleshostPath:path: lib/modules- name: var-run-calicohostPath:path: var/run/calico- name: var-lib-calicohostPath:path: var/lib/calico- name: xtables-lockhostPath:path: run/xtables.locktype: FileOrCreate# Used by flannel.- name: flannel-cfgconfigMap:name: canal-config# Used to install CNI.- name: cni-bin-dirhostPath:path: opt/cni/bin- name: cni-net-dirhostPath:path: etc/cni/net.d---apiVersion: v1kind: ServiceAccountmetadata:name: canalnamespace: kube-system---# Create all the CustomResourceDefinitions needed for# Calico policy and networking mode.apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: felixconfigurations.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: FelixConfigurationplural: felixconfigurationssingular: felixconfiguration---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: bgpconfigurations.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: BGPConfigurationplural: bgpconfigurationssingular: bgpconfiguration---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: ippools.crd.projectcalico.orgspec:scope: Clustergroup: 
crd.projectcalico.orgversion: v1names:kind: IPPoolplural: ippoolssingular: ippool---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: hostendpoints.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: HostEndpointplural: hostendpointssingular: hostendpoint---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: clusterinformations.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: ClusterInformationplural: clusterinformationssingular: clusterinformation---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: globalnetworkpolicies.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: GlobalNetworkPolicyplural: globalnetworkpoliciessingular: globalnetworkpolicy---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: globalnetworksets.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: GlobalNetworkSetplural: globalnetworksetssingular: globalnetworkset---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: networkpolicies.crd.projectcalico.orgspec:scope: Namespacedgroup: crd.projectcalico.orgversion: v1names:kind: NetworkPolicyplural: networkpoliciessingular: networkpolicy
2.3 Controlling ingress traffic with Canal
Set a default ingress policy
[root@k8s-master-01 cni]# cat ingress-deny.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: networkpolicy-test
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: networkpolicy-test
spec:
  podSelector: {}
  policyTypes: [ "Ingress" ]
[root@k8s-master-01 cni]# kubectl get networkpolicy -n networkpolicy-test
NAME               POD-SELECTOR   AGE
deny-all-ingress   <none>         3m44s
# The policyTypes field declares that Ingress rules are in effect, but no ingress field is defined, so no source endpoint can match and all inbound traffic is denied.
[root@k8s-master-01 cni]# cat ingress-allow.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-allow-ingress
  namespace: networkpolicy-test
spec:
  podSelector: {}
  policyTypes: [ "Ingress" ]
  ingress:
  - {}
# An empty ingress rule matches every source endpoint, so all inbound traffic is allowed; since allowing everything is the default behaviour anyway, such a policy normally does not need to be defined.
Allowing specific ingress traffic
(A) Defining only the from field implicitly covers all ports of the selected local Pods;
(B) Defining only the ports field implicitly covers all source endpoints (addresses);
(C) Defining both from and ports in the same rule expresses a logical AND between them;
(D) The from field is a list of peer objects (ipBlock, namespaceSelector, podSelector); multiple entries in the list are ORed with one another, while selectors combined inside a single entry are ANDed.
[root@k8s-master-01 cni]# cat allow-myapp-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-myapp-ingress
  namespace: networkpolicy-test
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes: [ "Ingress" ]
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.0.0/16
        except:
        - 10.244.3.0/24
    - podSelector:
        matchLabels:
          app: myapp
    ports:
    - protocol: TCP
      port: 80
# This opens port 80/TCP of Pods labelled app=myapp to all endpoints in 10.244.0.0/16 except the 10.244.3.0/24 subnet, and to all Pods in this namespace labelled app=myapp.
2.4 Controlling egress traffic with Canal
Set a default egress policy
[root@k8s-master-01 cni]# cat egress-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: networkpolicy-test
spec:
  podSelector: {}
  policyTypes: [ "Egress" ]
# The policyTypes field declares that Egress rules are in effect, but no egress field is defined, so no destination can match and all outbound traffic is denied.
Allowing specific egress traffic
[root@k8s-master-01 cni]# cat allow-myapp-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-tomcat-egress
  namespace: networkpolicy-test
spec:
  podSelector:
    matchLabels:
      app: tomcat
  policyTypes: [ "Egress" ]
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: nginx
    ports:
    - protocol: TCP
      port: 80
  - to:
    - podSelector:
        matchLabels:
          app: mysql
    ports:
    - protocol: TCP
      port: 3306
# Traffic from Pods labelled app=tomcat is allowed to reach port 80 of Pods labelled app=nginx and port 3306 of Pods labelled app=mysql.
2.5 Isolating namespaces with Canal
[root@k8s-master-01 cni]# cat namespace-isolated.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes: [ "Ingress","Egress" ]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolated
  namespace: default
spec:
  podSelector: {}
  policyTypes: [ "Ingress","Egress" ]
  ingress:
  - from:
    - namespaceSelector:
        matchExpressions:
        - key: name
          operator: In
          values: [ "default","kube-system" ]
  egress:
  - to:
    - namespaceSelector:
        matchExpressions:
        - key: name
          operator: In
          values: [ "default","kube-system" ]
[root@k8s-master-01 cni]# kubectl get networkpolicy
NAME                 POD-SELECTOR   AGE
namespace-deny-all   <none>         43s
namespace-isolated   <none>         43s
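Note that the namespaceSelector above matches on a name label; namespaces carry no such label by default, so the policy only behaves as intended after the namespaces have been labelled, for example:

kubectl label namespace default name=default
kubectl label namespace kube-system name=kube-system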
2.6 A Canal network-policy case study
1) The myapp Pod only allows traffic from the nginx Pod to its port 80/TCP, but may send outbound traffic to any port of the nginx Pod.
2) The nginx Pod allows any source endpoint to access its port 80/TCP and may send outbound traffic to any endpoint.
3) Both the myapp Pod and the nginx Pod may communicate freely with any Pod in the kube-system namespace, so that, for example, they can use the name resolution service provided by kube-dns.
Create a test client
[root@k8s-master-01 ~]# kubectl run cirrors-$RANDOM --namespace=default --rm -it --image=cirros -- sh
Create the nginx and myapp Pod and Service objects
[root@k8s-master-01 cni]# kubectl run nginx --image=nginx:alpine --replicas=1 --namespace=test-networkpolicy --port=80 --expose --labels app=nginx
service/nginx created
pod/nginx created
[root@k8s-master-01 cni]# kubectl get svc -n test-networkpolicy -o wide -L app
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE     SELECTOR    APP
nginx   ClusterIP   10.108.53.166   <none>        80/TCP    5m33s   app=nginx   nginx
# Exposing the workload through a Service is not strictly required in this example; the tests below also target the Pods directly.
/ # curl -I nginx.test-networkpolicy
HTTP/1.1 200 OK
Server: nginx/1.19.2
Date: Fri, 30 Oct 2020 09:08:01 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Aug 2020 15:16:45 GMT
Connection: keep-alive
ETag: "5f32b65d-264"
Accept-Ranges: bytes
# Once the Service exists it can be reached through its DNS name.
Create the nginx and myapp Pod objects
[root@k8s-master-01 cni]# kubectl get pod -n test-networkpolicy -o wide -L app
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE            NOMINATED NODE   READINESS GATES   APP
myapp-5cbd66595b-ls856   1/1     Running   0          6m12s   10.244.3.7   k8s-worker-02   <none>           <none>            myapp
nginx                    1/1     Running   0          4m36s   10.244.3.8   k8s-worker-02   <none>           <none>            nginx
nginx-96dfd7479-g99z7    1/1     Running   0          6m12s   10.244.1.9   k8s-worker-01   <none>           <none>            nginx
[root@k8s-master-01 cni]# cat test.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-networkpolicy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: test-networkpolicy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
          name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: test-networkpolicy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: http
/ # curl -I 10.244.1.9
HTTP/1.1 200 OK
Server: nginx/1.12.2
Date: Fri, 30 Oct 2020 09:04:22 GMT
Content-Type: text/html
Content-Length: 65
Last-Modified: Fri, 02 Mar 2018 03:39:12 GMT
Connection: keep-alive
ETag: "5a98c760-41"
Accept-Ranges: bytes
Set the default policy to deny all traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-traffic
  namespace: test-networkpolicy
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
/ # curl -I nginx.test-networkpolicy
curl: (7) Failed to connect to nginx.test-networkpolicy port 80: Connection timed out
/ # curl -I 10.244.3.7
curl: (7) Failed to connect to 10.244.3.5 port 80: Connection timed out
Allow access to port 80/TCP of the nginx Pod, and all traffic between the nginx Pod and kube-system
[root@k8s-master-01 cni]# kubectl label namespace kube-system ns=kube-system
# Label the kube-system namespace first so that it can be referenced by the namespaceSelector below.
[root@k8s-master-01 cni]# cat test-allow-nginx.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-allow-all
  namespace: test-networkpolicy
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - ports:
    - port: 80
  - from:
    - namespaceSelector:
        matchLabels:
          ns: kube-system
  egress:
  - to:
  policyTypes:
  - Ingress
  - Egress
# Testing shows that the "- ports" and "- from" entries are separate rules and therefore ORed: the Pod cirrors-28721 in the default namespace can still reach the nginx Pod.
/ # curl -I 10.244.1.9
HTTP/1.1 200 OK
Server: nginx/1.19.2
Date: Sat, 31 Oct 2020 01:00:45 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Aug 2020 15:16:45 GMT
Connection: keep-alive
ETag: "5f32b65d-264"
Accept-Ranges: bytes
/ # curl -I 10.244.3.7
curl: (7) Failed to connect to 10.244.3.7 port 80: Connection timed out
# As the tests show, only port 80/TCP of nginx is reachable; the request to myapp's port 80/TCP is refused.
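If the intention had been to allow port 80 only from kube-system (a logical AND), the port and the peer would have to be placed inside the same ingress rule. A sketch of that variant (the policy name is made up):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-allow-from-kube-system
  namespace: test-networkpolicy
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes: [ "Ingress" ]
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns: kube-system
    ports:          # from and ports inside the same rule are ANDed
    - port: 80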
Allow requests from the nginx Pod to port 80/TCP of the myapp Pod in the test-networkpolicy namespace, as well as any traffic between myapp and kube-system
[root@k8s-master-01 cni]# cat test-allow-nginx-between-myapp.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx-between-myapp
  namespace: test-networkpolicy
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns: kube-system
  - from:
    - podSelector:
        matchLabels:
          app: nginx
    ports:
    - port: 80
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          ns: kube-system
  - to:
    - podSelector:
        matchLabels:
          app: nginx
  policyTypes:
  - Ingress
  - Egress
/ # curl -I 10.244.3.7
curl: (7) Failed to connect to 10.244.3.7 port 80: Connection timed out
# Tested from the Pod in the default namespace: the connection fails, as expected.
[root@k8s-master-01 ~]# kubectl exec -it nginx -n test-networkpolicy -- /bin/sh
/ # curl -I 10.244.3.7
HTTP/1.1 200 OK
Server: nginx/1.12.2
Date: Sat, 31 Oct 2020 01:24:39 GMT
Content-Type: text/html
Content-Length: 65
Last-Modified: Fri, 02 Mar 2018 03:39:12 GMT
Connection: keep-alive
ETag: "5a98c760-41"
Accept-Ranges: bytes
# Testing from the nginx Pod (the backend of the app=nginx Service) in the test-networkpolicy namespace succeeds;
# testing from nginx-96dfd7479-g99z7 in the same namespace did not succeed.
3. The Calico plugin
Calico is a BGP-based layer-3 network plugin that also supports network policies for traffic control. It runs a vRouter on every host and assigns it an IP address, forwards packets using the Linux kernel's routing stack, and implements firewalling with iptables.
Note that the address pool Calico allocates Pod IPs from should match the --pod-network-cidr value of the Kubernetes cluster.
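In the calico.yaml manifest used below this pool comes from the CALICO_IPV4POOL_CIDR environment variable of the calico-node container, so a cluster initialized with, say, 10.244.0.0/16 would need that entry edited before applying the manifest, roughly:

# calico-node DaemonSet, env section
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"    # default in the manifest is 192.168.0.0/16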
Because every node has routes to every Pod subnet, each workload can be given a directly routable (even public) IP address and reach the Internet without extra translation.
Calico's IPIP encapsulation adds only a small header, so it is more efficient than VxLAN; in large clusters the default full node-to-node BGP mesh needs to be replaced with a distributed BGP route-reflector topology.
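As a hedged sketch of that direction (not part of the deployment below), disabling the full mesh and peering every node with a route reflector is expressed through Calico's BGPConfiguration and BGPPeer resources; the peer address and AS number here are made up:

apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: false    # turn off the full node-to-node mesh
  asNumber: 64512
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rr-peer
spec:
  peerIP: 172.17.61.100           # hypothetical route-reflector address
  asNumber: 64512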
3.1 Deploy Calico for networking and network policy
Note: the cluster in this example has no other network plugin deployed; when initializing the cluster, make sure the flannel network plugin is not installed.
Deploy the RBAC resources
https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
# Calico Version v3.3.7
# https://docs.projectcalico.org/v3.3/releases#v3.3.7
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources: [namespaces, serviceaccounts]
    verbs: [get, list, watch]
  - apiGroups: [""]
    resources: [pods/status]
    verbs: [patch]
  - apiGroups: [""]
    resources: [pods]
    verbs: [get, list, watch]
  - apiGroups: [""]
    resources: [services]
    verbs: [get]
  - apiGroups: [""]
    resources: [endpoints]
    verbs: [get]
  - apiGroups: [""]
    resources: [nodes]
    verbs: [get, list, update, watch]
  - apiGroups: ["extensions"]
    resources: [networkpolicies]
    verbs: [get, list, watch]
  - apiGroups: ["networking.k8s.io"]
    resources: [networkpolicies]
    verbs: [watch, list]
  - apiGroups: ["crd.projectcalico.org"]
    resources: [globalfelixconfigs, felixconfigurations, bgppeers, globalbgpconfigs, bgpconfigurations, ippools, globalnetworkpolicies, globalnetworksets, networkpolicies, clusterinformations, hostendpoints]
    verbs: [create, get, list, update, watch]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
  - kind: ServiceAccount
    name: calico-node
    namespace: kube-system
Deploy the Calico plugin
https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
# Calico Version v3.3.7# https://docs.projectcalico.org/v3.3/releases#v3.3.7# This manifest includes the following component versions:# calico/node:v3.3.7# calico/cni:v3.3.7# This ConfigMap is used to configure a self-hosted Calico installation.kind: ConfigMapapiVersion: v1metadata:name: calico-confignamespace: kube-systemdata:# To enable Typha, set this to "calico-typha" *and* set a non-zero value for Typha replicas# below. We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is# essential.typha_service_name: "none"# Configure the Calico backend to use.calico_backend: "bird"# Configure the MTU to useveth_mtu: "1440"# The CNI network configuration to install on each node. The special# values in this config will be automatically populated.cni_network_config: |-{"name": "k8s-pod-network","cniVersion": "0.3.0","plugins": [{"type": "calico","log_level": "info","datastore_type": "kubernetes","nodename": "__KUBERNETES_NODE_NAME__","mtu": __CNI_MTU__,"ipam": {"type": "host-local","subnet": "usePodCidr"},"policy": {"type": "k8s"},"kubernetes": {"kubeconfig": "__KUBECONFIG_FILEPATH__"}},{"type": "portmap","snat": true,"capabilities": {"portMappings": true}}]}---# This manifest creates a Service, which will be backed by Calico's Typha daemon.# Typha sits in between Felix and the API server, reducing Calico's load on the API server.apiVersion: v1kind: Servicemetadata:name: calico-typhanamespace: kube-systemlabels:k8s-app: calico-typhaspec:ports:- port: 5473protocol: TCPtargetPort: calico-typhaname: calico-typhaselector:k8s-app: calico-typha---# This manifest creates a Deployment of Typha to back the above service.apiVersion: apps/v1beta1kind: Deploymentmetadata:name: calico-typhanamespace: kube-systemlabels:k8s-app: calico-typhaspec:# Number of Typha replicas. To enable Typha, set this to a non-zero value *and* set the# typha_service_name variable in the calico-config ConfigMap above.## We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is essential# (when using the Kubernetes datastore). Use one replica for every 100-200 nodes. In# production, we recommend running at least 3 replicas to reduce the impact of rolling upgrade.replicas: 0revisionHistoryLimit: 2template:metadata:labels:k8s-app: calico-typhaannotations:# This, along with the CriticalAddonsOnly toleration below, marks the pod as a critical# add-on, ensuring it gets priority scheduling and that its resources are reserved# if it ever gets evicted.scheduler.alpha.kubernetes.io/critical-pod: ''cluster-autoscaler.kubernetes.io/safe-to-evict: 'true'spec:nodeSelector:beta.kubernetes.io/os: linuxhostNetwork: truetolerations:# Mark the pod as a critical add-on for rescheduling.- key: CriticalAddonsOnlyoperator: Exists# Since Calico can't network a pod until Typha is up, we need to run Typha itself# as a host-networked pod.serviceAccountName: calico-nodecontainers:- image: calico/typha:v3.3.7name: calico-typhaports:- containerPort: 5473name: calico-typhaprotocol: TCPenv:# Enable "info" logging by default. 
Can be set to "debug" to increase verbosity.- name: TYPHA_LOGSEVERITYSCREENvalue: "info"# Disable logging to file and syslog since those don't make sense in Kubernetes.- name: TYPHA_LOGFILEPATHvalue: "none"- name: TYPHA_LOGSEVERITYSYSvalue: "none"# Monitor the Kubernetes API to find the number of running instances and rebalance# connections.- name: TYPHA_CONNECTIONREBALANCINGMODEvalue: "kubernetes"- name: TYPHA_DATASTORETYPEvalue: "kubernetes"- name: TYPHA_HEALTHENABLEDvalue: "true"# Uncomment these lines to enable prometheus metrics. Since Typha is host-networked,# this opens a port on the host, which may need to be secured.#- name: TYPHA_PROMETHEUSMETRICSENABLED# value: "true"#- name: TYPHA_PROMETHEUSMETRICSPORT# value: "9093"livenessProbe:exec:command:- calico-typha- check- livenessperiodSeconds: 30initialDelaySeconds: 30readinessProbe:exec:command:- calico-typha- check- readinessperiodSeconds: 10---# This manifest creates a Pod Disruption Budget for Typha to allow K8s Cluster Autoscaler to evictapiVersion: policy/v1beta1kind: PodDisruptionBudgetmetadata:name: calico-typhanamespace: kube-systemlabels:k8s-app: calico-typhaspec:maxUnavailable: 1selector:matchLabels:k8s-app: calico-typha---# This manifest installs the calico/node container, as well# as the Calico CNI plugins and network config on# each master and worker node in a Kubernetes cluster.kind: DaemonSetapiVersion: extensions/v1beta1metadata:name: calico-nodenamespace: kube-systemlabels:k8s-app: calico-nodespec:selector:matchLabels:k8s-app: calico-nodeupdateStrategy:type: RollingUpdaterollingUpdate:maxUnavailable: 1template:metadata:labels:k8s-app: calico-nodeannotations:# This, along with the CriticalAddonsOnly toleration below,# marks the pod as a critical add-on, ensuring it gets# priority scheduling and that its resources are reserved# if it ever gets evicted.scheduler.alpha.kubernetes.io/critical-pod: ''spec:nodeSelector:beta.kubernetes.io/os: linuxhostNetwork: truetolerations:# Make sure calico-node gets scheduled on all nodes.- effect: NoScheduleoperator: Exists# Mark the pod as a critical add-on for rescheduling.- key: CriticalAddonsOnlyoperator: Exists- effect: NoExecuteoperator: ExistsserviceAccountName: calico-node# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.terminationGracePeriodSeconds: 0containers:# Runs calico/node container on each Kubernetes node. This# container programs network policy and routes on each# host.- name: calico-nodeimage: calico/node:v3.3.7env:# Use Kubernetes API as the backing datastore.- name: DATASTORE_TYPEvalue: "kubernetes"# Typha support: controlled by the ConfigMap.- name: FELIX_TYPHAK8SSERVICENAMEvalueFrom:configMapKeyRef:name: calico-configkey: typha_service_name# Wait for the datastore.- name: WAIT_FOR_DATASTOREvalue: "true"# Set based on the k8s node name.- name: NODENAMEvalueFrom:fieldRef:fieldPath: spec.nodeName# Choose the backend to use.- name: CALICO_NETWORKING_BACKENDvalueFrom:configMapKeyRef:name: calico-configkey: calico_backend# Cluster type to identify the deployment type- name: CLUSTER_TYPEvalue: "k8s,bgp"# Auto-detect the BGP IP address.- name: IPvalue: "autodetect"# Enable IPIP- name: CALICO_IPV4POOL_IPIPvalue: "Always"# Set MTU for tunnel device used if ipip is enabled- name: FELIX_IPINIPMTUvalueFrom:configMapKeyRef:name: calico-configkey: veth_mtu# The default IPv4 pool to create on startup if none exists. 
Pod IPs will be# chosen from this range. Changing this value after installation will have# no effect. This should fall within `--cluster-cidr`.- name: CALICO_IPV4POOL_CIDRvalue: "192.168.0.0/16"# Disable file logging so `kubectl logs` works.- name: CALICO_DISABLE_FILE_LOGGINGvalue: "true"# Set Felix endpoint to host default action to ACCEPT.- name: FELIX_DEFAULTENDPOINTTOHOSTACTIONvalue: "ACCEPT"# Disable IPv6 on Kubernetes.- name: FELIX_IPV6SUPPORTvalue: "false"# Set Felix logging to "info"- name: FELIX_LOGSEVERITYSCREENvalue: "info"- name: FELIX_HEALTHENABLEDvalue: "true"securityContext:privileged: trueresources:requests:cpu: 250mlivenessProbe:httpGet:path: /livenessport: 9099host: localhostperiodSeconds: 10initialDelaySeconds: 10failureThreshold: 6readinessProbe:exec:command:- /bin/calico-node- -bird-ready- -felix-readyperiodSeconds: 10volumeMounts:- mountPath: lib/modulesname: lib-modulesreadOnly: true- mountPath: run/xtables.lockname: xtables-lockreadOnly: false- mountPath: var/run/caliconame: var-run-calicoreadOnly: false- mountPath: var/lib/caliconame: var-lib-calicoreadOnly: false# This container installs the Calico CNI binaries# and CNI network config file on each node.- name: install-cniimage: calico/cni:v3.3.7command: ["/install-cni.sh"]env:# Name of the CNI config file to create.- name: CNI_CONF_NAMEvalue: "10-calico.conflist"# Set the hostname based on the k8s node name.- name: KUBERNETES_NODE_NAMEvalueFrom:fieldRef:fieldPath: spec.nodeName# The CNI network config to install on each node.- name: CNI_NETWORK_CONFIGvalueFrom:configMapKeyRef:name: calico-configkey: cni_network_config# CNI MTU Config variable- name: CNI_MTUvalueFrom:configMapKeyRef:name: calico-configkey: veth_mtuvolumeMounts:- mountPath: host/opt/cni/binname: cni-bin-dir- mountPath: host/etc/cni/net.dname: cni-net-dirvolumes:# Used by calico/node.- name: lib-moduleshostPath:path: /lib/modules- name: var-run-calicohostPath:path: /var/run/calico- name: var-lib-calicohostPath:path: /var/lib/calico- name: xtables-lockhostPath:path: /run/xtables.locktype: FileOrCreate# Used to install CNI.- name: cni-bin-dirhostPath:path: /opt/cni/bin- name: cni-net-dirhostPath:path: /etc/cni/net.d---apiVersion: v1kind: ServiceAccountmetadata:name: calico-nodenamespace: kube-system---# Create all the CustomResourceDefinitions needed for# Calico policy and networking mode.apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: felixconfigurations.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: FelixConfigurationplural: felixconfigurationssingular: felixconfiguration---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: bgppeers.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: BGPPeerplural: bgppeerssingular: bgppeer---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: bgpconfigurations.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: BGPConfigurationplural: bgpconfigurationssingular: bgpconfiguration---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: ippools.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: IPPoolplural: ippoolssingular: ippool---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: hostendpoints.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: 
HostEndpointplural: hostendpointssingular: hostendpoint---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: clusterinformations.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: ClusterInformationplural: clusterinformationssingular: clusterinformation---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: globalnetworkpolicies.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: GlobalNetworkPolicyplural: globalnetworkpoliciessingular: globalnetworkpolicy---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: globalnetworksets.crd.projectcalico.orgspec:scope: Clustergroup: crd.projectcalico.orgversion: v1names:kind: GlobalNetworkSetplural: globalnetworksetssingular: globalnetworkset---apiVersion: apiextensions.k8s.io/v1beta1kind: CustomResourceDefinitionmetadata:name: networkpolicies.crd.projectcalico.orgspec:scope: Namespacedgroup: crd.projectcalico.orgversion: v1names:kind: NetworkPolicyplural: networkpoliciessingular: networkpolicy
Check the status of the Calico components
[root@calico-master ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
calico-master     Ready    master   13m     v1.18.5
calico-worker01   Ready    <none>   8m41s   v1.18.5
calico-worker02   Ready    <none>   10m     v1.18.5
[root@calico-master ~]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-dtxw7                       2/2     Running   0          7m15s
calico-node-plmcr                       2/2     Running   0          7m15s
calico-node-qt6xm                       2/2     Running   0          7m15s
coredns-6f5c7bbdfb-8zf9l                1/1     Running   0          13m
coredns-6f5c7bbdfb-pqqs4                1/1     Running   0          13m
etcd-calico-master                      1/1     Running   0          13m
kube-apiserver-calico-master            1/1     Running   0          13m
kube-controller-manager-calico-master   1/1     Running   0          13m
kube-proxy-gblzj                        1/1     Running   0          10m
kube-proxy-pgbwm                        1/1     Running   0          13m
kube-proxy-tvw9z                        1/1     Running   0          8m54s
kube-scheduler-calico-master            1/1     Running   0          13m
[root@calico-master ~]# kubectl get pods -l k8s-app=calico-node -o wide -n kube-system
NAME                READY   STATUS    RESTARTS   AGE    IP             NODE              NOMINATED NODE   READINESS GATES
calico-node-dtxw7   2/2     Running   0          8m4s   172.17.61.32   calico-worker01   <none>           <none>
calico-node-plmcr   2/2     Running   0          8m4s   172.17.61.33   calico-worker02   <none>           <none>
calico-node-qt6xm   2/2     Running   0          8m4s   172.17.61.31   calico-master     <none>           <none>
The Calico network model
Calico generates, on every node, routes to the Pod subnets hosted on every other node in the Kubernetes cluster.
For every Pod created on a node, Calico automatically creates a veth pair: one end stays in the Pod's network namespace, while the other stays in the node's root network namespace and is named with a cali prefix followed by random characters.
[root@calico-master ~]# ifconfig tunl0
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 192.168.0.1  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@calico-master ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask           Flags Metric Ref    Use Iface
192.168.0.0     0.0.0.0         255.255.255.0     U     0      0        0 *
192.168.0.2     0.0.0.0         255.255.255.255   UH    0      0        0 cali7607cae07ed
192.168.0.3     0.0.0.0         255.255.255.255   UH    0      0        0 cali12205d7848a
192.168.1.0     172.17.61.33    255.255.255.0     UG    0      0        0 tunl0
192.168.2.0     172.17.61.32    255.255.255.0     UG    0      0        0 tunl0
[root@calico-master ~]# ip route show
blackhole 192.168.0.0/24 proto bird
192.168.0.2 dev cali7607cae07ed scope link
192.168.0.3 dev cali12205d7848a scope link
192.168.1.0/24 via 172.17.61.33 dev tunl0 proto bird onlink
192.168.2.0/24 via 172.17.61.32 dev tunl0 proto bird onlink
[root@calico-worker01 ~]# ifconfig tunl0
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 192.168.2.1  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 313  bytes 26292 (25.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 313  bytes 26292 (25.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@calico-worker01 ~]# ifconfig calia5c8a550e8e@if4
calia5c8a550e8e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@calico-worker01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask           Flags Metric Ref    Use Iface
192.168.0.0     172.17.61.31    255.255.255.0     UG    0      0        0 tunl0
192.168.1.0     172.17.61.33    255.255.255.0     UG    0      0        0 tunl0
192.168.2.0     0.0.0.0         255.255.255.0     U     0      0        0 *
192.168.2.2     0.0.0.0         255.255.255.255   UH    0      0        0 calia5c8a550e8e
[root@calico-worker02 ~]# ifconfig tunl0
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 192.168.1.1  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 313  bytes 26292 (25.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 313  bytes 26292 (25.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@calico-worker02 ~]# ifconfig cali6b60c0e802b@if4
cali6b60c0e802b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@calico-worker02 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask           Flags Metric Ref    Use Iface
192.168.0.0     172.17.61.31    255.255.255.0     UG    0      0        0 tunl0
192.168.1.0     0.0.0.0         255.255.255.0     U     0      0        0 *
192.168.1.2     0.0.0.0         255.255.255.255   UH    0      0        0 cali6b60c0e802b
192.168.2.0     172.17.61.32    255.255.255.0     UG    0      0        0 tunl0
# As shown above, every node has routes to the Pod subnets hosted on every other node in the cluster.
[root@calico-master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
mydeployment-5cbd66595b-qlqgd   1/1     Running   0          14m   192.168.2.2   calico-worker01   <none>           <none>
mydeployment-5cbd66595b-wjfgp   1/1     Running   0          14m   192.168.1.2   calico-worker02   <none>           <none>
[root@calico-master ~]# kubectl exec mydeployment-5cbd66595b-qlqgd -it /bin/sh
/ # ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2): 56 data bytes
64 bytes from 192.168.1.2: seq=0 ttl=62 time=0.786 ms
64 bytes from 192.168.1.2: seq=1 ttl=62 time=0.508 ms
64 bytes from 192.168.1.2: seq=2 ttl=62 time=0.510 ms
[root@calico-worker01 ~]# tcpdump -i eth1 -nn host 172.17.61.32 and host 172.17.61.33
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
14:38:02.599748 IP 172.17.61.32 > 172.17.61.33: IP 192.168.2.2 > 192.168.1.2: ICMP echo request, id 3072, seq 80, length 64 (ipip-proto-4)
14:38:02.600099 IP 172.17.61.33 > 172.17.61.32: IP 192.168.1.2 > 192.168.2.2: ICMP echo reply, id 3072, seq 80, length 64 (ipip-proto-4)
14:38:03.600130 IP 172.17.61.32 > 172.17.61.33: IP 192.168.2.2 > 192.168.1.2: ICMP echo request, id 3072, seq 81, length 64 (ipip-proto-4)
14:38:03.600559 IP 172.17.61.33 > 172.17.61.32: IP 192.168.1.2 > 192.168.2.2: ICMP echo reply, id 3072, seq 81, length 64 (ipip-proto-4)
14:38:04.600384 IP 172.17.61.32 > 172.17.61.33: IP 192.168.2.2 > 192.168.1.2: ICMP echo request, id 3072, seq 82, length 64 (ipip-proto-4)
/ # ping 114.114.114.114
PING 114.114.114.114 (114.114.114.114): 56 data bytes
64 bytes from 114.114.114.114: seq=0 ttl=63 time=18.576 ms
64 bytes from 114.114.114.114: seq=1 ttl=69 time=19.023 ms
64 bytes from 114.114.114.114: seq=2 ttl=87 time=17.733 ms
[root@calico-worker01 ~]# tcpdump -i eth0 -nn host 114.114.114.114
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
16:11:16.295633 IP 10.1.112.114 > 114.114.114.114: ICMP echo request, id 7168, seq 164, length 64
16:11:16.313125 IP 114.114.114.114 > 10.1.112.114: ICMP echo reply, id 7168, seq 164, length 64
16:11:17.295839 IP 10.1.112.114 > 114.114.114.114: ICMP echo request, id 7168, seq 165, length 64
16:11:17.313148 IP 114.114.114.114 > 10.1.112.114: ICMP echo reply, id 7168, seq 165, length 64
# Pods in this Calico setup can reach the Internet directly; 10.1.112.114 is the address of the interface worker01 uses to reach the Internet (the Pod address is SNATed to it because the IP pool has natOutgoing enabled).
Tip: packet captures in the node network namespace show that Pod-to-Pod traffic is forwarded through an IPIP (layer-3) tunnel: the outer IP header carries the two node addresses, while the inner IP header carries the two Pod addresses.
3.2 The calicoctl client tool
Download the calicoctl binary
[root@calico-master ~]# wget https://github.com/projectcalico/calicoctl/releases/download/v3.16.4/calicoctl -O /usr/bin/calicoctl
[root@calico-master ~]# chmod a+x /usr/bin/calicoctl
[root@calico-master ~]# mkdir -pv /etc/calico/
Configure access credentials
[root@calico-master ~]# DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get nodes
NAME
calico-master
calico-worker01
calico-worker02
[root@calico-master ~]# DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get ipPool -o yaml
apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    creationTimestamp: "2020-10-31T06:35:02Z"
    name: default-ipv4-ippool
    resourceVersion: "1433"
    uid: 600ad0bf-76bb-462d-b484-79ab8871c26d
  spec:
    blockSize: 26
    cidr: 192.168.0.0/16
    ipipMode: Always
    natOutgoing: true
    nodeSelector: all()
kind: IPPoolList
metadata:
  resourceVersion: "16639"
Tip: calicoctl works by reading Calico's datastore, so it needs access credentials; these can be supplied through environment variables (as above) or through a configuration file:
[root@calico-master ~]# cat /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"
[root@calico-master ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 172.17.61.32 | node-to-node mesh | up    | 06:35:13 | Established |
| 172.17.61.33 | node-to-node mesh | up    | 06:35:19 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
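With calicoctl working, the IP pool shown earlier can also be adjusted; for example, a sketch of switching IPIP to cross-subnet-only encapsulation (nodes in the same subnet then route Pod traffic natively) would look roughly like this:

calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  blockSize: 26
  cidr: 192.168.0.0/16
  ipipMode: CrossSubnet     # encapsulate only when crossing an L3 boundary
  natOutgoing: true
  nodeSelector: all()
EOF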