This article walks through deploying the Flannel component for a Kubernetes cluster, covering both Flannel's configuration and its deployment procedure. Hopefully you will get something useful out of it.
Flannel container cluster network deployment

Overlay Network: an overlay is a virtual networking technique layered on top of the underlying (physical) network; the hosts in the overlay are connected to each other through virtual links.
VXLAN: encapsulates the original packet inside UDP, uses the underlay network's IP/MAC addresses as the outer header, and transmits the result over Ethernet; at the destination, the tunnel endpoint decapsulates the packet and delivers the data to the target address.
Flannel: one implementation of an overlay network. It likewise wraps the original packet inside another network packet for routing, forwarding, and communication, and it currently supports UDP, VXLAN, AWS VPC, and GCE routing, among other forwarding methods.
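To make the VXLAN idea above more concrete, the commands below create a stand-alone VXLAN tunnel endpoint by hand with iproute2. Flannel's vxlan backend builds an equivalent device (flannel.1) automatically, so this is purely illustrative; the device name, VNI, interface, and address are made-up values and are not part of the deployment that follows:

# create a VXLAN device by hand, just to illustrate what a tunnel endpoint looks like
ip link add vxlan0 type vxlan id 1 dev ens33 dstport 8472   # VNI 1, encapsulated in UDP on port 8472
ip addr add 10.244.1.0/32 dev vxlan0                        # give the endpoint an address
ip link set vxlan0 up                                       # bring the device up
ip -d link show vxlan0                                      # inspect the VXLAN parameters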
About Flannel
Flannel is an overlay network tool designed by the CoreOS team for Kubernetes. Its goal is to give every CoreOS host that runs Kubernetes a complete subnet of its own. Flannel provides a virtual network for containers by allocating one subnet per host; it is based on the Linux TUN/TAP devices, encapsulates IP packets in UDP to build the overlay, and relies on etcd to keep track of how the network has been allocated.

How Flannel works
Flannel is a network-planning service the CoreOS team designed for Kubernetes. In short, it makes sure that the Docker containers created on different nodes of the cluster receive virtual IP addresses that are unique across the whole cluster. In the default Docker configuration, the Docker daemon on each node assigns IP addresses to the containers on that node independently: containers within one node can reach each other, but containers on different nodes cannot communicate. Flannel re-plans how IP addresses are used across all nodes of the cluster, so that containers on different nodes receive addresses that belong to "the same internal network" and "do not overlap", and containers on different nodes can talk to each other directly over those internal IPs.

Flannel uses etcd to store its configuration data and subnet allocation information. When flanneld starts, the background process first retrieves the configuration and the list of subnets already in use, picks an available subnet, and tries to register it. etcd also stores the host IP that corresponds to each subnet. flanneld uses etcd's watch mechanism to monitor changes under /coreos.com/network/subnets and maintains a routing table based on them. To improve performance, Flannel optimizes the universal TAP/TUN device and proxies the IP fragmentation between the TUN device and UDP.

Flannel schematic
As the schematic illustrates, Flannel's working principle can be described as follows:
After a packet leaves the source container, it is forwarded by the host's docker0 virtual bridge to the flannel0 virtual interface. flannel0 is a point-to-point virtual device, and the flanneld service listens on its other end. Through the etcd service, Flannel maintains a routing table of the cluster nodes that records each node host's subnet. The flanneld service on the source host encapsulates the original payload in UDP and, according to its routing table, delivers it to the flanneld service on the destination node. Once the data arrives it is unpacked, enters the destination node's flannel0 interface, is forwarded to that host's docker0 bridge, and finally docker0 routes it to the target container just as it would for local container traffic.

Besides UDP, Flannel supports a number of other backends:
udp: user-space UDP encapsulation, port 8285 by default; because packets are encapsulated and decapsulated in user space, performance suffers noticeably.
vxlan: VXLAN encapsulation; requires configuring a VNI, a port (8472 by default), and GBP.
host-gw: direct routing; the container network's routes are written straight into the host routing table. Only usable when the hosts can reach each other directly at layer 2.
aws-vpc: creates routes in an Amazon VPC route table; for containers running on AWS.
gce: creates routes through the Google Compute Engine network; every instance must have IP forwarding enabled; for containers running on GCE.
ali-vpc: creates routes in an Alibaba Cloud VPC route table; for containers running on Alibaba Cloud.

Lab deployment

Lab environment:
Master01: 192.168.80.12
Node01: 192.168.80.13
Node02: 192.168.80.14

This deployment continues from the previous article, so the environment is unchanged. Flannel only needs to be deployed on the node machines; it is not needed on the master.

Flannel deployment

Deploy the Docker engine on node01 and node02:

[root@node01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2     //install the dependency packages
Loaded plugins: fastestmirror
base                                               | 3.6 kB  00:00:00
extras                                             | 2.9 kB  00:00:00
...
[root@node01 ~]# yum-config-manager --add-repo docker-ce.repo     //configure the Aliyun mirror repository
Loaded plugins: fastestmirror
adding repo from: docker-ce.repo
grabbing file docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@node01 ~]# yum install -y docker-ce     //install Docker CE
Loaded plugins: fastestmirror
docker-ce-stable                                   | 3.5 kB  00:00:00
(1/2): docker-ce-stable/x86_64/updateinfo          |   55 B  00:00:01
(2/2): docker-ce-stable/x86_64/primary_db          |  37 kB  00:00:01
Loading mirror speeds from cached hostfile
...
[root@node01 ~]# systemctl start docker.service      //start the docker service
[root@node01 ~]# systemctl enable docker.service     //enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@node01 ~]# tee /etc/docker/daemon.json <<-'EOF'     //configure an image registry mirror
> {
>   "registry-mirrors": ["https://**********.aliyuncs.com"]
> }
> EOF
{
  "registry-mirrors": ["https://**********.aliyuncs.com"]
}
[root@node01 ~]# systemctl daemon-reload      //reload the systemd configuration
[root@node01 ~]# systemctl restart docker     //restart docker
[root@node01 ~]# vim /etc/sysctl.conf         //enable IP forwarding
...
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.ip_forward=1
:wq
[root@node01 ~]# sysctl -p     //reload the kernel parameters
net.ipv4.ip_forward = 1
[root@node01 ~]# service network restart      //restart the network
Restarting network (via systemctl):                        [  OK  ]
[root@node01 ~]# systemctl restart docker     //restart the docker service
[root@node01 ~]# docker version               //check the docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
...                                           //docker deployment is complete

On master01:

[root@master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'     //write the allocated network range into etcd for flannel to use
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
[root@master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379" get /coreos.com/network/config     //confirm the entry was written
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
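The backend is chosen by the "Backend" field of this same etcd key. If, for example, all nodes sat on the same layer-2 segment, the write above could select the host-gw backend instead of vxlan. This alternative is shown only for illustration and is not used in this deployment:

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379" \
  set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "host-gw"}}'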
[root@master01 etcd-cert]# cd ..     //go back to the k8s directory
[root@master01 k8s]# ls              //check that the flannel package is present
cfssl.sh   etcd-v3.3.10-linux-amd64             kubernetes-server-linux-amd64.tar.gz
etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz
etcd.sh    flannel-v0.10.0-linux-amd64.tar.gz
[root@master01 k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz flannel.sh [email protected]:/root     //copy the package to node01
[email protected]'s password:
flannel-v0.10.0-linux-amd64.tar.gz                           100% 9479KB  61.1MB/s   00:00
flannel.sh: No such file or directory
[root@master01 k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz flannel.sh [email protected]:/root     //copy the package to node02
[email protected]'s password:
flannel-v0.10.0-linux-amd64.tar.gz                           100% 9479KB 119.3MB/s   00:00
flannel.sh: No such file or directory

(The "flannel.sh: No such file or directory" messages simply mean flannel.sh does not exist in this directory yet, so only the tarball is transferred; the script is written by hand on the nodes in the next step.)
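With more nodes, the copy step above is easier to express as a loop. This is only a convenience sketch; the node IPs are the two used in this lab and the tarball is assumed to be in the current directory:

# push the Flannel package to every node in one pass
for node in 192.168.80.13 192.168.80.14; do
    scp flannel-v0.10.0-linux-amd64.tar.gz root@${node}:/root
done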
Perform the following on both node01 and node02:

[root@node01 ~]# ls     //confirm the package arrived
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz
[root@node01 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz     //unpack it
flanneld
mk-docker-opts.sh
README.md
[root@node01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p     //recursively create the k8s working directories
[root@node01 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/     //move the script and binary into the bin directory
[root@node01 ~]# vim flannel.sh     //write the flannel startup script, which generates the config file and systemd unit
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
:wq
[root@node01 ~]# bash flannel.sh https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379     //run the script, pointing it at the etcd cluster, to bring up the flannel network
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@node01 ~]# vim /usr/lib/systemd/system/docker.service     //make the docker unit consume the options generated by flannel
...
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env     //added: load the environment file written by flannel
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock     //added: the $DOCKER_NETWORK_OPTIONS variable
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
...
:wq
[root@node01 ~]# cat /run/flannel/subnet.env     //inspect the file docker reads when it connects to flannel
DOCKER_OPT_BIP="--bip=172.17.49.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.49.1/24 --ip-masq=false --mtu=1450"     //--bip sets the subnet the docker bridge starts with

Note: the startup subnets assigned to node01 and node02 both fall inside the 172.17.0.0/16 network that was written into etcd.
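At this point flanneld on each node should have registered its subnet lease in etcd. As an optional check, the lease keys can be listed from master01 with the same etcdctl flags used earlier; the exact subnet values will differ between deployments:

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379" \
  ls /coreos.com/network/subnets
# expect one entry per node, e.g. /coreos.com/network/subnets/172.17.49.0-24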
Check the network:

[root@node01 ~]# systemctl daemon-reload      //reload the systemd configuration
[root@node01 ~]# systemctl restart docker     //restart docker so it picks up the new options
[root@node01 ~]# ifconfig                     //check the network interfaces
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.49.1  netmask 255.255.255.0  broadcast 172.17.49.255     //docker0 bridge address
        ...
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.13  netmask 255.255.255.0  broadcast 192.168.80.255
        ...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.49.0  netmask 255.255.255.255  broadcast 0.0.0.0     //flannel interface address
        ...
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        ...
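Beyond ifconfig, the host routing table and the VXLAN device details also show how traffic for the other node's subnet will be handled. These are standard iproute2 commands and are only an optional check:

ip route show                 # a route for the remote subnet (e.g. 172.17.63.0/24) should point at flannel.1
ip -d link show flannel.1     # shows the VXLAN id and the UDP port (8472 by default) used for encapsulation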
On node02:

[root@node02 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.63.1  netmask 255.255.255.0  broadcast 172.17.63.255     //docker0 bridge address
        ...
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.80.14  netmask 255.255.255.0  broadcast 192.168.80.255
        ...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.63.0  netmask 255.255.255.255  broadcast 0.0.0.0     //flannel interface address
        ...
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        ...
[root@node02 ~]# ping 172.17.49.1     //ping node01's docker0 address to test cross-host connectivity
PING 172.17.49.1 (172.17.49.1) 56(84) bytes of data.
64 bytes from 172.17.49.1: icmp_seq=1 ttl=64 time=0.344 ms
64 bytes from 172.17.49.1: icmp_seq=2 ttl=64 time=0.333 ms
64 bytes from 172.17.49.1: icmp_seq=3 ttl=64 time=0.346 ms
^C
--- 172.17.49.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.333/0.341/0.346/0.005 ms

On both node01 and node02, start a test container:

[root@node01 ~]# docker run -it centos:7 /bin/bash     //run a centos:7 container
Unable to find image 'centos:7' locally
7: Pulling from library/centos
ab5ef0e58194: Pull complete
Digest: sha256:4a701376d03f6b39b8c2a8f4a8e499441b0d567f9ab9d58e4991de4472fb813c
Status: Downloaded newer image for centos:7
[root@e8ee45a4fd28 /]# yum install net-tools -y     //install net-tools inside the container so ifconfig is available
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
 * base: mirrors.163.com
 * extras: mirrors.163.com
...
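If you prefer to read a container's address from the host rather than from inside the container, docker inspect can print it directly. The container name test01 below is only a placeholder for whichever container you started:

# print a container's bridge IP address from the host (test01 is a hypothetical container name)
docker inspect -f '{{.NetworkSettings.IPAddress}}' test01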
On node01 (inside the container):

[root@e8ee45a4fd28 /]# ifconfig     //check the container's interfaces
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.49.2  netmask 255.255.255.0  broadcast 172.17.49.255
        ...
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        ...

On node02 (inside the container):

[root@47aa8b55a61a /]# ifconfig     //check the container's interfaces
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.63.2  netmask 255.255.255.0  broadcast 172.17.63.255
        ...
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        ...
[root@47aa8b55a61a /]# ping 172.17.49.2     //from the container on node02, ping the container on node01 to verify cross-host container communication
PING 172.17.49.2 (172.17.49.2) 56(84) bytes of data.
64 bytes from 172.17.49.2: icmp_seq=1 ttl=62 time=0.406 ms
64 bytes from 172.17.49.2: icmp_seq=2 ttl=62 time=0.377 ms
64 bytes from 172.17.49.2: icmp_seq=3 ttl=62 time=0.389 ms
64 bytes from 172.17.49.2: icmp_seq=4 ttl=62 time=0.356 ms
^C
--- 172.17.49.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.356/0.382/0.406/0.018 ms     //the containers communicate successfully

Having read this article, can you now deploy the Flannel component yourself? If you want to pick up more skills or learn more about related topics, follow the Vecloud industry news channel. Thanks for reading!