This article deploys a native Kubernetes v1.23.6 cluster on CentOS 7, using docker as the container runtime and calico as the CNI component. Since the cluster is mainly for my own learning and testing, and resources are limited, high-availability deployment is not covered for now.
1. Preparation
1.1 Machine configuration
The machines are all 8C8G virtual machines with a 100G hard disk.
| IP | Hostname |
| ---- | ---- |
| 10.31.88.1 | tiny-calico-master-88-1.k8s.tcinternal |
| 10.31.88.11 | tiny-calico-worker-88-11.k8s.tcinternal |
| 10.31.88.12 | tiny-calico-worker-88-12.k8s.tcinternal |
| 10.88.64.0/18 | podSubnet |
| 10.88.0.0/18 | serviceSubnet |
1.2 Checking mac and product_uuid
All nodes in the same k8s cluster need to make sure that both the mac address and the product_uuid are unique, so you need to check the relevant information before starting cluster initialization.
# check the mac addresses of all network interfaces
ip link
ifconfig -a
# check the product_uuid
sudo cat /sys/class/dmi/id/product_uuid
If the nodes in the k8s cluster have multiple NICs, make sure that each node can be reached over the intended NIC.
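A quick way to see which interface and source address would actually be used to reach another node is ip route get (this check is an addition to the original steps; the target IP here is just the master's address from the table above):

# show the outgoing interface and source IP used to reach the master node
ip route get 10.31.88.1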
1.3 Configuring SSH passwordless login
# generate a shared key as the root user and configure passwordless login with it
su root
ssh-keygen
cd /root/.ssh/
cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys
cat >> ~/.ssh/config <<EOF
Host tiny-calico-master-88-1.k8s.tcinternal
HostName 10.31.88.1
User root
Port 22
IdentityFile ~/.ssh/id_rsa
Host tiny-calico-worker-88-11.k8s.tcinternal
HostName 10.31.88.11
User root
Port 22
IdentityFile ~/.ssh/id_rsa
Host tiny-calico-worker-88-12.k8s.tcinternal
HostName 10.31.88.12
User root
Port 22
IdentityFile ~/.ssh/id_rsa
EOF
1.4 Modify the hosts file
cat >> /etc/hosts <<EOF
10.31.88.1 tiny-calico-master-88-1 tiny-calico-master-88-1.k8s.tcinternal
10.31.88.11 tiny-calico-worker-88-11 tiny-calico-worker-88-11.k8s.tcinternal
10.31.88.12 tiny-calico-worker-88-12 tiny-calico-worker-88-12.k8s.tcinternal
EOF
1.5 Turn off swap memory
# turn swap off immediately
swapoff -a
# and comment out the swap entries in /etc/fstab so it stays off after a reboot
sed -i '/swap / s/^\(.*\)$/#\1/g' /etc/fstab
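To confirm that swap is fully disabled (an extra check, not part of the original steps):

cat /proc/swaps   # should list no swap devices
free -m           # the Swap line should show 0 total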
1.6 Configuring time synchronization
You can use either ntp or chrony for time synchronization, whichever you prefer. For the upstream time source you can use Aliyun's ntp1.aliyun.com or the National Time Service Center's ntp.ntsc.ac.cn.
Using ntp for time synchronization
# install the ntpdate tool with yum
yum install ntpdate -y
# sync the time from the National Time Service Center's source
ntpdate ntp.ntsc.ac.cn
# finally, check the time
hwclock
Using chrony for time synchronization
# install chrony with yum
yum install chrony -y
# enable chronyd at boot, start it, and check its status
systemctl enable chronyd.service
systemctl start chronyd.service
systemctl status chronyd.service
# you can also customise the time servers
vim /etc/chrony.conf
# before the change
$ grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
# after the change
$ grep server /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
server ntp.ntsc.ac.cn iburst
# restart the service so the configuration takes effect
systemctl restart chronyd.service
# check the state of chrony's ntp servers
chronyc sourcestats -v
chronyc sources -v
1.7 Shutting down selinux
# disable selinux immediately with setenforce
setenforce 0
# or make it permanent by editing /etc/selinux/config
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
1.8 Configuring Firewalls
Communication between k8s cluster components and exposing services require a large number of ports, so for convenience we simply disable the firewall.

# on centos7, stop and disable the default firewalld service with systemctl
systemctl disable --now firewalld.service
1.9 Configuring netfilter parameters
The main task here is to have the kernel load br_netfilter and to let iptables see bridged IPv4 and IPv6 traffic, so that containers in the cluster can communicate properly.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
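Note that the net.bridge.* sysctls only exist once br_netfilter is loaded; the modules-load.d file above takes effect at boot, so if you have not rebooted yet you may need to load the module by hand first. A quick verification (an addition to the original steps):

# load the module now if it is not loaded yet, then re-apply and check the sysctls
sudo modprobe br_netfilter
sudo sysctl --system
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables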
1.10 Turn off IPV6 (optional)
Although newer versions of k8s already support dual-stack networks, this cluster deployment process does not involve communication over IPv6 networks, so turn off IPv6 network support.
# add the ipv6 disable parameter directly to the kernel boot arguments
grubby --update-kernel=ALL --args=ipv6.disable=1
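grubby only edits the kernel boot arguments, so the change takes effect after the next reboot. A quick verification (not in the original article):

# confirm the argument was added to the boot entries
grubby --info=ALL | grep -c 'ipv6.disable=1'
# after a reboot, confirm the running kernel picked it up
grep -o 'ipv6.disable=1' /proc/cmdline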
1.11 Configuring IPVS (optional)
IPVS is a kernel component built specifically for load-balancing scenarios. The IPVS mode of kube-proxy scales better because it reduces its reliance on iptables: instead of hooking PREROUTING in the iptables chains, it creates a dummy interface called kube-ipvs0, which lets IPVS forward traffic more efficiently than iptables once the amount of load-balancing configuration in a k8s cluster grows large.
(Note: use nf_conntrack instead of nf_conntrack_ipv4 for Linux kernel 4.19 and later.)
# make sure ipset and ipvsadm are installed before using ipvs mode
sudo yum install ipset ipvsadm -y
# load the ipvs-related modules manually
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
# configure the ipvs-related modules to load automatically at boot
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
sudo sysctl --system
# ideally reboot the system once and confirm the modules are loaded
$ lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4 15053 2
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
nf_conntrack 139264 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
$ cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh
ip_vs_wrr
ip_vs_rr
ip_vs
nf_conntrack_ipv4
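Once the cluster has been initialized later on (section 4) with kube-proxy in ipvs mode, this can be verified as follows (an extra check, assuming kube-proxy's default metrics address of 127.0.0.1:10249):

# kube-proxy reports its proxy mode on the metrics port
curl -s http://127.0.0.1:10249/proxyMode   # should print: ipvs
# and the service network should show up as ipvs virtual servers
ipvsadm -Ln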
2. Install container runtime
2.1 Installing docker
The detailed official documentation can be found here. Since docker-shim was removed in the just-released version 1.24, installations of version ≥ 1.24 need to pay attention to the choice of container runtime. Here we install a version lower than 1.24, so we continue to use docker.
# install the necessary dependencies and import docker's official yum repository
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# here we simply install the latest version of docker
yum install docker-ce docker-ce-cli containerd.io
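If you prefer to pin a specific docker release instead of the latest one, the repo can be queried for the available builds first (an optional variation, with <VERSION_STRING> as a placeholder):

# list the available docker-ce builds in the repository
yum list docker-ce --showduplicates | sort -r
# then install a specific build, e.g.
# sudo yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io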
2.2 Configuring cgroup drivers
CentOS 7 uses systemd to initialize the system and manage processes. The init process generates and uses a root control group (cgroup) and acts as the cgroup manager. systemd is tightly integrated with cgroups and assigns a cgroup to every systemd unit. It is also possible to configure the container runtime and the kubelet to use cgroupfs, but running cgroupfs alongside systemd means there are two different cgroup managers on the system, which tends to make it unstable. It is therefore best to configure both the container runtime and the kubelet to use systemd as the cgroup driver. For Docker, this means setting the native.cgroupdriver=systemd parameter.
sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
# finally, check that the Cgroup Driver is now systemd
$ docker info | grep systemd
Cgroup Driver: systemd
2.3 About kubelet’s cgroup driver
The official k8s documentation (https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/) describes how to set the kubelet’s cgroup driver
. Note in particular that starting with version 1.22, if the kubelet cgroup driver is not set manually, it will be set to systemd by default.
A simpler way to specify the cgroup driver
for a kubelet is to add the cgroupDriver
field to kubeadm-config.yaml
# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
We can check the configmaps directly to see the kubeadm-config configuration of the cluster after initialization.
$ kubectl describe configmaps kubeadm-config -n kube-system
Name:         kubeadm-config
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
ClusterConfiguration:
----
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
networking:
  dnsDomain: cali-cluster.tclocal
  serviceSubnet: 10.88.0.0/18
scheduler: {}

BinaryData
====

Events:  <none>
Of course, since we are installing a version newer than 1.22.0 and using systemd anyway, we do not need to configure this explicitly.
3. Install the kube components
The corresponding official documentation can be found here.
The kube components are kubeadm, kubelet and kubectl; their roles are as follows:
- kubeadm: the command used to bootstrap the cluster.
- kubelet: runs on every node in the cluster and starts Pods, containers, and so on.
- kubectl: the command-line tool used to talk to the cluster.
The following points are worth noting:
- kubeadm does not manage kubelet or kubectl (and vice versa); the three components are independent of one another, and none of them manages the others.
- The version of kubelet must be less than or equal to the version of the API server, otherwise compatibility issues are likely to arise.
- kubectl does not need to be installed on every node in the cluster, and in fact does not have to be installed on a cluster node at all; it can be installed on your own local machine, and with the right kubeconfig file you can use kubectl to manage the corresponding k8s cluster remotely (see the sketch after this list).
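For example, once the cluster is initialized (section 4), a minimal sketch of managing it from a separate workstation could look like this (the local file name is arbitrary):

# copy the admin kubeconfig generated by kubeadm on the master to the local machine
scp root@10.31.88.1:/etc/kubernetes/admin.conf ~/.kube/config-calico
# use it per command, or export it for the whole shell session
kubectl --kubeconfig ~/.kube/config-calico get nodes
export KUBECONFIG=~/.kube/config-calico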
Installation on CentOS 7 is relatively simple: we can use the official yum source directly. Note that the official instructions also set the selinux state at this point, but since we have already disabled selinux, we skip that step.
# import google's official yum repository directly
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# if google's repository is unreachable, consider using the domestic aliyun mirror instead
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# then simply install the three packages
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# if a poor network causes gpgcheck failures that prevent reading the yum repo, consider disabling repo_gpgcheck for this repo
sed -i 's/repo_gpgcheck=1/repo_gpgcheck=0/g' /etc/yum.repos.d/kubernetes.repo
# or disable gpgcheck at install time
sudo yum install -y kubelet kubeadm kubectl --nogpgcheck --disableexcludes=kubernetes
# to install a specific version, use this command to list the available versions
sudo yum list --nogpgcheck kubelet kubeadm kubectl --showduplicates --disableexcludes=kubernetes
# since we want to keep using docker-shim, we install 1.23.6, the release right before 1.24.0
sudo yum install -y kubelet-1.23.6-0 kubeadm-1.23.6-0 kubectl-1.23.6-0 --nogpgcheck --disableexcludes=kubernetes
# after installation, enable kubelet to start at boot
sudo systemctl enable --now kubelet
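As an optional safeguard (not part of the original article), the kube packages can also be locked so that a routine yum update does not upgrade them unexpectedly:

# lock kubelet/kubeadm/kubectl at their installed versions
sudo yum install -y yum-plugin-versionlock
sudo yum versionlock add kubelet kubeadm kubectl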
4. Initialize the cluster
4.1 Writing the configuration file
After all the nodes in the cluster have completed the three preparation sections above, we can start creating the k8s cluster. Since this is not a high-availability deployment, we simply work on the target master node during initialization.
# first use kubeadm to list the main image versions
# because we pinned the older 1.23.6 release earlier, the apiserver image version rolls back accordingly
$ kubeadm config images list
I0506 11:24:17.061315 16055 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.23
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
# to make editing and management easier, export the default initialization parameters to a configuration file
$ kubeadm config print init-defaults > kubeadm-calico.conf
We then adjust the exported configuration file:
- Since domestic networks usually cannot reach Google's k8s.gcr.io registry, change the imageRepository parameter in the configuration file to Ali's mirror.
- The kubernetesVersion field specifies the version of k8s we want to install.
- The localAPIEndpoint parameters need to be changed to the IP and port of our master node; this becomes the apiserver address of the k8s cluster after initialization.
- The serviceSubnet and dnsDomain parameters can be left at their defaults; here I changed them to suit my needs.
- The name parameter under nodeRegistration is changed to the hostname of the corresponding master node.
- A new configuration block at the end switches kube-proxy to ipvs mode; see the official documentation for details.
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.31.88.1
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: tiny-calico-master-88-1.k8s.tcinternal
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.6
networking:
  dnsDomain: cali-cluster.tclocal
  serviceSubnet: 10.88.0.0/18
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
4.2 Initialize the cluster
If we now check the image versions with this configuration file, we will see that they have switched to the AliCloud mirror source.
# check the image versions again to confirm the configuration file takes effect
$ kubeadm config images list --config kubeadm-calico.conf
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6
# once everything looks correct, pull the images
$ kubeadm config images pull --config kubeadm-calico.conf
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.6
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
# initialize the cluster
$ kubeadm init --config kubeadm-calico.conf
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
...a large amount of output omitted here...
When we see this output below, our cluster has been initialized successfully.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.31.88.1:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:a4189d36d164a865be540d48fcd10ff13e2f90ed6e901201b6ea2baf96dae0ae
4.3 Configuring kubeconfig
After successful initialization, we cannot view the k8s cluster information right away; we first need to configure kubeconfig so that kubectl can connect to the apiserver and read the cluster information properly.
# for non-root users, do the following
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# for the root user, simply export the environment variable
export KUBECONFIG=/etc/kubernetes/admin.conf
# enable kubectl command auto-completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
As we mentioned earlier, kubectl does not have to be installed inside the cluster. You can install kubectl on any machine that can reach the apiserver, configure kubeconfig as above, and then use the kubectl command line to manage the corresponding k8s cluster.
Once the configuration is complete, we can run a few commands to view information about the cluster.
$ kubectl cluster-info
Kubernetes control plane is running at https://10.31.88.1:6443
CoreDNS is running at https://10.31.88.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
tiny-calico-master-88-1.k8s.tcinternal NotReady control-plane,master 4m15s v1.23.6 10.31.88.1 <none> CentOS Linux 7 (Core) 3.10.0-1160.62.1.el7.x86_64 docker://20.10.14
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-6d8c4cb4d-r8r9q 0/1 Pending 0 4m20s <none> <none> <none> <none>
kube-system coredns-6d8c4cb4d-ztq6w 0/1 Pending 0 4m20s <none> <none> <none> <none>
kube-system etcd-tiny-calico-master-88-1.k8s.tcinternal 1/1 Running 0 4m25s 10.31.88.1 tiny-calico-master-88-1.k8s.tcinternal <none> <none>
kube-system kube-apiserver-tiny-calico-master-88-1.k8s.tcinternal 1/1 Running 0 4m26s 10.31.88.1 tiny-calico-master-88-1.k8s.tcinternal <none> <none>
kube-system kube-controller-manager-tiny-calico-master-88-1.k8s.tcinternal 1/1 Running 0 4m27s 10.31.88.1 tiny-calico-master-88-1.k8s.tcinternal <none> <none>
kube-system kube-proxy-v6cg9 1/1 Running 0 4m20s 10.31.88.1 tiny-calico-master-88-1.k8s.tcinternal <none> <none>
kube-system kube-scheduler-tiny-calico-master-88-1.k8s.tcinternal 1/1 Running 0 4m25s 10.31.88.1 tiny-calico-master-88-1.k8s.tcinternal <none> <none>
4.4 Adding worker nodes
We now need to add the remaining two nodes to the cluster as worker nodes to carry the workload; simply run the join command printed after the successful cluster initialization on each remaining node.
$ kubeadm join 10.31.88.1:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:a4189d36d164a865be540d48fcd10ff13e2f90ed6e901201b6ea2baf96dae0ae
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
It doesn't matter if we accidentally failed to save the output of the successful initialization; we can use the kubeadm tool to view or regenerate the token.
# list the existing tokens
$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
abcdef.0123456789abcdef 23h 2022-05-07T05:19:08Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
# if the token has expired, create a new one
$ kubeadm token create
e31cv1.lbtrzwp6mzon78ue
$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
abcdef.0123456789abcdef 23h 2022-05-07T05:19:08Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
e31cv1.lbtrzwp6mzon78ue 23h 2022-05-07T05:51:40Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
# if the --discovery-token-ca-cert-hash value is missing, it can be recomputed on the master node with openssl
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
a4189d36d164a865be540d48fcd10ff13e2f90ed6e901201b6ea2baf96dae0ae
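Alternatively, the whole join command, token and CA cert hash included, can be regenerated in one step:

# prints a ready-to-run 'kubeadm join ...' command for worker nodes
kubeadm token create --print-join-command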
After adding the nodes, we can see that there are two more nodes in the cluster, but their state is still NotReady, so next we need to deploy the CNI.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
tiny-calico-master-88-1.k8s.tcinternal NotReady control-plane,master 20m v1.23.6
tiny-calico-worker-88-11.k8s.tcinternal NotReady <none> 105s v1.23.6
tiny-calico-worker-88-12.k8s.tcinternal NotReady <none> 35s v1.23.6
5. Install CNI
5.1 Writing manifest files
The installation of calico is also relatively simple. The official documentation offers several installation methods; here we install from the yaml manifest (custom manifests) and use etcd as the datastore.
# download the official yaml template first, then modify the key fields one by one
curl https://projectcalico.docs.tigera.io/manifests/calico-etcd.yaml -O
For the calico-etcd.yaml
file, we need to modify some parameters to adapt to our cluster.
- The CALICO_IPV4POOL_CIDR parameter configures the pod network segment; here we use the previously planned 10.88.64.0/18. The CALICO_IPV4POOL_BLOCK_SIZE parameter configures the size of the per-node address blocks, and defaults to 26 (a quick sanity check on these numbers follows the snippet below).
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.88.64.0/18"
- name: CALICO_IPV4POOL_BLOCK_SIZE
  value: "26"
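As a quick sanity check on these two values (the arithmetic below is an addition, not part of the manifest): a /18 pool contains 2^(32-18) = 16384 pod IPs and is carved into 2^(26-18) = 256 blocks of /26 (64 addresses each), which calico hands out to nodes as needed. The 16384 figure shows up again later in the calicoctl ipam show output.

echo $(( 2 ** (32 - 18) ))   # 16384 pod IPs in the pool
echo $(( 2 ** (26 - 18) ))   # 256 allocatable /26 blocks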
- The CALICO_IPV4POOL_IPIP parameter controls whether IP-in-IP mode is enabled; the default is Always. Since our nodes are all in the same layer-2 network, we can change it to Never or CrossSubnet. Never means IP-in-IP is never used, while CrossSubnet enables IP-in-IP only for traffic that crosses a subnet boundary.
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Never"
- The etcd_endpoints variable in the ConfigMap configures the address and port used to connect to etcd. For security we enable TLS authentication here; if you do not want to configure certificates, you can skip TLS and leave the three certificate fields unchanged.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  # etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  # etcd_ca: ""   # "/calico-secrets/etcd-ca"
  # etcd_cert: "" # "/calico-secrets/etcd-cert"
  # etcd_key: ""  # "/calico-secrets/etcd-key"
  etcd_endpoints: "https://10.31.88.1:2379"
  etcd_ca: "/etc/kubernetes/pki/etcd/ca.crt"
  etcd_cert: "/etc/kubernetes/pki/etcd/server.crt"
  etcd_key: "/etc/kubernetes/pki/etcd/server.key"
- The data fields under name: calico-etcd-secrets in the Secret need to be filled with the three certificates above, each converted to base64 with a command such as cat <file> | base64 -w 0 (see the sketch below).
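For example, with the kubeadm-generated etcd certificates referenced in the ConfigMap above, the three values can be produced on the master like this (a sketch of the encoding step):

base64 -w 0 /etc/kubernetes/pki/etcd/ca.crt; echo        # value for etcd-ca
base64 -w 0 /etc/kubernetes/pki/etcd/server.crt; echo    # value for etcd-cert
base64 -w 0 /etc/kubernetes/pki/etcd/server.key; echo    # value for etcd-key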
---
# Source: calico/templates/calico-etcd-secrets.yaml
# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # The keys below should be uncommented and the values populated with the base64
  # encoded contents of each file that would be associated with the TLS data.
  # Example command for encoding a file contents: cat <file> | base64 -w 0
  etcd-key: LS0tLS1CRUdJTi......tLS0tCg==
  etcd-cert: LS0tLS1CRUdJT......tLS0tLQo=
  etcd-ca: LS0tLS1CRUdJTiB......FLS0tLS0K
5.2 Deploying calico
Once the modifications are complete, we can deploy it directly.
$ kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
# check that the pods are running properly
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-5c4bd49f9b-6b2gr 1/1 Running 5 (3m18s ago) 6m18s
kube-system calico-node-bgsfs 1/1 Running 5 (2m55s ago) 6m18s
kube-system calico-node-tr88g 1/1 Running 5 (3m19s ago) 6m18s
kube-system calico-node-w59pc 1/1 Running 5 (2m36s ago) 6m18s
kube-system coredns-6d8c4cb4d-r8r9q 1/1 Running 0 3h8m
kube-system coredns-6d8c4cb4d-ztq6w 1/1 Running 0 3h8m
kube-system etcd-tiny-calico-master-88-1.k8s.tcinternal 1/1 Running 0 3h8m
kube-system kube-apiserver-tiny-calico-master-88-1.k8s.tcinternal 1/1 Running 0 3h8m
kube-system kube-controller-manager-tiny-calico-master-88-1.k8s.tcinternal 1/1 Running 0 3h8m
kube-system kube-proxy-n65sb 1/1 Running 0 169m
kube-system kube-proxy-qmxhp 1/1 Running 0 168m
kube-system kube-proxy-v6cg9 1/1 Running 0 3h8m
kube-system kube-scheduler-tiny-calico-master-88-1.k8s.tcinternal 1/1 Running 0 3h8m
# check the calico-kube-controllers pod logs for errors
$ kubectl logs -f calico-kube-controllers-5c4bd49f9b-6b2gr -n kube-system
5.3 Installing calicoctl as a pod
calicoctl is a command-line tool for viewing and managing calico, positioned somewhat like a calico counterpart to kubectl. Since we used etcd as the calico datastore earlier, the simplest option is to deploy calicoctl as a pod in the k8s cluster.
- The version of calicoctl should ideally match the deployed calico; here both are v3.22.2.
- The etcd configuration of calicoctl should match that of the deployed calico. Because the calico deployment above uses TLS for etcd, we also need to modify this yaml file to enable TLS.
# for easier management later, download the calicoctl yaml locally before deploying it
$ wget https://projectcalico.docs.tigera.io/manifests/calicoctl-etcd.yaml
$ cat calicoctl-etcd.yaml
# Calico Version v3.22.2
# https://projectcalico.docs.tigera.io/releases#v3.22.2
# This manifest includes the following component versions:
#   calico/ctl:v3.22.2
apiVersion: v1
kind: Pod
metadata:
  name: calicoctl
  namespace: kube-system
spec:
  nodeSelector:
    kubernetes.io/os: linux
  hostNetwork: true
  containers:
  - name: calicoctl
    image: calico/ctl:v3.22.2
    command:
    - /calicoctl
    args:
    - version
    - --poll=1m
    env:
    - name: ETCD_ENDPOINTS
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_endpoints
    # If you're using TLS enabled etcd uncomment the following.
    # Location of the CA certificate for etcd.
    - name: ETCD_CA_CERT_FILE
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_ca
    # Location of the client key for etcd.
    - name: ETCD_KEY_FILE
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_key
    # Location of the client certificate for etcd.
    - name: ETCD_CERT_FILE
      valueFrom:
        configMapKeyRef:
          name: calico-config
          key: etcd_cert
    volumeMounts:
    - mountPath: /calico-secrets
      name: etcd-certs
  volumes:
    # If you're using TLS enabled etcd uncomment the following.
    - name: etcd-certs
      secret:
        secretName: calico-etcd-secrets
After the modifications are done we deploy it directly and use it.
$ kubectl apply -f calicoctl-etcd.yaml
pod/calicoctl created
# after creation, check the running state of calicoctl
$ kubectl get pods -A | grep calicoctl
kube-system calicoctl 1/1 Running 0 9s
# verify that it works properly
$ kubectl exec -ti -n kube-system calicoctl -- /calicoctl get nodes
NAME
tiny-calico-master-88-1.k8s.tcinternal
tiny-calico-worker-88-11.k8s.tcinternal
tiny-calico-worker-88-12.k8s.tcinternal
$ kubectl exec -ti -n kube-system calicoctl -- /calicoctl get profiles -o wide
NAME LABELS
projectcalico-default-allow
kns.default pcns.kubernetes.io/metadata.name=default,pcns.projectcalico.org/name=default
kns.kube-node-lease pcns.kubernetes.io/metadata.name=kube-node-lease,pcns.projectcalico.org/name=kube-node-lease
kns.kube-public pcns.kubernetes.io/metadata.name=kube-public,pcns.projectcalico.org/name=kube-public
kns.kube-system pcns.kubernetes.io/metadata.name=kube-system,pcns.projectcalico.org/name=kube-system
...a large amount of output omitted here...
# check the ipam allocation
$ calicoctl ipam show
+----------+---------------+-----------+------------+--------------+
| GROUPING | CIDR | IPS TOTAL | IPS IN USE | IPS FREE |
+----------+---------------+-----------+------------+--------------+
| IP Pool | 10.88.64.0/18 | 16384 | 2 (0%) | 16382 (100%) |
+----------+---------------+-----------+------------+--------------+
# for convenience, set an alias in bashrc
cat >> ~/.bashrc <<EOF
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
EOF
The full version of the calicoctl command can be found in the official documentation.
5.4 Binary installation of calicoctl
Deploying calicoctl as a pod is easy, but one drawback is that the calicoctl node command cannot be used, because it needs access to parts of the host filesystem. So we also deploy calicoctl as a binary.
# just download the binary and it is ready to use
$ cd /usr/local/bin/
$ curl -L https://github.com/projectcalico/calico/releases/download/v3.22.2/calicoctl-linux-amd64 -o calicoctl
$ chmod +x ./calicoctl
The binary calicoctl reads its configuration file first and falls back to environment variables only when no configuration file is found. Here we configure /etc/calico/calicoctl.cfg directly; note that the etcd certificates are exactly the same certificate files used when deploying calico earlier.
# create the calicoctl configuration file
$ mkdir /etc/calico
$ cat /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
datastoreType: etcdv3
etcdEndpoints: "https://10.31.88.1:2379"
etcdCACert: |
-----BEGIN CERTIFICATE-----
MIIC9TCCAd2gAwIBAgIBADANBgkqhkiG9w0BAQsFADASMRAwDgYDVQQDEwdldGNk
LWNhMB4XDTIyMDUwNjA1MTg1OVoXDTMyMDUwMzA1MTg1OVowEjEQMA4GA1UEAxMH
ZXRjZC1jYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANFFqq4Mk3DE
6UW581xnZPFrHqQWlGr/KptEywKH56Bp24OAnDIAkSz7KAMrJzL+OiVsj9YJV59F
9qH/YzU+bppctDnfk1yCuavkcXgLSd9O6EBhM2LkGtF9AdWMnFw9ui2jNhFC/QXj
zCvq0I1c9o9gulbFmSHwIw2GLQd7ogO+PpfLsubRscJdKkCUWVFV0mb8opccmXoF
vXynRX0VW3wpN+v66bD+HTdMSNK1JljfBngh9LAkibjUx7bMrHvu/GOalNCSWrtG
lss/hhWkzwV7Y7AIXgvxxcmDdfswe5lUYLvW2CP4e+tXfB3i2wg10fErc8z63lix
v9BWkIIalScCAwEAAaNWMFQwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMB
Af8wHQYDVR0OBBYEFH49PpnJYxze8aq0PVwgpY4Fo6djMBIGA1UdEQQLMAmCB2V0
Y2QtY2EwDQYJKoZIhvcNAQELBQADggEBAAGL6KwN80YEK6gZcL+7RI9bkMKk7UWW
V48154CgN8w9GKvNTm4l0tZKvsWCnR61hiJtLQcG0S8HYHAvL1DBjOXw11bNilLy
vaVM+wqOOIxPsXLU//F46z3V9z1uV0v/yLLlg320c0wtG+OLZZIn8O+yUhtOHM09
K0JSAF2/KhtNxhrc0owCTOzS+DKsb0w1SzQmS0t/tflyLfc3oJZ/2V4Tqd72j7iI
cDBa36lGqtUBf8MXu+Xza0cdhy/f19AqkeM2fe+/DrbzR4zDVmZ7l4dqYGLbKHYo
XaLn8bSToYQq4dlA/oAlyyH0ekB5v0DyYiHwlqgZgiu4qcR3Gw8azVk=
-----END CERTIFICATE-----
etcdCert: |
-----BEGIN CERTIFICATE-----
MIIDgzCCAmugAwIBAgIIePiBSOdMGwcwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
AxMHZXRjZC1jYTAeFw0yMjA1MDYwNTE4NTlaFw0yMzA1MDYwNTE4NTlaMDExLzAt
BgNVBAMTJnRpbnktY2FsaWNvLW1hc3Rlci04OC0xLms4cy50Y2ludGVybmFsMIIB
IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqZM/jBrdXLR3ctee7LVJhGSA
4usg/JQXGyOAd52OkkOLYwn3fvwqeo0Z0cX0q4mqaF0cnrPYc4eExX/3fJpF3Fxy
D6vdpEZ/FrnzCAkibEYtK/UVhTKuV7n/VdbjFPGl8CpppuGVs6o+4NFZxffW7em0
8m/FK/7SDkV2qXCyG94kOaUCeDEgdBKE3cPCZQ4maFuwXi08bYs2CiTfbfa4dsT5
3yzaoQVX9BaBqE9IGmsHDFuxp1X8gkJXs+7wwHQX39o1oXmci6T4IVxVHA5GRbTv
pCDG5Wye7QqKgnxO1KRF42FKs1Nif7UJ0iR35Ydpa7cat7Fr0M7l+rZLCDTJgwID
AQABo4G9MIG6MA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYI
KwYBBQUHAwIwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBR+PT6ZyWMc3vGqtD1c
IKWOBaOnYzBaBgNVHREEUzBRgglsb2NhbGhvc3SCJnRpbnktY2FsaWNvLW1hc3Rl
ci04OC0xLms4cy50Y2ludGVybmFshwQKH1gBhwR/AAABhxAAAAAAAAAAAAAAAAAA
AAABMA0GCSqGSIb3DQEBCwUAA4IBAQC+pyH14/+US5Svz04Vi8QIduY/DVx1HOQq
hfrIZKOZCH2iKU7fZ4o9QpQZh7D9B8hgpXM6dNuFpd98c0MVPr+LesShu4BHVjHl
gPvUWEVB2XD5x51HqnMV2OkhMKooyAUIzI0P0YKN29SFEyJGD1XDu4UtqvBADqf7
COvAuqj4VbRgF/iQwNstjqZ47rSzvyp6rIwqFoHRP+Zi+8KL1qmozGjI3+H+TZFM
Gv3b5DRx2pmfY+kGVLO5bjl3zxylRPjCDHaRlQUWiOYSWS8OHYRCBZuSLvW4tht0
JjWjUAh4hF8+3lyNrfx8moz7tfm5SG2q01pO1vjkhrhxhINAwaac
-----END CERTIFICATE-----
etcdKey: |
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAqZM/jBrdXLR3ctee7LVJhGSA4usg/JQXGyOAd52OkkOLYwn3
fvwqeo0Z0cX0q4mqaF0cnrPYc4eExX/3fJpF3FxyD6vdpEZ/FrnzCAkibEYtK/UV
hTKuV7n/VdbjFPGl8CpppuGVs6o+4NFZxffW7em08m/FK/7SDkV2qXCyG94kOaUC
eDEgdBKE3cPCZQ4maFuwXi08bYs2CiTfbfa4dsT53yzaoQVX9BaBqE9IGmsHDFux
p1X8gkJXs+7wwHQX39o1oXmci6T4IVxVHA5GRbTvpCDG5Wye7QqKgnxO1KRF42FK
s1Nif7UJ0iR35Ydpa7cat7Fr0M7l+rZLCDTJgwIDAQABAoIBAE1gMw7q8zbp4dc1
K/82eWU/ts/UGikmKaTofiYWboeu6ls2oQgAaCGjYLSnbw0Ws/sLAZQo3AtbOuoj
ifoBKv9x71nXQjtDL5pfHtX71QkyvEniev9cMNE2vZudgeB8owsDT1ImfPiOJkLP
Q/dhL2E/0qEM/xskGxUH/S0zjxHHfPZZsYODhkVPWc6Z+XEDll48fRCFn4/48FTN
9GbRvo7dv34EHmNYA20K4DMHbZUdrPqSZpKWzAPJXnDlgZbpvUeAYOJxqZHQtCm1
zbSOyM1Ql6K0Ayro0L5GAzap+0yGuk79OWiPnEsdPneVsATKG7dT7RZIL/INrOqQ
0wjUmQECgYEA02OHdT1K5Au6wtiTqKD99WweltnvFd4C/Z3dobEj8M8qN6uiKCca
PievWahnxAlJEah3RiOgtarwA+0E/Jgsw99Qutp5BR/XdD3llTNczkPkg/RkWpve
2f/4DlZQrxuIem7UNLl+5BacfmF691DQQoX2RoIkvQxYJGTUNXvrSUkCgYEAzVyz
mvN+dvSwzAlm0gkfVP5Ez3DFESUrWd0FR2v1HR6qHQy/dkgkkic6zRGCJtGeT5V7
N0kbVSHsz+wi6aQkFy0Sp0TbgZzjPhSwNtk+2JsBRvMp0CYczgrfyvWuAQ3gbXGc
N8IkcZSSOv8TuigCnnYf2Xaz8LM50AivScnb6GsCgYEAyq4ScgnLpa3NawbnRPbf
qRH6nl7lC01sBqn3mBHVSQ4JB4msF92uHsxEJ639mAvjIGgrvHdqnuT/7nOypVJv
EXsr14ykHpKyLQUv/Idbw3V7RD3ufqYW3WS8/VorUEoQ6HsdQlRc4ur/L3ndwgWd
OTtir6YW/aA5XuPCSGnBZekCgYB6VtlgW+Jg91BDnO41/d0+guN3ONUNa7kxpau5
aqTxHg11lNySmFPBBcHP3LhOa94FxyVKQDEaPEWZcDE0QuaFMALGxwyFYHM3zpdT
dYQtAdp26/Fi4PGUBYJgpI9ubVffmyjXRr7zMvESWFbmNWOqBvDeWgrEP+EW/7V9
HdX11QKBgE1czchlibgQ/bhAl8BatKRr1X/UHvblWhmyApudOfFeGOILR6u/lWvY
SS+Rg0y8nnZ4hTRSXbd/sSEsUJcSmoBc1TivWzl32eVuqe9CcrUZY0JSLtoj1KiP
adRcCZtVDETXbW326Hvgz+MnqrIgzx+Zgy4tNtoAAbTv0q83j45I
-----END RSA PRIVATE KEY-----
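As mentioned above, the binary calicoctl falls back to environment variables when no configuration file is present. A minimal sketch of that alternative, reusing the variable names from the calicoctl pod manifest above and the same certificate files (the exact values are assumptions based on this cluster):

export DATASTORE_TYPE=etcdv3
export ETCD_ENDPOINTS="https://10.31.88.1:2379"
export ETCD_CA_CERT_FILE="/etc/kubernetes/pki/etcd/ca.crt"
export ETCD_CERT_FILE="/etc/kubernetes/pki/etcd/server.crt"
export ETCD_KEY_FILE="/etc/kubernetes/pki/etcd/server.key"
calicoctl get nodes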
After the configuration is complete we check the results.
$ calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 10.31.88.11 | node-to-node mesh | up | 08:26:30 | Established |
| 10.31.88.12 | node-to-node mesh | up | 08:26:30 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
$ calicoctl get nodes
NAME
tiny-calico-master-88-1.k8s.tcinternal
tiny-calico-worker-88-11.k8s.tcinternal
tiny-calico-worker-88-12.k8s.tcinternal
$ calicoctl ipam show
+----------+---------------+-----------+------------+--------------+
| GROUPING | CIDR | IPS TOTAL | IPS IN USE | IPS FREE |
+----------+---------------+-----------+------------+--------------+
| IP Pool | 10.88.64.0/18 | 16384 | 2 (0%) | 16382 (100%) |
+----------+---------------+-----------+------------+--------------+
6. Deploy test cases
After the cluster is deployed, we deploy an nginx instance in the k8s cluster to test that everything works. First we create a namespace named nginx-quic, then a deployment named nginx-quic-deployment in that namespace to run the pods, and finally a service to expose them; for testing purposes we start by exposing the port with a NodePort.
$ cat nginx-quic.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-quic-deployment
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-quic
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-quic
    spec:
      containers:
      - name: nginx-quic
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-quic-service
  namespace: nginx-quic
spec:
  selector:
    app: nginx-quic
  ports:
  - protocol: TCP
    port: 8080      # match for service access port
    targetPort: 80  # match for pod access port
    nodePort: 30088 # match for external access port
  type: NodePort
Once the deployment is complete we check the status directly.
# deploy it directly
$ kubectl apply -f nginx-quic.yaml
namespace/nginx-quic created
deployment.apps/nginx-quic-deployment created
service/nginx-quic-service created
# check the status of the deployment
$ kubectl get deployment -o wide -n nginx-quic
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-quic-deployment 4/4 4 4 55s nginx-quic tinychen777/nginx-quic:latest app=nginx-quic
# check the status of the service
$ kubectl get service -o wide -n nginx-quic
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx-quic-service NodePort 10.88.52.168 <none> 8080:30088/TCP 66s app=nginx-quic
# check the status of the pods
$ kubectl get pods -o wide -n nginx-quic
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-quic-deployment-7457f4d579-24q9z 1/1 Running 0 75s 10.88.120.72 tiny-calico-worker-88-12.k8s.tcinternal <none> <none>
nginx-quic-deployment-7457f4d579-4svv9 1/1 Running 0 75s 10.88.84.68 tiny-calico-worker-88-11.k8s.tcinternal <none> <none>
nginx-quic-deployment-7457f4d579-btrjj 1/1 Running 0 75s 10.88.120.71 tiny-calico-worker-88-12.k8s.tcinternal <none> <none>
nginx-quic-deployment-7457f4d579-lvh6x 1/1 Running 0 75s 10.88.84.69 tiny-calico-worker-88-11.k8s.tcinternal <none> <none>
# check the IPVS rules
$ ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.17.0.1:30088 rr
-> 10.88.84.68:80 Masq 1 0 0
-> 10.88.84.69:80 Masq 1 0 0
-> 10.88.120.71:80 Masq 1 0 0
-> 10.88.120.72:80 Masq 1 0 0
TCP 10.31.88.1:30088 rr
-> 10.88.84.68:80 Masq 1 0 0
-> 10.88.84.69:80 Masq 1 0 0
-> 10.88.120.71:80 Masq 1 0 0
-> 10.88.120.72:80 Masq 1 0 0
TCP 10.88.52.168:8080 rr
-> 10.88.84.68:80 Masq 1 0 0
-> 10.88.84.69:80 Masq 1 0 0
-> 10.88.120.71:80 Masq 1 0 0
-> 10.88.120.72:80 Masq 1 0 0
Finally we test it: by default, this nginx-quic image returns the client IP and port as seen inside the nginx container.
# first, test from inside the cluster
# access a pod directly
$ curl 10.88.84.68:80
10.31.88.1:34612
# access the service's ClusterIP directly; the request is forwarded to a pod
$ curl 10.88.52.168:8080
10.31.88.1:58978
# access the nodeport directly; the request is forwarded to a pod without going through the ClusterIP
$ curl 10.31.88.1:30088
10.31.88.1:56595
# next, test from outside the cluster
# access the nodeport on each of the three nodes; the request is forwarded to a pod without going through the ClusterIP
# because externalTrafficPolicy defaults to Cluster, the IP nginx sees is the IP of the node we accessed, not the client IP
$ curl 10.31.88.1:30088
10.31.88.1:27851
$ curl 10.31.88.11:30088
10.31.88.11:16540
$ curl 10.31.88.12:30088
10.31.88.12:5767
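As an optional follow-up to this test (an addition, not part of the original article), the service's externalTrafficPolicy can be switched from Cluster to Local so that nginx sees the real client IP on NodePort access; note that with Local, a node only answers on the NodePort if it actually hosts one of the pods.

# patch the service to preserve the client source IP
kubectl patch service nginx-quic-service -n nginx-quic -p '{"spec":{"externalTrafficPolicy":"Local"}}'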