A pod running in a Kubernetes cluster is easy to access from within the cluster, most simply through the pod's IP or through the corresponding Service (svc). From outside the cluster, however, the pod IP of a flannel-based Kubernetes cluster is not reachable, because it is an internal address.
To solve this problem, Kubernetes provides several methods, described below.
hostNetwork: true
When hostNetwork is true, the container uses the network of its host node, so the container's services can be accessed from outside the cluster via node IP + port, as long as you know which node the container is running on.
After the pod starts, you can see that the pod's IP address is the same as that of node optiplex-2; requesting port 80 at node optiplex-2's IP address reaches the HTTP service of the nginx pod.
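A minimal sketch of such a pod, assuming nginx as the workload and using the node name optiplex-2 from the text (the nodeName pin is an assumption made for the demo):

```yaml
# Hypothetical manifest for illustration only.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true     # the pod shares the host's network stack
  nodeName: optiplex-2  # assumption: pinned to this node so its IP is known
  containers:
  - name: nginx
    image: nginx
```

With this manifest, `kubectl get pod -o wide` would show the pod's IP equal to optiplex-2's IP.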
The advantage of hostNetwork is that it uses the host's network directly, so the Pod can be reached wherever the host is reachable; however, the disadvantages are also obvious.
- Ease of use: when a Pod drifts to another node, clients must switch to the new node's IP. A common workaround is to bind Pods to specific nodes and run keepalived on those nodes to float a VIP, so that clients can keep using VIP + port.
- Ease of use: port conflicts may occur between Pods, causing some Pods to fail to schedule.
- Security: the Pod can directly observe the host's network.
hostPort
The effect of hostPort is similar to hostNetwork: both allow the Pod's services to be accessed via the IP address of the node where the Pod is running plus the exposed port.
After the pod starts, you can see that the pod's IP address is an internal flannel address, different from the host node's IP; like hostNetwork, the service can still be accessed via node IP + hostPort.
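A minimal sketch of a hostPort pod, again assuming nginx as the workload:

```yaml
# Hypothetical manifest for illustration only.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 80  # exposes port 80 on whichever node runs the pod
```

Note that, unlike hostNetwork, the pod itself keeps its flannel-assigned IP.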
The principle of hostPort differs from hostNetwork: hostPort is actually a full NAT implemented by a series of iptables rules.
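A simplified sketch of what those rules look like; the exact chain names depend on the CNI portmap plugin version, and the pod IP 10.244.2.15 here is a hypothetical flannel address:

```
# Illustrative iptables-save fragment, not taken from a real node.
-A PREROUTING -m addrtype --dst-type LOCAL -j CNI-HOSTPORT-DNAT
# DNAT: traffic to node:80 is rewritten to the pod's internal address
-A CNI-HOSTPORT-DNAT -p tcp --dport 80 -j DNAT --to-destination 10.244.2.15:80
```

Every packet to the host port traverses these rules before reaching the Pod, which is the source of the performance cost mentioned below.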
The advantages and disadvantages of hostPort are similar to those of hostNetwork, because both consume the host's network resources. One advantage of hostPort over hostNetwork is that it does not expose the host's network to the Pod; on the other hand, its performance is worse than hostNetwork, because traffic must be forwarded by iptables before reaching the Pod.
NodePort
Unlike hostPort and hostNetwork, which are merely Pod configurations, NodePort is a type of Service. It opens a port number on every host node, so the Pod's services can be accessed from outside the cluster via any node's IP + nodePort.
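A minimal sketch of such a Service; the selector and the nodePort value 30080 are assumptions made for illustration:

```yaml
# Hypothetical manifest for illustration only.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx      # assumption: the pods carry this label
  ports:
  - port: 80        # service port inside the cluster
    targetPort: 80  # container port
    nodePort: 30080 # assumption: chosen from the default 30000-32767 range
```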
The nodePort field in the svc configuration is the port opened on the hosts for accessing the service. It can be specified in the configuration file (as long as it does not conflict with other NodePort-type svcs), or left unset and assigned by Kubernetes (by default from the 30000-32767 range).
After creating the above Pod and Service, viewing the pod and svc information shows that we can access the pod's service via any host's IP address + nodePort.
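A hypothetical check from outside the cluster; any node's IP works, not only the node running the pod (the port 30080 is an assumed nodePort value):

```shell
# Illustrative only; <node-ip> is any node's address in your cluster.
curl http://<node-ip>:30080/
```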
The NodePort type of svc is implemented by kube-proxy via iptables; the corresponding iptables rules are replicated identically on all nodes.
LoadBalancer
LoadBalancer is also a Service type. It usually requires the support of external devices, such as the load-balancing services of public clouds like AWS and Azure.
In my environment, LoadBalancer is implemented through MetalLB.
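A minimal sketch of the Service; except for the type, it matches the NodePort example above (the selector is an assumption):

```yaml
# Hypothetical manifest for illustration only.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer  # MetalLB assigns the EXTERNAL-IP for this svc
  selector:
    app: nginx        # assumption: the pods carry this label
  ports:
  - port: 80
    targetPort: 80
```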
The only difference from the NodePort example is the Service type. After creation, checking the status of the pod and svc shows that, compared with the NodePort type, service nginx has an additional EXTERNAL-IP field; the 192.168.9.0 address is the external IP that MetalLB assigned to the LoadBalancer-type svc. Users outside the cluster can access the nginx service through this address.
Ingress
The NodePort and LoadBalancer types of Service work at layer four, while Ingress works at layer seven.
In the Kubernetes design, Ingress is only a concept; Kubernetes does not provide an implementation directly. Cluster administrators need to deploy an ingress controller, which can be implemented based on nginx, Traefik, etc.
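A minimal sketch of an Ingress that routes nginx.example.com to service nginx, using the networking.k8s.io/v1 API (the path and pathType are assumptions):

```yaml
# Hypothetical manifest for illustration only.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx.example.com   # HTTP requests with this Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx       # are routed to this service's endpoints
            port:
              number: 80
```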
Such an Ingress, named nginx, is bound to service nginx, with the domain name nginx.example.com.
Taking the nginx-based ingress controller as an example: after the above Ingress is created, the controller watches service nginx, obtains its endpoints, and generates and updates the nginx configuration accordingly.
When a request from outside the cluster arrives at the ingress controller, it forwards HTTP requests whose Host is nginx.example.com to those endpoints, completing layer-seven load balancing; clients outside the cluster can thus reach the Pod's services.
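A hypothetical request from outside the cluster; `<controller-ip>` stands for whatever address the ingress controller accepts traffic on:

```shell
# Illustrative only; routing is selected by the Host header.
curl -H "Host: nginx.example.com" http://<controller-ip>/
```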
Of course, to be able to accept traffic from outside the cluster, the ingress controller itself needs to be deployed using hostPort or hostNetwork.
Pod IP global reachability
When the Kubernetes network solution is Calico or Contiv, it is also possible to make Pod IPs globally reachable, allowing direct access from outside the cluster.
The principle is that each host establishes BGP neighborship with its uplink switch; when a Pod starts, the host's BGP speaker advertises a route for it to the uplink switch, which in turn exchanges routes with the aggregation switch, so that traffic can be directed to the Pod.
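In Calico, for instance, such peering can be declared with a BGPPeer resource; the peer IP and AS number below are placeholders for your switch's actual values:

```yaml
# Hypothetical Calico BGPPeer for illustration only.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: uplink-switch
spec:
  peerIP: 192.168.9.254  # assumption: the uplink switch's address
  asNumber: 64512        # assumption: the switch's AS number
```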
Summary
All of the above approaches can expose Pod services outside the cluster; choose among them according to your actual needs and environment.