Normally, when a client inside the cluster connects to a Service, the Pod backing the Service can see the client's IP address. When the connection arrives through a node port, however, the packet's source IP is rewritten by Source Network Address Translation (SNAT), so the backend Pod cannot see the actual client IP. This is a problem for some applications: for example, the nginx request log cannot record the real IP of the accessing client.
Take our application below as an example.
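A minimal sketch of such a setup is shown below. The Deployment name, labels, and replica count are assumptions for illustration, not the original manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort      # a random node port is allocated when none is specified
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```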
You can see that the nginx Service is automatically assigned NodePort 32761 after creation.
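The Service, the Pod placement, and the access logs can be inspected with commands like the following (the `app=nginx` label and the placeholder master IP are assumptions based on the sketch above):

```shell
# Check the allocated NodePort and which nodes the Pods landed on
kubectl get svc nginx
kubectl get pods -l app=nginx -o wide

# Access the service through the master node's NodePort, then check the logs
curl http://<master-node-ip>:32761
kubectl logs -l app=nginx
```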
We can see that the three Pods are scheduled onto three different nodes, and we can access our service through the master node's NodePort. Since only the master node has access to the external network in this setup, the client IP recorded in the nginx Pod's log is 10.151.30.11, which is actually the master node's internal IP, not the real browser IP address we expect.
This is because there is no corresponding Pod on the master node, so when we access the application through the master node, the request needs additional network hops to reach Pods on other nodes, and the packets are SNATed along the way, which is why the Pods see the master node's IP. We can set `externalTrafficPolicy` on the Service to reduce the number of network hops.
If `externalTrafficPolicy=Local` is configured in the Service and an external connection comes in through the Service's node port, the Service proxies only to Pods running on the local node. For example, after we set this field, we can no longer access the application through the master node's NodePort, because no Pod is running on the master node, so we need to make sure the load balancer only forwards connections to nodes that have at least one Pod.
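The Service sketch with this policy applied might look like the following (the name and selector are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only proxy to Pods on the node that received the connection
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```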
However, it should be noted that this parameter has a drawback. Normally, requests are distributed evenly across all Pods, but with this configuration that may no longer be the case. For example, suppose we have 3 Pods running on two nodes: Node A runs one Pod and Node B runs two. If the load balancer distributes connections evenly between the two nodes, the Pod on Node A receives 50% of all requests, while each of the two Pods on Node B receives only 25%.
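The skew described above can be worked out with a small sketch (the function name and the even per-node split are assumptions, not part of Kubernetes itself):

```python
# Sketch: per-Pod request share when a load balancer splits traffic evenly
# across nodes and externalTrafficPolicy: Local keeps each connection on
# the node that received it.
def pod_shares(pods_per_node):
    """pods_per_node: list of Pod counts, one entry per node.
    Returns one share per Pod, in node order."""
    n_nodes = len(pods_per_node)
    shares = []
    for pods in pods_per_node:
        node_share = 1 / n_nodes                   # LB splits evenly across nodes
        shares.extend([node_share / pods] * pods)  # split again among local Pods
    return shares

# Node A: 1 Pod, Node B: 2 Pods
print(pod_shares([1, 2]))  # → [0.5, 0.25, 0.25]
```

With a uniform Pod spread (one Pod per node), the shares come out equal again, which is why this drawback only matters when Pods are distributed unevenly across nodes.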
With `externalTrafficPolicy: Local` set, the node receiving the request and the target Pod are on the same node, so there is no additional network hop and no SNAT is performed, and we can therefore get the correct client IP. To verify this, we pin the Pods to the master node as shown below.
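One way to pin the Pods is a `nodeSelector` in the Deployment's Pod template; the hostname value below is a placeholder for the actual master node name, not taken from the original:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: <master-node-name>   # replace with the master's node name
```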
After updating the Service and accessing it again via the NodePort, you can see that the correct client IP address is now obtained.