My web application is running as a Kubernetes pod behind an nginx reverse proxy for SSL. Both the proxy and my application use Kubernetes services for load balancing (as described in the Kubernetes documentation). The problem is that my request logs only ever show internal cluster IPs instead of the actual client addresses. Is there a way to get the original client source IP through to my application?
As of Kubernetes 1.1, there is an iptables-based kube-proxy that fixes this issue in some cases. It's disabled by default; see this post for instructions on how to enable it. In summary:
for node in $(kubectl get nodes -o name); do kubectl annotate $node net.beta.kubernetes.io/proxy-mode=iptables; done
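To sanity-check that a node picked up the annotation (a quick check of my own, not from the post):

# the annotation should appear in the node's metadata
kubectl describe node <node-name> | grep proxy-mode

Depending on your setup, kube-proxy on each node may need a restart before the new mode takes effect.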
For Pod-to-Pod traffic, the iptables kube-proxy lets you see the true source IP at the destination pod.
However, if your Service is forwarding traffic from outside the cluster (e.g. a NodePort or LoadBalancer service), then we still have to replace (SNAT) the source IP. This is because we are doing DNAT on the incoming traffic to route it to the service Pod (potentially on another node), so the DNATing node needs to insert itself in the return path to be able to un-DNAT the response.
For Kubernetes 1.7+, setting service.spec.externalTrafficPolicy to Local will resolve this.
More information here: Kubernetes docs on source IP (https://kubernetes.io/docs/tutorials/services/source-ip/).
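For illustration, a minimal sketch of a Service with that field set (the name, selector, and ports are placeholders, not from the question):

apiVersion: v1
kind: Service
metadata:
  name: my-app                     # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserve the client source IP
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443

Be aware of the trade-off: with Local, traffic is only routed to nodes that have a ready pod for the service and is not re-balanced across nodes, so load can be uneven.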
You can get kube-proxy out of the loop entirely in 2 ways:
Use an Ingress to configure your nginx to balance based on source IP and send traffic straight to your endpoints (https://github.com/kubernetes/contrib/tree/master/ingress/controllers#ingress-controllers)
Deploy the haproxy service-loadbalancer (https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L51) and set the balance annotation on the service so it uses "source"; a sketch follows below.
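For the second option, here is what that might look like (the exact annotation key is defined in the linked service_loadbalancer.go; serviceloadbalancer/lb.algorithm is my assumption, so verify it against your version):

apiVersion: v1
kind: Service
metadata:
  name: my-app    # placeholder name
  annotations:
    # assumed annotation key; check service_loadbalancer.go for the one your version uses
    serviceloadbalancer/lb.algorithm: source
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080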
As of 1.5, if you are running in GCE (by extension GKE) or AWS, you simply need to add an annotation to your Service to make HTTP source preservation work.
...
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal
...
It basically exposes the service directly via NodePorts instead of going through the proxy: by exposing a health probe on each node, the load balancer can determine which nodes to route traffic to.
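The health probe it allocates lives on a dedicated node port. On GA versions (1.7+) it shows up as spec.healthCheckNodePort on the Service, which you can read back like this (my-app is a placeholder; on 1.5 the value surfaced as a beta annotation instead):

kubectl get svc my-app -o jsonpath='{.spec.healthCheckNodePort}'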
In 1.7, this config has become GA, so you can set "externalTrafficPolicy": "Local" on your Service spec.
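A quick way to flip this on an existing Service from the command line (my-app is a placeholder name):

# strategic-merge-patch the field onto the live Service
kubectl patch svc my-app -p '{"spec":{"externalTrafficPolicy":"Local"}}'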
Right now, no.
Services use kube-proxy to distribute traffic to their backends. Kube-proxy uses iptables to route the service IP to a local port where it is listening, and then opens a new connection to one of the backends. The internal IP you are seeing is the IP:port of kube-proxy running on one of your nodes.
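You can see this mechanism on a node by listing the NAT rules kube-proxy installs (the chain name below is what the userspace proxy used at the time; newer iptables-mode proxies use chains like KUBE-SERVICES instead):

# list kube-proxy's service NAT rules
sudo iptables -t nat -L KUBE-PORTALS-CONTAINER -n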
An iptables-only kube-proxy is in the works. That would preserve the original source IP.
For non-HTTP requests (HTTPS, gRPC, etc.) this is scheduled to be supported in Kubernetes 1.4. See: https://github.com/kubernetes/features/issues/27