Some requests fail during autoscaling in Kubernetes


Question


I set up a k8s cluster on microk8s and ported my application to it. I also added a horizontal pod autoscaler which adds pods based on CPU load. The autoscaler works fine: it adds pods when the load goes beyond the target, and when I remove the load it kills the pods again after some time.
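For reference, an autoscaler of the kind described above can be expressed as a HorizontalPodAutoscaler manifest along these lines. This is a minimal sketch: the Deployment name "compiler", the CPU target, and the replica bounds are illustrative assumptions, not values taken from my setup.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: compiler-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: compiler              # hypothetical Deployment being scaled
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # scale out when average CPU exceeds 50%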

The problem is that at the exact moments the autoscaler is creating new pods, some of the requests fail:

POST Response Code :  200
POST Response Code :  200
POST Response Code :  200
POST Response Code :  200
POST Response Code :  200
POST Response Code :  502
java.io.IOException: Server returned HTTP response code: 502 for URL: http://10.203.101.61/gateway/compile
POST Response Code :  502
java.io.IOException: Server returned HTTP response code: 502 for URL: http://10.203.101.61/gateway/compile
POST Response Code :  200
POST Response Code :  502
java.io.IOException: Server returned HTTP response code: 502 for URL: http://10.203.101.61/gateway/compile
POST Response Code :  502
java.io.IOException: Server returned HTTP response code: 502 for URL: http://10.203.101.61/gateway/compile
POST Response Code :  200
POST Response Code :  200
POST Response Code :  200
POST Response Code :  200
POST Response Code :  200
POST Response Code :  200
POST Response Code :  200
POST Response Code :  200

I would like to know what the reason for this is and how I can fix it.

Update: I think it is better to give you more information about my setup.

The traffic comes from outside the cluster, but both the k8s node and the program that generates the requests run on the same machine, so there is no network problem. There is a custom nginx component that does no load balancing; it just acts as a reverse proxy and forwards the traffic to the respective services.

I ran another test which gave me more information. I ran the same benchmark, but this time, instead of sending the requests to the reverse proxy (nginx), I used the IP address of the specific service, and I had no failed requests while the autoscaler did its job and launched multiple pods. So I am not sure whether the problem is nginx or k8s.


Answer 1:


About your question:

I am not sure if the problem is Nginx or k8s?

According to the ingress-nginx docs:

The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.

So I believe the problem is in nginx, which does not make use of all Kubernetes features (e.g. kube-proxy) and sends requests to pods before they are completely ready.
However, this problem was apparently fixed in 0.23.0 (Feb 2019), so you should check your version.
Personally, I experienced fewer issues after switching from ingress-nginx to Ambassador, which by default forwards requests to Services (so Kubernetes is in charge of load balancing and sends the request to the proper pod).
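For illustration, with Ambassador a route to a Service is declared with a Mapping resource, roughly like the sketch below. This assumes Ambassador's getambassador.io/v2 CRDs and a hypothetical Service named gateway listening on port 80; adapt the names to your cluster.

apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: gateway-mapping         # hypothetical name
spec:
  prefix: /gateway/             # external path to match
  service: gateway:80           # Kubernetes Service that receives the traffic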




Answer 2:


When the new pods are spawned, Kubernetes immediately starts to redirect traffic to them. However, usually, it takes a bit of time for the pod to boot and become operational (ready).

To prevent this from happening, you can define a readiness probe for your pods. K8s will periodically call the pod on the readiness endpoint you have provided to determine whether the pod is functional and ready to accept requests. K8s won't redirect traffic to the pod until the readiness probe returns a successful result, depending on the type of probe (see the "Types of Probes" section in the documentation).
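For example, an HTTP readiness probe on a Deployment could look like the following minimal sketch. The container name, image, port, health path, and timings are assumptions for illustration; point the probe at whatever endpoint reliably reports that your application has finished booting.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: compiler                # hypothetical Deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: compiler
  template:
    metadata:
      labels:
        app: compiler
    spec:
      containers:
      - name: compiler
        image: example/compiler:latest   # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz               # assumed health endpoint
            port: 8080
          initialDelaySeconds: 5         # give the app time to boot before the first probe
          periodSeconds: 5               # probe every 5 seconds
          failureThreshold: 3            # mark the pod not-ready after 3 consecutive failures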



Source: https://stackoverflow.com/questions/56899429/some-requests-fails-during-autoscaling-in-kubernetes
