kube-proxy

Kube-proxy or ELB “delaying” packets of HTTP requests

三世轮回 submitted on 2021-02-17 21:55:21
Question: We're running a web API app on Kubernetes (1.9.3) in AWS (set up with kOps). The app is a Deployment represented by a Service (type: LoadBalancer), which is actually an ELB (v1) on AWS. This generally works, except that some packets (fragments of HTTP requests) are "delayed" somewhere between the client <-> app container. (This happens with both HTTP and HTTPS, which terminates on the ELB.) From the node side: (note: almost all packets on the server side arrive duplicated 3 times) We use keep-alive, so the TCP…
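
For context (not part of the original post), a Service of type LoadBalancer on AWS as described there would look roughly like the sketch below; the name, ports and the externalTrafficPolicy value are illustrative assumptions, the last of which keeps a connection on the node that received it instead of taking a second kube-proxy hop (and its SNAT) to another node.

    apiVersion: v1
    kind: Service
    metadata:
      name: web-api                  # hypothetical name
    spec:
      type: LoadBalancer             # provisions an AWS ELB through the cloud provider
      selector:
        app: web-api
      ports:
        - port: 443
          targetPort: 8080
      externalTrafficPolicy: Local   # skip the extra inter-node kube-proxy/SNAT hop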

Kubernetes pods running on different hosts are not able to establish a TCP connection

十年热恋 submitted on 2021-02-11 15:17:50
Question: I have a Kubernetes 1.20.1 cluster with a single master and a single worker, configured in ipvs mode and using the Calico CNI calico/cni:v3.16.1. The cluster runs on RHEL 8 with kernel 4.18.0-240.10, with firewalld and SELinux disabled. I'm running one netshoot pod (10.1.30.130) on the master and another pod (10.3.65.132) on the worker node. I can ping both pods in both directions, but if I run the nc command in web server mode, the connection does not work. I also tried running nginx on both servers, and I'm not able to get HTTP traffic one…
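
A reproduction of the nc test described above might look like the following (not taken from the post; the pod names and port are assumptions, the pod IP is the one quoted in the question):

    # listen inside the pod running on the worker (10.3.65.132)
    kubectl exec -it netshoot-worker -- nc -l -p 8080

    # from the pod on the master (10.1.30.130), try to reach it
    kubectl exec -it netshoot-master -- nc -vz 10.3.65.132 8080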

Load distribution: All HTTP requests are getting redirected to a single pod in a k8s cluster

荒凉一梦 submitted on 2020-12-15 06:40:14
Question: I have created a very simple Spring Boot application with only one REST service. This app is converted into a Docker image ("springdockerimage:1") and deployed in the Kubernetes cluster with 3 replicas. The contents of my Deployment definition are as follows:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: springapp
      labels:
        app: distributiondemo
    spec:
      selector:
        matchLabels:
          app: distributiondemo
      replicas: 3
      template:
        metadata:
          labels:
            app: distributiondemo
        spec:
          containers:
          - name: spring…
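
The excerpt cuts off before any Service is shown; purely for illustration, a minimal ClusterIP Service in front of those three replicas could look like this (the name and ports are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: springapp-svc         # hypothetical name
    spec:
      selector:
        app: distributiondemo     # matches the Deployment's pod labels
      ports:
        - port: 80
          targetPort: 8080        # assumed Spring Boot container port

Note that kube-proxy balances per connection rather than per request, so a client reusing one keep-alive connection will keep hitting the same pod.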

Does kube-router's IPVS least-connection algorithm load-balance across pods on the same node or on different nodes?

瘦欲@ submitted on 2020-12-13 07:03:54
Question: The application I am working on runs as a Deployment in a Kubernetes cluster. The pods created for this Deployment are spread across various nodes in the cluster. Our application can handle only one TCP connection at a time and rejects further connections. Currently we use kube-proxy (iptables mode) to distribute load across the pods on the various nodes, but pods are chosen randomly and connections get dropped when they are passed to a busy pod. Can I use kube-router's least…
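
Not an answer from the thread, but for reference, kube-proxy's own IPVS mode also exposes a scheduler setting; a least-connection configuration is sketched below (only the relevant KubeProxyConfiguration fields are shown):

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs
    ipvs:
      scheduler: lc      # "lc" = least connection; the default is "rr" (round robin)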

Balancing traffic using least connection in Kubernetes

谁说胖子不能爱 submitted on 2020-07-22 05:34:16
Question: I have a Kubernetes cluster with a deployment like the following one. The goal here is to deploy an application in multiple pods exposed through a ClusterIP service named my-app. The same deployment is made in multiple namespaces (A, B and C), with slight changes to the application's config. Then, on some nodes, I have an HAProxy using hostNetwork to bind to the node's ports. These HAProxy instances are exposed to my clients through a DNS record pointing to them (my_app.com). When a client connects to my app, they…
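
The manifest itself is not included in the excerpt, but the hostNetwork arrangement it mentions would typically be declared on the HAProxy pod spec along these lines (all names and the image tag are assumptions):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: haproxy-edge             # hypothetical name
    spec:
      selector:
        matchLabels:
          app: haproxy-edge
      template:
        metadata:
          labels:
            app: haproxy-edge
        spec:
          hostNetwork: true          # bind HAProxy directly to the node's ports
          containers:
          - name: haproxy
            image: haproxy:2.4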

Enable IPVS mode in kube-proxy on an existing local Kubernetes cluster

99封情书 submitted on 2020-07-03 09:25:10
Question: I want to switch the kube-proxy mode to IPVS in an existing cluster. Currently it is running in iptables mode. How can I change it to IPVS without affecting the existing workload? I have already installed all the required modules to enable it. Also, my cluster was installed using kubeadm, but I did not use a configuration file during installation. What exactly should the command be to enable IPVS on my cluster? documentation1 documentation2
Answer 1: Edit the configmap kubectl edit configmap…
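
The answer is truncated above; it is not quoted here in full, but a sketch of the commonly documented steps for switching an existing kubeadm cluster's kube-proxy to IPVS is:

    # set mode: "ipvs" in the kube-proxy ConfigMap
    kubectl edit configmap kube-proxy -n kube-system

    # recreate the kube-proxy pods so they pick up the new mode
    kubectl -n kube-system delete pods -l k8s-app=kube-proxy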

IP Blacklisting in Istio

∥☆過路亽.° submitted on 2020-06-27 03:49:11
Question: The IP whitelisting/blacklisting example explained here https://kubernetes.io/docs/tutorials/services/source-ip/ uses the source.ip attribute. However, in Kubernetes (a cluster running on docker-for-desktop), source.ip returns the IP of kube-proxy. A suggested workaround is to use request.headers["X-Real-IP"], but it doesn't seem to work either and returns the kube-proxy IP on docker-for-desktop on Mac. https://github.com/istio/istio/issues/7328 mentions this issue and states: With a proxy that…
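
The Kubernetes tutorial linked above preserves the client source IP by setting externalTrafficPolicy: Local on the exposing Service; a minimal sketch (the service name, selector and ports are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-gateway          # hypothetical name
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local   # only route to local endpoints, preserving source.ip
      selector:
        app: istio-ingressgateway
      ports:
        - port: 80
          targetPort: 8080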

k8s: forwarding from public VIP to clusterIP with iptables

*爱你&永不变心* submitted on 2020-04-06 22:28:13
Question: I'm trying to understand in depth how forwarding from a publicly exposed load balancer's layer-2 VIPs to services' cluster IPs works. I've read a high-level overview of how MetalLB does it, and I've tried to replicate it manually by setting up a keepalived/ucarp VIP and iptables rules. I must be missing something, however, as it doesn't work ;-] Steps I took: created a cluster with kubeadm consisting of a master + 3 nodes running k8s 1.17.2 + Calico 3.12 on libvirt/KVM VMs on a single computer. All VMs…
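
Not from the post, but the kind of iptables rule such a manual replication usually revolves around is a DNAT from the VIP to the Service's cluster IP; the addresses and port below are made up for illustration:

    # hypothetical addresses: VIP 192.168.122.240, Service ClusterIP 10.96.0.50
    iptables -t nat -A PREROUTING -d 192.168.122.240/32 -p tcp --dport 80 \
      -j DNAT --to-destination 10.96.0.50:80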

NodePort services not available on all nodes

别说谁变了你拦得住时间么 submitted on 2020-02-22 05:16:58
Question: I'm attempting to run a 3-node Kubernetes cluster. I have the cluster up and running sufficiently that I have services running on different nodes. Unfortunately, I don't seem to be able to get NodePort-based services to work correctly (as I understand correctness, anyway...). My issue is that any NodePort services I define are available externally only on the node where their pod is running, while my understanding is that they should be available externally on any node in the cluster. One…
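
For reference (not part of the post), a NodePort Service of the kind described would look roughly like this; the name, selector and ports are assumptions:

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-nodeport        # hypothetical name
    spec:
      type: NodePort
      selector:
        app: demo
      ports:
        - port: 80
          targetPort: 8080
          nodePort: 30080

With kube-proxy working on every node, curl http://<any-node-ip>:30080 should reach a pod regardless of where that pod is scheduled; only externalTrafficPolicy: Local restricts a node port to nodes that host a local endpoint.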