kubernetes-health-check

Ignite ReadinessProbe

Submitted by 本秂侑毒 on 2021-02-04 05:11:07

Question: Deploying an Ignite cluster within Kubernetes, I came across an issue that prevents cluster members from joining the group. If I use a readinessProbe and a livenessProbe, even with a delay as low as 10 seconds, the nodes never join each other. If I remove those probes, they find each other just fine. So, my question is: can you use these probes to monitor node health, and if so, what are appropriate settings? On top of that, what would be good, fast health checks for Ignite, anyway?
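A likely explanation, sketched here rather than quoted from the thread: Ignite's TcpDiscoveryKubernetesIpFinder resolves peers through the Service's endpoints, and Kubernetes withholds a pod from those endpoints until its readinessProbe passes, so a probe that only succeeds after the node joins the cluster deadlocks discovery. A fast, local health check avoids this by probing the node's own REST API (assuming the ignite-rest-http module is enabled, which serves HTTP on port 8080 by default):

    containers:
      - name: ignite
        image: apacheignite/ignite:2.14.0   # illustrative tag
        livenessProbe:
          httpGet:
            path: /ignite?cmd=probe     # answers 200 once the node's kernel has started
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /ignite?cmd=version   # answered locally, no cluster round-trip required
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 5

Both endpoints respond from the local node alone, so readiness does not depend on cluster membership and the probes no longer block discovery.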

Understanding healthchecks for backend services on GKE when using ingress

Submitted by 让人想犯罪 __ on 2021-01-29 07:07:51

Question: I am using the following code in statefulset.yml:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: geth
      namespace: prod
    spec:
      serviceName: geth-service
      replicas: 2
      selector:
        matchLabels:
          app: geth-node
      template:
        metadata:
          labels:
            app: geth-node
        spec:
          containers:
            - name: geth-node
              image: <My image>
              imagePullPolicy: Always
              livenessProbe:
                httpGet:
                  path: /
                  port: 8545
                initialDelaySeconds: 20  # wait this period after starting the first time
                periodSeconds: 15        # polling interval
                timeoutSeconds: 5        # wish
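For context on how GKE wires this up: the ingress controller provisions a GCE health check for each backend service. By default it expects an HTTP 200 from GET / on the serving port; if the backend pods define an httpGet readinessProbe, GKE derives the health check's request path from it instead. A minimal sketch of that pattern, with /healthz as an assumed application endpoint:

    readinessProbe:
      httpGet:
        path: /healthz   # GKE copies this path into the generated GCE health check (assumed endpoint)
        port: 8545
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 5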

Can you tell kubernetes to start one pod before another?

Submitted by [亡魂溺海] on 2021-01-27 20:33:36

Question: Can I add some config so that my daemon pods start before other pods can be scheduled, or before nodes are designated as ready?

Edit: These are 2 different pods altogether; the daemonset is a downstream dependency of any pods that might get scheduled on the host.

Answer 1: There's no such thing as a Pod hierarchy in Kubernetes between multiple separate types of pods, meaning pods belonging to different Deployments, StatefulSets, DaemonSets, etc. In other words, there is no notion of a master pod.
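A common workaround, offered as a sketch rather than the thread's answer, is an initContainer in the dependent pods that blocks until the daemonset is reachable; the node-daemon service name and port 8125 below are placeholders:

    spec:
      initContainers:
        - name: wait-for-daemon
          image: busybox:1.36
          # Poll until the daemonset's service accepts connections, then let the main container start
          command: ['sh', '-c', 'until nc -z node-daemon.kube-system.svc.cluster.local 8125; do sleep 2; done']
      containers:
        - name: app
          image: my-app:latest   # placeholder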

Kubernetes - Rolling update killing off old pod without bringing up new one

Submitted by 不羁的心 on 2020-12-05 04:11:09

Question: I am currently using Deployments to manage my pods in my K8S cluster. Some of my deployments require 2 pods/replicas, some require 3 pods/replicas, and some of them require just 1 pod/replica. The issue I'm having is with the one-pod/replica case. My YAML file is:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: user-management-backend-deployment
    spec:
      replicas: 1
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 2
      selector:
        matchLabels:
          name: user
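The usual culprit, sketched here under the assumption that the deployment has a working readinessProbe: with replicas: 1, maxUnavailable: 1 permits the controller to delete the old pod before its replacement is Ready. Setting maxUnavailable: 0 forces the surge pod to come up first:

    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 0   # never take the sole replica down before its replacement is Ready
        maxSurge: 1         # start the new pod first, then retire the old one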

What happens when a service receives a request but has no ready pods?

Submitted by 混江龙づ霸主 on 2020-07-20 04:15:10

Question: Having a Kubernetes service (of type ClusterIP) connected to a set of pods, but none of them currently ready, what will happen to a request? Will it:

- fail eagerly
- time out
- wait until a ready pod is available (or forever, whichever is earlier)
- something else?

Answer 1: It will time out. Kube-proxy pulls the IP addresses of healthy pods and sets them as endpoints of the service (backends). Also, note that all kube-proxy does is rewrite the iptables rules when you create, delete or modify a service.
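To observe this directly, compare the service's endpoint list with a request from inside the cluster; a sketch, with my-service and my-namespace as placeholder names:

    # An empty ENDPOINTS column means no pod currently passes its readinessProbe
    kubectl get endpoints my-service -n my-namespace

    # Probe from a throwaway pod; with no endpoints the request hangs until curl's timeout
    kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
      curl -sv --max-time 10 http://my-service.my-namespace.svc.cluster.local/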

Unhealthy nodes for load balancer when using nginx ingress controller on GKE

Submitted by 江枫思渺然 on 2020-06-24 11:44:09

Question: I have set up the nginx ingress controller following this guide. The ingress works well, and I am able to visit the defaultbackend service and my own service as well. But when reviewing the objects created in the Google Cloud Console, in particular the load balancer that was created automatically, I noticed that the health checks for the other nodes are failing. Is this because the ingress controller process is only running on one node, and so it's the only one that passes the health check?
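This is the expected behavior when the controller's Service uses externalTrafficPolicy: Local: only nodes actually hosting an ingress controller pod answer the load balancer's health check on the health-check node port, so the remaining nodes show as unhealthy even though traffic is routed correctly. A sketch of the relevant Service fields, with illustrative names:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local    # preserves client source IPs; only nodes running a
                                      # controller pod pass the LB health check
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: http

Switching to externalTrafficPolicy: Cluster would make every node pass the check, at the cost of an extra hop and SNAT of the client address.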