autoscaling

How to apply HPA to a StatefulSet in Kubernetes?

Submitted by 不问归期 on 2020-06-11 08:45:13
Question: I am trying to set up HPA for my StatefulSet (for Elasticsearch) in a Kubernetes environment. I plan to scale the StatefulSet based on CPU utilization. I created the metrics server from https://github.com/stefanprodan/k8s-prom-hpa/tree/master/metrics-server, and my HPA YAML for the StatefulSet is as follows:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: dz-es-cluster
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: StatefulSet
    name: dz-es-cluster
```
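For orientation, below is a minimal sketch of what a complete CPU-based HPA for such a StatefulSet could look like. The replica bounds and utilization target are assumptions, not values from the question, and the scale target references apps/v1, the API group StatefulSet actually belongs to (it was never part of extensions/v1beta1).

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: dz-es-cluster
spec:
  scaleTargetRef:
    apiVersion: apps/v1        # StatefulSet lives in the apps group
    kind: StatefulSet
    name: dz-es-cluster
  minReplicas: 2               # assumed bounds, adjust to the cluster
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 70   # assumed threshold
```

Once the metrics pipeline answers, `kubectl describe hpa dz-es-cluster` should show the observed CPU percentage against this target.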

How to change the --horizontal-pod-autoscaler-sync-period flag of kube-controller-manager to 5 seconds in GKE

Submitted by 早过忘川 on 2020-04-14 09:55:30
Question: I am trying to set up horizontal pod autoscaling in GKE. I could not find proper documentation on reducing --horizontal-pod-autoscaler-sync-period to 5 seconds via kube-controller-manager. The page below suggests the flag can be changed: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/ Are there proper implementation steps for this?

Answer 1: You are not able to do this on GKE, EKS and other managed clusters. In order to change/add flags
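For contrast, on a self-managed control plane (for example one built with kubeadm) the flag is typically added to the kube-controller-manager static pod manifest. This is a sketch only, it does not apply to GKE, and the file path assumes a kubeadm-style layout:

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm-style layout assumed)
# The kubelet restarts the controller manager automatically when this file changes.
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-sync-period=5s
    # ... keep the existing flags unchanged ...
```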

HPA not fetching an existing custom metric?

Submitted by 给你一囗甜甜゛ on 2020-01-24 22:13:34
Question: I'm using mongodb-exporter to store/query metrics via Prometheus. I have set up a custom metrics server and it is storing values, which is evidence that the prometheus-exporter and the custom metrics server work together. Query:

```
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/mongodb_mongod_wiredtiger_cache_bytes"
```

Result:

```
{"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/custom.metrics.k8s.io/v1beta1
```
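A minimal sketch of an HPA that consumes this custom metric is shown below; the target Deployment name, namespace, replica bounds, and target value are hypothetical, not taken from the question.

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: mongodb              # hypothetical name
  namespace: monitoring
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mongodb            # hypothetical workload
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metricName: mongodb_mongod_wiredtiger_cache_bytes
      targetAverageValue: 100Mi   # assumed target per pod
```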

Some requests fail during autoscaling in Kubernetes

Submitted by 被刻印的时光 ゝ on 2020-01-23 07:50:26
Question: I set up a k8s cluster on MicroK8s and ported my application to it. I also added a horizontal pod autoscaler which adds pods based on CPU load. The autoscaler works fine: it adds pods when the load goes beyond the target, and when I remove the load it kills the pods again after some time. The problem is that at the exact moments the autoscaler is creating new pods, some of the requests fail:

```
POST Response Code : 200
POST Response Code : 200
POST Response Code : 200
POST
```
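A common cause of this symptom is traffic being routed to a new pod before the application inside it is ready to serve; a readiness probe keeps the pod out of the Service endpoints until it responds. This is a sketch only: the path, port, and timings below are assumptions for illustration.

```yaml
# Fragment of a container spec; goes under spec.template.spec.containers[] in the workload.
readinessProbe:
  httpGet:
    path: /healthz        # hypothetical health endpoint
    port: 8080            # hypothetical container port
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 3
```

Graceful shutdown (handling SIGTERM or adding a preStop hook) covers the scale-down direction in the same way, so in-flight requests are not cut off when a pod is removed.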

Difference between API versions v2beta1 and v2beta2 in Horizontal Pod Autoscaler?

Submitted by 元气小坏坏 on 2020-01-23 07:37:08
Question: The Kubernetes Horizontal Pod Autoscaler walkthrough at https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ explains that we can autoscale on custom metrics. What I don't understand is when to use the two API versions, v2beta1 and v2beta2. If anybody can explain, I would really appreciate it. Thanks in advance.

Answer 1: The first metrics API, autoscaling/v2beta1, doesn't allow you to scale your pods based on custom metrics. That only allows you to scale
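Independently of the (truncated) answer above, the most visible schema difference between the two versions is how a metric target is written; the sketch below shows the same CPU target expressed in both and is illustrative only.

```yaml
# autoscaling/v2beta1: the target value sits directly on the metric
metrics:
- type: Resource
  resource:
    name: cpu
    targetAverageUtilization: 50
---
# autoscaling/v2beta2: the target is a structured object with an explicit type
metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 50
```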

Service Fabric service discovery without port number

Submitted by ℡╲_俬逩灬. on 2020-01-16 14:53:05
Question: I have a Service Fabric cluster hosting legacy WCF services. Each WCF service is allocated a port number and uses the net.tcp protocol. I wonder what the best way is to handle service discovery and auto-scaling? I tried Service Fabric's DNS service and let Service Fabric assign the port number for each service; however, the client doesn't know the dynamic port number. The DNS service can only resolve the IP address from the service's DNS name. Since each service is on a specific port, I

AWS CloudFormation stack fails with error Received 0 SUCCESS signal(s) out of 1

Submitted by 泪湿孤枕 on 2020-01-11 09:23:06
Question: My AWS CloudFormation template fails with the error: "Received 0 SUCCESS signal(s) out of 1. Unable to satisfy 100% MinSuccessfulInstancesPercent requirement". I'm thinking my WaitConditionHandles are not set correctly (or maybe the EC2 instance is not sending one), but I'm not sure how to fix this. Everything (ASG, EC2 instances) does appear to be created correctly in AWS. I'm using the following CloudFormation template:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: "Auto Scaling Group"
```
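For reference, this error usually means no instance ever ran cfn-signal against the resource carrying the CreationPolicy. The sketch below shows the two pieces that have to line up; the resource names, timeout, and the assumption that the cfn helper scripts are installed (as on Amazon Linux) are hypothetical and not taken from the question's template.

```yaml
Resources:
  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT15M          # how long CloudFormation waits for the signal
    Properties:
      MinSize: "1"
      MaxSize: "1"
      # ... launch configuration, subnets, etc. ...
  LaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      # ... ImageId, InstanceType, etc. ...
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          # ... bootstrap the instance ...
          # Report the bootstrap result back to the stack so the CreationPolicy is satisfied
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} \
            --resource AutoScalingGroup --region ${AWS::Region}
```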

How does Google App Engine Autoscaling work?

Submitted by 血红的双手。 on 2020-01-10 04:01:05
Question: This question is about Google App Engine quotas and instances. I deployed a GAE app without specifying any particular scaling algorithm; from the docs, it seems the default is automatic scaling. So when do they scale the app to another instance, i.e. when exactly does a new instance spawn? What request rate causes the second instance to be started and traffic to be split?

Answer 1: Actually it is fairly well explained. From "Scaling dynamic instances": The App Engine scheduler decides whether to serve each
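When you want to influence those scheduling decisions rather than rely on the defaults, the knobs live in app.yaml. This is a sketch for the standard environment; the runtime and values shown are assumptions, not recommendations.

```yaml
runtime: python39                  # hypothetical runtime
automatic_scaling:
  target_cpu_utilization: 0.65     # new instances are started as CPU approaches this level
  max_concurrent_requests: 10      # requests an instance accepts before another is considered
  min_instances: 0
  max_instances: 10
```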

Autoscaling (HPA) failed to get CPU consumption: cannot unmarshal object into Go value of type []v1alpha1.PodMetrics

Submitted by 耗尽温柔 on 2020-01-07 02:24:10
Question: I am trying to test HPA (horizontal pod autoscaling) in my Kubernetes cluster. Heapster is up and running, and I think it works well since I'm able to see metrics in Grafana. The DNS addon is also working perfectly. Looking inside the HPA I can see the error "failed to get CPU consumption and request: failed to unmarshall heapster response: json: cannot unmarshal object into Go value of type []v1alpha1.PodMetrics".

```
$ kubectl describe hpa php-apache
Name:       php-apache
Namespace:  default
Labels
```
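The unmarshal error suggests the controller manager is not getting the response shape it expects from the metrics pipeline, so a first step is to look at what that pipeline actually returns. These checks are a sketch and assume Heapster is deployed as a service in kube-system; adjust names and namespaces to your cluster.

```sh
# Inspect what Heapster returns through the API-server service proxy (assumed deployment layout):
kubectl get --raw "/api/v1/namespaces/kube-system/services/heapster/proxy/apis/metrics/v1alpha1/namespaces/default/pods"

# On clusters using metrics-server instead of Heapster, the aggregated API should answer:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"

# Quick sanity check that per-pod CPU/memory figures are flowing at all:
kubectl top pods
```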