Kubernetes

EKS : could not find any suitable subnets for creating the ELB

Submitted by 微笑、不失礼 on 2021-02-08 15:22:54
Question: I am trying to expose a service to the outside world using a LoadBalancer-type service. For that, I followed this doc: https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-services-cluster/ My loadbalancer.yaml looks like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

But the load balancer is not being created as expected; I am getting the following error: Warning
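A common cause of this error is that the VPC subnets are not tagged for the AWS load balancer integration, so it cannot discover which subnets to place the ELB in. A hedged sketch of the tags usually expected on the subnets (the cluster name "my-cluster" is a placeholder, not taken from the question):

# Tags on the VPC subnets the ELB should use (cluster name is a placeholder)
kubernetes.io/cluster/my-cluster: shared   # or "owned"; marks the subnet as usable by this cluster
kubernetes.io/role/elb: "1"                # on public subnets, for internet-facing load balancers
# kubernetes.io/role/internal-elb: "1"     # on private subnets, for internal load balancers

After tagging the subnets, deleting and recreating the Service usually retriggers the ELB creation.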

Append release timestamp to helm template name

Submitted by 心不动则不痛 on 2021-02-08 12:33:47
Question: I'm struggling to find a way to include the Release.Time built-in as part of a Helm name. If I just include it as: name: {{ template "myapp.name" . }}-{{ .Release.Time }} a dry run shows this: name: myapp-seconds:1534946206 nanos:143228281 It seems like this is a *timestamp.Timestamp object or something, because {{ .Release.Time | trimPrefix "seconds:" | trunc 10 }} outputs wrong type for value; expected string; got *timestamp.Timestamp I can hack the string parsing by doing: {{ .Release
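One way around the *timestamp.Timestamp type is to avoid .Release.Time entirely and format the render time with sprig's now and date functions. A minimal sketch, assuming the chart only needs a sortable timestamp suffix (the "myapp.name" template is taken from the question):

# Append the current time, formatted with Go's reference layout
name: {{ template "myapp.name" . }}-{{ now | date "20060102150405" }}
# A dry run then renders something like: name: myapp-20210208123347

Note that this produces a new name on every install or upgrade, which is usually the intent when appending a release timestamp, but it means the resource is replaced rather than updated in place.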

Apache Flink Advanced (Part 4): Flink on Yarn / K8s Principles and Practice

Submitted by 元气小坏坏 on 2021-02-08 11:57:41
Apache Flink Advanced (Part 4): Flink on Yarn / K8s Principles and Practice. By 周凯波(宝牛), Flink China community. This article is compiled from the Apache Flink advanced-series live lectures, presented by Alibaba technical expert 周凯波(宝牛). It introduces the principles and practical use of Flink on Yarn / K8s in three parts: an overview of the Flink architecture, the principles and practice of Flink on Yarn, and an analysis of Flink on Kubernetes, and it answers some common questions about Flink on Yarn/Kubernetes.

Flink architecture overview – Job
Users write Flink jobs with the DataStream API, DataSet API, SQL, and Table API; these produce a JobGraph. A JobGraph is composed of operators such as source, map(), keyBy()/window()/apply(), and sink. Once a JobGraph is submitted to a Flink cluster, it can run in one of four modes: Local, Standalone, Yarn, or Kubernetes.

Flink architecture overview – JobManager
The main responsibilities of the JobManager are: converting the JobGraph into an ExecutionGraph, and ultimately running that ExecutionGraph;
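As a concrete illustration of the Flink on Kubernetes session-cluster setup the article discusses, below is a minimal sketch of a JobManager Deployment and Service. The image tag, ports, and names are illustrative assumptions, not taken from the article; a real setup also needs a TaskManager Deployment and a flink-conf.yaml ConfigMap.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-jobmanager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink
      component: jobmanager
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
      - name: jobmanager
        image: flink:1.12        # illustrative version
        args: ["jobmanager"]     # the official image starts a JobManager with this argument
        ports:
        - containerPort: 6123    # RPC
        - containerPort: 8081    # REST / Web UI, where JobGraphs are submitted
---
apiVersion: v1
kind: Service
metadata:
  name: flink-jobmanager         # TaskManagers reach the JobManager through this name
spec:
  selector:
    app: flink
    component: jobmanager
  ports:
  - name: rpc
    port: 6123
  - name: ui
    port: 8081

A TaskManager Deployment is analogous, with args: ["taskmanager"] and jobmanager.rpc.address pointing at the flink-jobmanager Service.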

K8s pod priority & outOfPods

Submitted by 本秂侑毒 on 2021-02-08 11:38:49
Question: We had a situation where the k8s cluster was running out of pods after an update (Kubernetes, or more specifically: ICP), resulting in "OutOfPods" error messages. The reason was a lower "podsPerCore" setting, which we corrected afterwards. Until then, there were pods with a priorityClass set (1000000) which could not be scheduled, while others without a priorityClass (0) were scheduled. I had assumed a different behaviour: I thought that the K8s scheduler would kill pods with no priority so that a pod
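For context, priority mainly influences scheduling-time preemption: when a pending higher-priority pod cannot be placed, the scheduler may evict lower-priority pods from a node to make room, whereas an "OutOfPods" admission failure caused by a low podsPerCore limit is raised by the kubelet itself. A minimal sketch of wiring a pod to a PriorityClass (names, value, and image are illustrative):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority            # illustrative name
value: 1000000                   # higher value = scheduled (and preempting) first
globalDefault: false
description: "Pods that may preempt lower-priority pods when resources are scarce."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: high-priority   # pods without this field default to priority 0
  containers:
  - name: app
    image: nginx                      # illustrative image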

Running kubectl patch --local fails due to missing config

Submitted by 三世轮回 on 2021-02-08 11:33:23
Question: I have a GitHub Actions workflow that substitutes a value in a deployment manifest. I use kubectl patch --local=true to update the image. This used to work flawlessly until now; today the workflow started to fail with a "Missing or incomplete configuration info" error. I am running kubectl with the --local flag, so the config should not be needed. Does anyone know why kubectl suddenly started requiring a config? I can't find any useful info in Kubernetes GitHub issues and
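The error suggests kubectl is trying to load a client configuration even though --local never contacts a cluster, behaviour that can change between kubectl versions. One commonly suggested workaround is to point KUBECONFIG at a minimal stub config; a hedged sketch (server URL and names are placeholders, never actually contacted):

# Saved as e.g. ./stub-kubeconfig and exported via KUBECONFIG
apiVersion: v1
kind: Config
clusters:
- name: stub
  cluster:
    server: https://localhost:6443   # placeholder; no connection is made with --local
contexts:
- name: stub
  context:
    cluster: stub
    user: stub
current-context: stub
users:
- name: stub
  user: {}

If the behaviour change arrived with a kubectl upgrade in the runner image, pinning the kubectl version in the workflow is another option.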

Authorization architecture in microservice cluster

Submitted by 荒凉一梦 on 2021-02-08 11:21:43
Question: I have a project with a microservice architecture (on Docker and Kubernetes), and the two main apps are written in Python using AIOHTTP and Django (there are also an Ingress proxy, a static-file server, and a couple more services built with NginX). I'd like to split these Python apps into smaller, separate microservices, but to accomplish this I should probably also move authentication into a separate app. How can I do this? I should probably add that I'm not asking about specific authentication methods like
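One common way to factor authentication out of the individual Python apps is to handle it at the ingress layer and have NginX delegate each request to a dedicated auth service. A hedged sketch using the ingress-nginx auth-request annotations (service names, host, and URLs are illustrative assumptions, not from the question):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
  annotations:
    # Every request is first sent to the auth microservice; a 2xx response lets it
    # through, a 401/403 is returned to the client.
    nginx.ingress.kubernetes.io/auth-url: "http://auth-service.default.svc.cluster.local/verify"
    # Optionally forward identity headers set by the auth service to the backends.
    nginx.ingress.kubernetes.io/auth-response-headers: "X-User-Id, X-User-Roles"
spec:
  rules:
  - host: example.com            # illustrative host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: django-app     # illustrative backend service
            port:
              number: 80

With this pattern the downstream services only consume the identity headers and never handle credentials themselves, which keeps authentication in one place as the apps are split further.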