amazon-eks

How to whitelist the host header in NiFi?

Submitted by 和自甴很熟 on 2021-02-11 15:54:38
Question: NiFi is deployed in an EKS cluster, and trying to access NiFi through the load balancer gives the following error: System Error The request contained an invalid host header [abc.com] in the request [/nifi]. Check for request manipulation or third-party intercept. Valid host headers are [empty] or: 127.0.0.1 127.0.0.1:8443 localhost localhost:8443 ::1 nifi-deployment-59494c46dc-v4kk6 nifi-deployment-59494c46dc-v4kk6:8443 172.35.3.165 172.35.3.165:8443 How do I whitelist the load balancer DNS name as a valid host header?
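A minimal sketch of the usual fix, assuming the official apache/nifi image (which maps the NIFI_WEB_PROXY_HOST environment variable to nifi.web.proxy.host in nifi.properties); abc.com stands in for the load balancer DNS name from the error above, and the image tag and labels are placeholders:

```yaml
# Hedged sketch: add the load balancer hostname(s) to NiFi's proxy-host
# whitelist via the apache/nifi image's NIFI_WEB_PROXY_HOST variable.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nifi-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nifi
  template:
    metadata:
      labels:
        app: nifi
    spec:
      containers:
        - name: nifi
          image: apache/nifi:1.12.1   # placeholder image tag
          ports:
            - containerPort: 8443
          env:
            - name: NIFI_WEB_PROXY_HOST
              value: "abc.com,abc.com:8443"   # comma-separated host[:port] entries
```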

Pod execution role is not found in auth config or does not have all required permissions. How can I debug?

Submitted by 末鹿安然 on 2021-02-07 20:41:51
Question: Objective: I want to be able to deploy AWS EKS using Fargate. I have successfully made the deployment work with a node_group. However, when I shifted to using Fargate, the pods are all stuck in the Pending state. What my current code looks like: I am provisioning with Terraform (not necessarily looking for a Terraform answer). This is how I create my EKS cluster: module "eks_cluster" { source = "terraform-aws-modules/eks/aws" version = "13.2.1" cluster_name = "${var.project_name}-
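The error in the title usually means no Fargate profile with a pod execution role is registered for the cluster. A hedged sketch of adding one with plain AWS provider resources, kept separate from the eks module (the role and profile names, the namespace selector, the subnet variable, and the assumption that the module exposes a cluster_id output are all illustrative):

```hcl
# Hedged sketch: Fargate pod execution role plus a Fargate profile.
resource "aws_iam_role" "fargate_pod_execution" {
  name = "eks-fargate-pod-execution-role"   # placeholder name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "eks-fargate-pods.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "fargate_pod_execution" {
  role       = aws_iam_role.fargate_pod_execution.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
}

resource "aws_eks_fargate_profile" "default" {
  cluster_name           = module.eks_cluster.cluster_id   # assumes the module exposes cluster_id
  fargate_profile_name   = "default"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  subnet_ids             = var.private_subnet_ids          # Fargate requires private subnets

  selector {
    namespace = "default"   # placeholder namespace
  }
}
```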

Cross cluster communication in Kubernetes

Submitted by 只谈情不闲聊 on 2021-01-29 12:13:31
Question: I have two Kubernetes clusters running inside AWS EKS. How can I connect them so that both can communicate and share data? One cluster runs only stateless applications, while the other runs stateful workloads such as Redis, RabbitMQ, etc. What would be the easiest way to set up communication? Answer 1: If you have a specific cluster to run DBs and other private stateful workloads, then ensure that the worker nodes for that EKS cluster are private. The next step would be to create a service resource
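The answer is cut off here; a hedged guess at the kind of service resource it means is a Service exposed through an internal AWS load balancer, reachable from the other cluster across the shared or peered VPC network (the Redis name and port are placeholders):

```yaml
# Hedged sketch: expose a stateful service (Redis as an example) via an
# internal load balancer so the stateless cluster can reach it privately.
apiVersion: v1
kind: Service
metadata:
  name: redis
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
```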

Kubernetes AWS Cloudwatch adapter not fetching custom metric value for EKS HPA autoscaling

Submitted by 你。 on 2021-01-27 14:22:58
Question: I'm trying to enable AWS EKS autoscaling based on a custom CloudWatch metric via the Kubernetes CloudWatch adapter. I have pushed custom metrics to AWS CloudWatch and validated that they appear in the CloudWatch console and are retrievable with the boto3 get_metric_data call. This is the code I use to publish my custom metric to CloudWatch: import boto3 from datetime import datetime client = boto3.client('cloudwatch') cloudwatch_response = client.put_metric_data( Namespace='TestMetricNS',
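The snippet above is truncated; for context, a hedged sketch of what a complete put_metric_data call looks like (the metric name, dimensions, value, and unit are illustrative placeholders, not the asker's actual data):

```python
# Hedged sketch: publish a single custom metric value to CloudWatch.
import boto3
from datetime import datetime, timezone

client = boto3.client('cloudwatch')

cloudwatch_response = client.put_metric_data(
    Namespace='TestMetricNS',
    MetricData=[
        {
            'MetricName': 'queue_depth',            # hypothetical metric name
            'Timestamp': datetime.now(timezone.utc),
            'Value': 42.0,                          # placeholder value
            'Unit': 'Count',
        },
    ],
)
```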

Terraform EKS tagging

Submitted by 匆匆过客 on 2021-01-20 11:37:28
Question: I am running into an issue with Terraform EKS tagging and can't find a workable solution to tag all the VPC subnets when a new cluster is created. To provide some context: we have one AWS VPC into which we deploy several EKS clusters. We do not create the VPC or subnets as part of the EKS cluster creation, so the Terraform code that creates a cluster does not get to tag the existing subnets and VPC. Although EKS will add the required tags, they are automatically removed the next time
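One common workaround, sketched here with hedging, is to let the cluster's Terraform code own just the cluster tag on the pre-existing subnets via the standalone aws_ec2_tag resource, without owning the subnets themselves (the variable names are placeholders):

```hcl
# Hedged sketch: manage only the kubernetes.io/cluster/<name> tag on subnets
# that were created outside this Terraform configuration.
resource "aws_ec2_tag" "cluster_subnet_tag" {
  for_each = toset(var.subnet_ids)   # IDs of the pre-existing subnets

  resource_id = each.value
  key         = "kubernetes.io/cluster/${var.cluster_name}"
  value       = "shared"
}
```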

Kubernetes requests not balanced

Submitted by 旧街凉风 on 2021-01-05 11:25:37
Question: We've just had an increase in traffic to our Kubernetes cluster and I've noticed that of our 6 application pods, 2 of them are seemingly not used very much. kubectl top pods returns the following. You can see that of the 6 pods, 4 are using more than 50% of the CPU (2 vCPU nodes), but two of them aren't really doing much at all. Our cluster is set up on AWS, using the ALB ingress controller. The load balancer is configured to use Least outstanding requests rather than Round robin in an
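A hedged sketch of the direction this usually takes: with the default instance target type the ALB balances across nodes and kube-proxy then redistributes to pods, which can look uneven; routing to pod IPs lets the least-outstanding-requests algorithm apply per pod. The ingress and service names below are placeholders:

```yaml
# Hedged sketch: ALB ingress routing directly to pod IPs with the
# least-outstanding-requests target-group attribute.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=least_outstanding_requests
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
```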

Mounting EFS in EKS cluster: example deployment fails

Submitted by 一曲冷凌霜 on 2020-12-31 04:54:27
Question: I am currently trying to create an EFS for use within an EKS cluster. I've followed all the instructions, and everything seems to be working for the most part. However, when trying to apply the multiple_pods example deployment from here, the pods cannot successfully mount the file system. The PV and PVC are both bound and look good, but the pods do not start and yield the following error message: Warning FailedMount 116s (x10 over 6m7s) kubelet, ip-192-168-42-94.eu-central-1.compute
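For reference, a hedged sketch of the static-provisioning objects that example relies on with the EFS CSI driver (fs-12345678 and the object names are placeholders). A FailedMount like this often traces back to the EFS mount targets' security group not allowing NFS (port 2049) from the worker-node subnets:

```yaml
# Hedged sketch: statically provisioned EFS volume bound to a claim.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi            # nominal value; EFS is elastic
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678   # placeholder EFS file-system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```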

AWS EKS with Fargate pod status pending due to PersistentVolumeClaim not found

Submitted by £可爱£侵袭症+ on 2020-12-15 07:47:10
Question: I have deployed an EKS cluster with Fargate and alb-ingress-access using the following command: eksctl create cluster --name fargate-cluster --version 1.17 --region us-east-2 --fargate --alb-ingress-access A Fargate namespace has also been created. The application being deployed has four containers, namely mysql, nginx, redis and web. The YAML files have been applied to the correct namespace. The issue I am having is that after applying the YAML files, when I get the pod status I see the following
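Two hedged observations that usually apply in this situation: Fargate only schedules pods whose namespace (and optional labels) match a Fargate profile, and Fargate does not support EBS-backed PersistentVolumeClaims (EFS via the CSI driver is the supported persistent storage), which commonly keeps a mysql pod Pending. A sketch of adding a profile for the application namespace, where my-app is a placeholder name:

```sh
# Hedged sketch: create a Fargate profile so pods in the application
# namespace can be scheduled onto Fargate.
eksctl create fargateprofile \
  --cluster fargate-cluster \
  --region us-east-2 \
  --name my-app-profile \
  --namespace my-app
```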