Pod execution role is not found in auth config or does not have all required permissions. How can I debug?

Submitted by 末鹿安然 on 2021-02-07 20:41:51

Question


Objective

I want to be able to deploy AWS EKS using Fargate. I have successfully made the deployment work with a node_group. However, when I shifted to using Fargate, all of the pods are stuck in the Pending state.

What my current code looks like

I am provisioning using Terraform (not necessarily looking for a Terraform answer). This is how I create my EKS Cluster:

module "eks_cluster" {
  source                            = "terraform-aws-modules/eks/aws"
  version                           = "13.2.1"
  cluster_name                      = "${var.project_name}-${var.env_name}"
  cluster_version                   = var.cluster_version
  vpc_id                            = var.vpc_id
  cluster_enabled_log_types         = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  enable_irsa                       = true
  subnets                           = concat(var.private_subnet_ids, var.public_subnet_ids)
  create_fargate_pod_execution_role = true
  write_kubeconfig                  = false
  fargate_pod_execution_role_name   = "${var.project_name}-role"
  # Assigning worker groups
  node_groups = {
    my_nodes = {
      desired_capacity = 1
      max_capacity     = 1
      min_capacity     = 1
      instance_type    = var.nodes_instance_type
      subnets          = var.private_subnet_ids
    }
  }
}

And this is how I provision the Fargate profile:

# Create EKS Fargate profile
resource "aws_eks_fargate_profile" "fargate_profile" {
  cluster_name           = module.eks_cluster.cluster_id
  fargate_profile_name   = "${var.project_name}-fargate-profile-${var.env_name}"
  pod_execution_role_arn = aws_iam_role.fargate_iam_role.arn
  subnet_ids             = var.private_subnet_ids

  selector {
    namespace = var.project_name
  }
}

And this is how I create the IAM role and attach the required policy:

# Create IAM role for the Fargate profile
resource "aws_iam_role" "fargate_iam_role" {
  name                  = "${var.project_name}-fargate-role-${var.env_name}"
  force_detach_policies = true
  assume_role_policy    = jsonencode({
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = {
        Service = "eks-fargate-pods.amazonaws.com"
      }
    }]
    Version   = "2012-10-17"
  })
}

# Attach IAM Policy for Fargate
resource "aws_iam_role_policy_attachment" "fargate_pod_execution" {
  role       = aws_iam_role.fargate_iam_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
}

What I have tried but does not seem to work

Running kubectl describe pod on one of the pending pods, I get:

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  14s   fargate-scheduler  Misconfigured Fargate Profile: fargate profile fargate-airflow-fargate-profile-dev blocked for new launches due to: Pod execution role is not found in auth config or does not have all required permissions for launching fargate pods.

Other things I have tried, without success

I have tried mapping the role via the module's map_roles feature, like this:

module "eks_cluster" {
  source                            = "terraform-aws-modules/eks/aws"
  version                           = "13.2.1"
  cluster_name                      = "${var.project_name}-${var.env_name}"
  cluster_version                   = var.cluster_version
  vpc_id                            = var.vpc_id
  cluster_enabled_log_types         = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  enable_irsa                       = true
  subnets                           = concat(var.private_subnet_ids, var.public_subnet_ids)
  create_fargate_pod_execution_role = true
  write_kubeconfig                  = false
  fargate_pod_execution_role_name   = "${var.project_name}-role"
  # Assigning worker groups
  node_groups = {
    my_nodes = {
      desired_capacity = 1
      max_capacity     = 1
      min_capacity     = 1
      instance_type    = var.nodes_instance_type
      subnets          = var.private_subnet_ids
    }
  }
  # Trying to map the role
  map_roles = [
    {
      rolearn  = aws_eks_fargate_profile.airflow.arn
      username = aws_eks_fargate_profile.airflow.fargate_profile_name
      groups   = ["system:*"]
    }
  ]
}

But my attempt was not successful. How can I debug this issue? And what is the cause behind it?


Answer 1:


Okay, I see your problems. I just fixed mine, too, though I used different methods.

In your eks_cluster module, you already tell the module to create the role and give it a name, so there's no need to create a separate aws_iam_role resource afterwards. The module should handle it for you, including populating the aws-auth configmap within Kubernetes.

In your aws_eks_fargate_profile resource, you should use the role provided by the module, i.e. pod_execution_role_arn = module.eks_cluster.fargate_profile_arns[0].
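To make that concrete, here is a rough sketch of the adjusted profile. The fargate_profile_arns[0] expression is just the reference suggested above; the exact output name differs between versions of the module, so verify it against the module's documented outputs before relying on it:

# Sketch: reuse the pod execution role created by the eks_cluster module
# instead of the separately created aws_iam_role.fargate_iam_role
resource "aws_eks_fargate_profile" "fargate_profile" {
  cluster_name           = module.eks_cluster.cluster_id
  fargate_profile_name   = "${var.project_name}-fargate-profile-${var.env_name}"
  # Output reference as suggested above; confirm the name for your module version
  pod_execution_role_arn = module.eks_cluster.fargate_profile_arns[0]
  subnet_ids             = var.private_subnet_ids

  selector {
    namespace = var.project_name
  }
}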

I believe fixing those up should solve your issue for the first configuration attempt.


For your second attempt, the map_roles input is for IAM roles, but you're supplying info about Fargate profiles. You want to do one of two things:

  1. Disable the module's role creation (drop create_fargate_pod_execution_role and fargate_pod_execution_role_name), create your own IAM role the way you did in the first configuration, and supply that role's info to map_roles (see the sketch after this list).
  2. Remove map_roles and, in your Fargate profile, reference the IAM role generated by the module, as in the fix for your first configuration.
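Here is a rough sketch of option 1, assuming you keep the aws_iam_role.fargate_iam_role from your first configuration and turn off the module-managed role. The username and groups below follow the aws-auth entry AWS documents for Fargate pod execution roles, rather than the "system:*" wildcard from the original attempt:

module "eks_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "13.2.1"
  # ... all other inputs from the original module block stay the same,
  # except drop fargate_pod_execution_role_name and disable the managed role:
  create_fargate_pod_execution_role = false

  # Map the self-managed pod execution role into the aws-auth configmap
  map_roles = [
    {
      rolearn  = aws_iam_role.fargate_iam_role.arn
      username = "system:node:{{SessionName}}"
      groups   = [
        "system:bootstrappers",
        "system:nodes",
        "system:node-proxier",
      ]
    }
  ]
}

Whichever option you pick, you can check the result with kubectl -n kube-system get configmap aws-auth -o yaml and confirm that the pod execution role's ARN appears under mapRoles.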

If any of this is confusing, please let me know. It seems you're really close!



Source: https://stackoverflow.com/questions/65543681/pod-execution-role-is-not-found-in-auth-config-or-does-not-have-all-required-per
