Why can't my ECS service register available EC2 instances with my ELB?

In the end, it turned out that my EC2 instances were not being assigned public IP addresses. It appears ECS needs to be able to communicate directly with each EC2 instance, which in my setup required each instance to have a public IP. I had not been assigning my container instances public IP addresses because I thought I'd keep them all private behind a public load balancer.
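If you launch container instances through an Auto Scaling launch configuration, a minimal sketch of enabling public IPs at launch time looks like this (the launch-configuration name, AMI ID, instance type, and instance profile are placeholder assumptions):

# Launch configuration that assigns a public IP to each instance:
aws autoscaling create-launch-configuration \
    --launch-configuration-name ecs-cluster-lc \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --iam-instance-profile ecsInstanceRole \
    --associate-public-ip-address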

I had similar symptoms but ended up finding the answer in the log files:

/var/log/ecs/ecs-agent.2016-04-06-03:

2016-04-06T03:05:26Z [ERROR] Error registering: AccessDeniedException: User: arn:aws:sts::<removed>:assumed-role/<removed>/<removed> is not authorized to perform: ecs:RegisterContainerInstance on resource: arn:aws:ecs:us-west-2:<removed>:cluster/MyCluster-PROD
    status code: 400, request id: <removed>

In my case, the resource existed but was not accessible. It sounds like OP is pointing at a resource that doesn't exist or isn't visible. Are your clusters and instances in the same region? The logs should confirm the details.
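If you want to check the same things, a quick sketch (the log path comes from the ECS-optimized AMI; the region is only an example):

# Look for registration errors in the agent logs:
grep ERROR /var/log/ecs/ecs-agent.*

# Confirm the cluster exists in the region you think it does:
aws ecs list-clusters --region us-west-2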

In response to other posts:

You do NOT need public IP addresses.

You do need: the ecsServiceRole, or an equivalent IAM role, assigned to the EC2 instance so that it can talk to the ECS service. You must also specify the ECS cluster name, which can be done via user data during instance launch or in the launch configuration definition, like so:

#!/bin/bash
echo ECS_CLUSTER=GenericServiceECSClusterPROD >> /etc/ecs/ecs.config

If you fail to do this on newly launched instances, you can add the setting after the instance has launched and then restart the ECS agent.
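On an already-running instance, something like this should work (a sketch assuming the Amazon Linux ECS-optimized AMI, where the agent runs as the upstart service "ecs"; systemd-based AMIs use systemctl restart ecs instead):

# Append the cluster name and bounce the agent so it re-registers:
echo ECS_CLUSTER=GenericServiceECSClusterPROD | sudo tee -a /etc/ecs/ecs.config
sudo stop ecs && sudo start ecs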

Another problem that might arise is not assigning a role with the proper policy to the launch configuration. My role didn't have the AmazonEC2ContainerServiceforEC2Role managed policy (or the permissions it contains) attached, as specified in the ECS documentation.
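Attaching the AWS managed policy to an existing instance role is a one-liner (the role name ecsInstanceRole is an assumption; substitute your own):

aws iam attach-role-policy \
    --role-name ecsInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role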

It might also be that the ECS agent creates a file in /var/lib/ecs/data that stores the cluster name.

If the agent first starts up with the cluster name of 'default', you'll need to delete this file and then restart the agent.
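A sketch of that reset (the checkpoint file name ecs_agent_data.json is an assumption and may vary by agent version; stopping the agent first keeps it from rewriting the file):

# Stop the agent, remove its persisted state, then start it again
# so it re-registers using the cluster name in /etc/ecs/ecs.config:
sudo stop ecs
sudo rm /var/lib/ecs/data/ecs_agent_data.json
sudo start ecs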

You definitely do not need public IP addresses for each of your private instances. The correct (and safest) way to do this is to set up a NAT gateway and attach it to the route table associated with your private subnet.

This is documented in detail in the VPC documentation, specifically Scenario 2: VPC with Public and Private Subnets (NAT).
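For reference, the CLI steps look roughly like this (all IDs are placeholders; note the NAT gateway itself must live in a public subnet):

# Allocate an Elastic IP and create the NAT gateway in a public subnet:
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0

# Send the private subnet's outbound traffic through the NAT gateway:
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0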

There were several layers of problems in our case. I will list them out to give you some idea of the issues to pursue.

My goal was to have one ECS cluster with a single host, but ECS forced me to have two subnets under my VPC, each with one Docker host instance. I was trying to have just one Docker host in a single availability zone and could not get it to work.

The other issue was that only one of the subnets had an internet-facing gateway attached to it, so the other was not reachable from the public internet.

The end result was that DNS was serving two IPs for my ELB, and only one of them worked. So I was seeing random 404s when accessing the load balancer via its public DNS name.
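You can reproduce this kind of problem by resolving the load balancer's DNS name and testing each returned IP individually (the hostname and IP below are placeholders):

# See every IP the ELB's DNS name resolves to:
dig +short my-elb-1234567890.us-west-2.elb.amazonaws.com

# Pin curl to one of the returned IPs to find the broken one:
curl --resolve my-elb-1234567890.us-west-2.elb.amazonaws.com:80:203.0.113.10 http://my-elb-1234567890.us-west-2.elb.amazonaws.com/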
