amazon-ecs

Who stops and starts the ECS task, and who informs the ECS service?

淺唱寂寞╮ submitted on 2020-01-06 04:51:11
Question: Below is the ECS task definition for an application:

    SomeappTaskDefinition:
      Type: "AWS::ECS::TaskDefinition"
      Properties:
        ContainerDefinitions:
          - Name: someapp
            Image: someaccounthub/someapp
            Memory: 450
            Environment:
              - Name: DJANGO_SETTINGS_MODULE
                Value: someapp.settings.release
              - Name: MYSQL_HOST
                Value: { "Fn::GetAtt": ["DbInstance", "Endpoint.Address"] }
              - Name: MYSQL_USER
                Value: { "Ref": "DbUsername" }
              - Name: MYSQL_PASSWORD
                Value: { "Ref": "DbPassword" }
            MountPoints:
              - ContainerPath: /var…
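
As background for the title question: it is the ECS service scheduler that stops and starts tasks to keep a service at its desired count, and its decisions are recorded in the service's event stream. A minimal boto3 sketch for inspecting those events; the cluster and service names are hypothetical:

    import boto3

    ecs = boto3.client("ecs")
    # The scheduler's start/stop decisions show up as service events.
    svc = ecs.describe_services(cluster="my-cluster",
                                services=["someapp-service"])["services"][0]
    print("desired:", svc["desiredCount"], "running:", svc["runningCount"])
    for event in svc["events"][:5]:
        # e.g. "(service someapp-service) has started 1 tasks: ..."
        print(event["createdAt"], event["message"])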

Connect to MongoDB in separate docker container (AWS ECS)

寵の児 submitted on 2020-01-05 04:17:08
Question: I am using AWS ECS and have one container for my frontend (a Node app) and one for my backend (a Mongo database). The mongo container exposes port 27017, but I cannot figure out how to connect to it from my frontend container. If I try to connect to the db using 'mongodb://localhost:27017/db_name' I get an ECONNREFUSED error. I have a service running for both of these task definitions, with an ALB for the frontend. I don't have them in the same task definition because it doesn't seem optimal to have…
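
One common fix is to give the mongo service a stable DNS name via ECS Service Discovery (AWS Cloud Map), since localhost only resolves to containers in the same task. A minimal sketch, shown in Python with pymongo for brevity (the same URI shape applies to the Node driver); mongo.local is a hypothetical Cloud Map name:

    from pymongo import MongoClient

    # "mongo.local" is a hypothetical name registered through ECS Service
    # Discovery; "localhost" only reaches containers in the same task.
    client = MongoClient("mongodb://mongo.local:27017/db_name",
                         serverSelectionTimeoutMS=3000)
    client.admin.command("ping")  # raises if the backend is unreachable
    print("ping ok")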

How to ensure a Docker image is updated on AWS ECS?

风流意气都作罢 submitted on 2020-01-02 06:41:18
Question: I use Docker Hub to store a private Docker image. The repository has a webhook that, once the image is updated, calls a service I built to:

- update the ECS task definition
- update the ECS service
- deregister the old ECS task definition

The service runs as expected. After it runs, ECS creates a new task with the new task definition, stops the task with the old task definition, and the service comes back up with the new definition. The problem is that the Docker image is not updated; once the…
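
If the task definition pins a mutable tag such as :latest, registering a new revision does not by itself guarantee a fresh pull from Docker Hub. A minimal boto3 sketch (cluster and service names are hypothetical) that forces the service to replace its tasks, prompting the image to be pulled again:

    import boto3

    ecs = boto3.client("ecs")
    # forceNewDeployment starts replacement tasks even though the task
    # definition revision is unchanged, triggering a fresh image pull.
    ecs.update_service(
        cluster="my-cluster",          # hypothetical cluster name
        service="someapp-service",     # hypothetical service name
        forceNewDeployment=True,
    )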

Configuring Bitbucket Pipelines with Docker to connect to AWS

牧云@^-^@ submitted on 2020-01-02 06:40:34
Question: I am trying to set up Bitbucket Pipelines to deploy to ECS as described here: https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html. These instructions say how to push to Docker Hub, but I want to push the image to Amazon's image repository (ECR). I have set AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID in my Bitbucket parameters list, and I can run these commands locally with no problems (the keys defined in ~/.aws/credentials). However, I keep getting the error 'no basic auth credentials'. I…
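
The 'no basic auth credentials' error typically means docker push ran without first logging in to ECR; the login token has to be fetched from the ECR API. A minimal boto3 sketch (the region is an assumption) that derives the docker login command from that token:

    import base64
    import boto3

    ecr = boto3.client("ecr", region_name="us-east-1")  # region is an assumption
    auth = ecr.get_authorization_token()["authorizationData"][0]
    # The token decodes to "AWS:<password>".
    user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":", 1)
    registry = auth["proxyEndpoint"]
    print(f"docker login -u {user} -p {password} {registry}")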

Configure amazon-ecs slave plugin using Groovy on Jenkins

北城以北 submitted on 2020-01-01 15:09:12
Question: I'm trying to configure the amazon-ecs-plugin for Jenkins using an init.groovy script, but couldn't find any docs on it. I'm new to Groovy-based configuration automation. I tried to list all the properties using:

    import jenkins.model.*
    import com.cloudbees.jenkins.plugins.amazonecs.*

    ECSCloud.metaClass.properties.each { println it.name + ":\t" + it.type }

The output:

    regionName: class java.lang.String
    searchName: class java.lang.String
    slaveTimoutInSeconds: int
    searchIndex: interface hudson.search.SearchIndex…

ECS Service - Automating deploy with new Docker image

我的梦境 submitted on 2019-12-31 20:31:24
Question: I want to automate the deployment of my application by having my ECS service launch with the latest Docker image. From what I've read, the way to deploy a new image version is as follows:

1. Create a new task revision (after updating the image in your Docker repository).
2. Update the service and specify the new revision.

This seems to work, but I want to do all of this through the CLI so I can script it. Step 2 seems easy enough to do through the AWS CLI with update-service, but I don't see a way to do step 1…
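
For step 1, one scriptable approach is to read the current task definition, swap the image, and register the result as a new revision. A minimal boto3 sketch; the family, image tag, cluster, and service names are hypothetical:

    import boto3

    ecs = boto3.client("ecs")

    # Step 1: re-register the current definition with a new image tag.
    td = ecs.describe_task_definition(taskDefinition="someapp")["taskDefinition"]
    containers = td["containerDefinitions"]
    containers[0]["image"] = "someaccounthub/someapp:v2"  # hypothetical tag
    new = ecs.register_task_definition(
        family=td["family"],
        containerDefinitions=containers,
    )["taskDefinition"]

    # Step 2: point the service at the new revision.
    ecs.update_service(
        cluster="my-cluster",                                # hypothetical
        service="someapp-service",                           # hypothetical
        taskDefinition=f"{new['family']}:{new['revision']}",
    )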

How to translate docker-compose.yml to Dockerrun.aws.json for Django

流过昼夜 submitted on 2019-12-31 19:25:00
Question: I am following the instructions at https://docs.docker.com/compose/django/ to get a basic dockerized Django app going. I am able to run it locally without a problem, but I am having trouble deploying it to AWS using Elastic Beanstalk. After reading here, I figured that I need to translate docker-compose.yml into Dockerrun.aws.json for it to work. The original docker-compose.yml is:

    version: '2'
    services:
      db:
        image: postgres
      web:
        build: .
        command: python manage.py runserver 0.0.0.0:8000
        volumes:…
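
A rough mechanical translation is possible because Dockerrun.aws.json (version 2) mirrors ECS container definitions, with one caveat: a build: service has no equivalent, since Elastic Beanstalk only runs pre-built images, so the web image would first have to be pushed to a registry. A minimal sketch under those assumptions (memory values and the fallback image name are placeholders; PyYAML is required):

    import json
    import yaml  # PyYAML

    compose = yaml.safe_load(open("docker-compose.yml"))
    dockerrun = {
        "AWSEBDockerrunVersion": 2,
        "containerDefinitions": [
            {
                "name": name,
                # "build: ." has no Dockerrun equivalent; a pre-built,
                # pushed image (hypothetical name here) must be used.
                "image": svc.get("image", f"myrepo/{name}:latest"),
                "essential": True,
                "memory": 256,  # assumed; Dockerrun requires a memory value
            }
            for name, svc in compose["services"].items()
        ],
    }
    with open("Dockerrun.aws.json", "w") as f:
        json.dump(dockerrun, f, indent=2)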

How can I connect my autoscaling group to my ECS cluster?

天大地大妈咪最大 submitted on 2019-12-31 08:35:29
Question: In all tutorials for ECS you need to create a cluster and, after that, an autoscaling group that will spawn instances. Somehow, in all these tutorials the instances magically show up in the cluster, but no one gives a hint about what connects the autoscaling group and the cluster. My autoscaling group spawns instances as expected, but they just don't show up in my ECS cluster, which holds my docker definitions. Where is the connection I'm missing? Answer 1: I was struggling with this for a while. The key…
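
The missing link is usually the instance user data: the ECS agent on each instance reads ECS_CLUSTER from /etc/ecs/ecs.config and registers with that cluster (it falls back to the default cluster otherwise), and the instance also needs an instance profile carrying the ECS instance role. A minimal boto3 sketch of a launch configuration with that user data; the AMI, names, and instance type are hypothetical:

    import boto3

    # The ECS agent joins the cluster named in /etc/ecs/ecs.config.
    user_data = """#!/bin/bash
    echo ECS_CLUSTER=my-cluster >> /etc/ecs/ecs.config
    """

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="ecs-cluster-lc",    # hypothetical
        ImageId="ami-0123456789abcdef0",             # hypothetical ECS-optimized AMI
        InstanceType="t2.micro",
        IamInstanceProfile="ecsInstanceRole",        # role the ECS agent needs
        UserData=user_data,
    )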

NLB Target Group TCP HealthChecks

牧云@^-^@ submitted on 2019-12-25 17:57:09
Question: We are using the ECS Fargate container service for our Ruby on Rails containers. We have implemented an NLB with HTTP health checks in the target group but, as we know, "NLB Target Group health checks are out of control". This is consuming up to 8% of each container's CPU, so we are thinking of migrating from HTTP health checks to TCP health checks. Can anyone comment on how a TCP health check works? Does it only connect to the port for a health check, or does it actually hit the API? Ref: NLB Target Group…
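
On what a TCP health check does: it only completes a TCP handshake against the target port and then closes the connection; no HTTP request is sent, so the Rails stack never runs. A minimal sketch of the equivalent probe (the host and port are hypothetical):

    import socket

    def tcp_health_check(host: str, port: int, timeout: float = 2.0) -> bool:
        # A TCP check succeeds as soon as the handshake completes; it never
        # sends an HTTP request, so application code is not invoked.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(tcp_health_check("127.0.0.1", 3000))  # port is hypothetical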