amazon-ec2

Airbnb Airflow using all system resources

拈花ヽ惹草 submitted on 2020-05-25 03:23:33
Question: We've set up Airbnb/Apache Airflow for our ETL using LocalExecutor, and as we've started building more complex DAGs, we've noticed that Airflow has started using up incredible amounts of system resources. This is surprising because we mostly use Airflow to orchestrate tasks that run on other servers, so Airflow DAGs spend most of their time waiting for them to complete; there's no actual execution happening locally. The biggest issue is that Airflow seems to use up 100% of CPU…
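A frequent culprit with LocalExecutor is the scheduler's tight DAG-parsing loop rather than the tasks themselves. One mitigation, sketched here under the assumption of an Airflow 1.x install (check that these option names exist in your version), is to throttle the scheduler in airflow.cfg:

```ini
[scheduler]
# Re-parse DAG files at most once per minute instead of continuously
min_file_process_interval = 60
# Sleep between scheduler loop iterations (seconds); reduces busy-waiting
processor_poll_interval = 5
```

With these set, the scheduler idles between passes instead of pinning a core re-parsing unchanged DAG files.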

Route 53 Record Set on Different Port

杀马特。学长 韩版系。学妹 submitted on 2020-05-24 19:19:34
Question: I'm a Ruby dev and I just started to learn some Node.js. I'm running an instance on AWS to host my Rails apps with Passenger + nginx listening on port 80. Now I would like to host a Node.js app on the same instance (t1.micro) and have it listen on port 8000. How can I use Route 53 to create a record set that points subdomain.domain.com to my.ip:8000? I already tried setting an IPv4 record pointing to my.ip:8000 with no success. Any idea what I'm doing wrong? Can I use nginx to serve my…
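DNS records cannot carry a port: Route 53 only maps names to IP addresses, which is why the IPv4 record with a port failed. The usual fix is exactly what the asker guesses at the end: let the nginx already listening on port 80 reverse-proxy the subdomain to the Node app. A minimal sketch (server name and upstream port taken from the question; verify directives against your nginx version):

```nginx
server {
    listen 80;
    server_name subdomain.domain.com;      # the A record points here; no port needed

    location / {
        proxy_pass http://127.0.0.1:8000;  # the Node.js app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The existing Rails/Passenger server block keeps its own server_name, so nginx routes by Host header and both apps share port 80.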

What's the target group port for, when using Application Load Balancer + EC2 Container Service

≡放荡痞女 submitted on 2020-05-24 17:20:32
Question: I'm trying to set up an ALB which listens on port 443, load balancing to ECS Docker containers on random ports. Let's say I have 2 container instances of the same task definition, listening on ports 30000 and 30001. When I try to create a target group in the AWS EC2 Management Console, there's a "port" input field with a 1-65535 range. What number should I put there? And when I try to create a new service in the AWS EC2 Container Service console, together with a new target group to connect to a…
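With dynamic host port mapping (container host port 0), ECS registers each running task with the target group using that task's ephemeral port, overriding whatever port the target group was created with; the value in that input field is effectively just a default for targets registered without an explicit port. A sketch of creating such a target group with the AWS CLI (names and the VPC ID are placeholders; this is an infrastructure config sketch, not a tested deployment):

```shell
# The --port value here is only a default; ECS supplies the real
# per-task port (e.g. 30000, 30001) when it registers each target.
aws elbv2 create-target-group \
  --name ecs-app-tg \
  --protocol HTTP \
  --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type instance
```

So any valid value works in practice; 80 is a common choice simply because it is unsurprising.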

User is reporting that they're unable to SSH into an EC2 instance in AWS

不羁的心 submitted on 2020-05-16 22:36:54
Question: The users are doing the following: $ ssh -i /Users/user1/key.pem centos@10.12.10.10 The error message received while trying to connect is: centos@10.12.10.10: Permission denied (publickey,gssapi-keyex,gssapi-with-mic). Answer 1: A novel solution to this particular problem was presented by AWS Support, and I felt compelled to share it here, since I hadn't seen it previously. In the past, the method most of my colleagues have used…
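Before anything novel, the usual first checks for this error are the private key's file permissions and the verbose client log. A sketch using a stand-in key file so the permission step is reproducible (`stat -c` assumes GNU coreutils; on the real machine you would run chmod on /Users/user1/key.pem itself):

```shell
# Stand-in for the real key; ssh silently skips private keys
# that are readable by group or other.
KEY=demo-key.pem
touch "$KEY"
chmod 400 "$KEY"
stat -c %a "$KEY"    # prints 400

# Against the real host, -v shows which keys are offered and why
# the server rejects them:
# ssh -v -i /Users/user1/key.pem centos@10.12.10.10
rm -f "$KEY"
```

If the permissions are already correct, the next things to compare are the login user (centos vs ec2-user vs ubuntu, depending on the AMI) and whether the key pair actually matches the one the instance was launched with.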

AWS batch to always launch new ec2 instance for each job

帅比萌擦擦* submitted on 2020-05-16 20:48:31
Question: I have set up a Batch environment with a managed compute environment, a job queue, and job definitions. The actual job (a Docker container) does a lot of video encoding and hence uses up most of the CPU. The process itself takes a few minutes (close to 5 minutes to get all the encoders initialized). Ideally I would want one job per instance so that the encoders are not CPU-starved. My issue is that when I launch multiple jobs at the same time, or close enough together, AWS Batch decides to launch both of them on the same…
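One common way to get one job per instance in a managed compute environment is to have each job request all the vCPUs the chosen instance type offers, so Batch cannot pack two jobs onto one host. A sketch of a job definition for a 4-vCPU instance type (the image URI and memory figure are placeholders; older Batch job definitions use the vcpus/memory fields shown here, newer ones express the same thing via resourceRequirements):

```json
{
  "jobDefinitionName": "video-encode",
  "type": "container",
  "containerProperties": {
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/encoder:latest",
    "vcpus": 4,
    "memory": 7000
  }
}
```

Restricting the compute environment to that single instance type makes the pairing deterministic: every instance has exactly the capacity one job consumes.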

Why does Apache airflow fail with the command: 'airflow initdb'?

时间秒杀一切 submitted on 2020-05-15 17:44:31
Question: I am trying to install Airflow on an AWS EC2 instance. The process seems to be pretty well documented by various sources on the web; however, I have run into a problem after I pip install airflow: I get the error below when I execute the command airflow initdb: [2019-09-25 13:22:02,329] {__init__.py:51} INFO - Using executor SequentialExecutor Traceback (most recent call last): File "/home/cloud-user/.local/bin/airflow", line 22, in <module> from airflow.bin.cli import CLIFactory File "…
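Import-time crashes right after a bare pip install of Airflow are usually caused by pip resolving transitive dependencies to versions newer than the ones the Airflow release was tested against. A sketch of the constraint-file install the Airflow project recommends for this reason (the version numbers are examples; match AIRFLOW_VERSION to the release you want and PYTHON_VERSION to your interpreter):

```shell
AIRFLOW_VERSION=1.10.15
PYTHON_VERSION=3.7
pip install "apache-airflow==${AIRFLOW_VERSION}" \
  --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
airflow initdb
```

The constraint file pins every dependency to a combination the release was actually tested with, which avoids the mismatched-library tracebacks at import time.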