amazon-ec2

RDS to S3 using pg_dump directly (without intermediary)

独自空忆成欢 submitted on 2020-02-18 09:47:44
Question: Is it possible to run pg_dump against RDS, or straight into S3, without using an intermediary such as EC2 to execute the command?

Answer 1: You should be able to access it as long as your DB security group allows external access to port 5432 (the default for Postgres). Then you can simply run: pg_dump -h <database_host> -U <username> <database>. Keep in mind that your connection will not be encrypted. AFAIK, there is no interface in AWS between RDS and S3, so you would have to use an intermediary to transfer the data to S3.
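The intermediary workflow the answer describes can be sketched as a single pipeline, run on an EC2 host that can reach the RDS endpoint and has the AWS CLI configured (the bucket name and the bracketed values are placeholders):

```shell
# Stream the dump through gzip straight into S3; "-" tells `aws s3 cp`
# to read from stdin, so nothing is written to local disk.
pg_dump -h <database_host> -U <username> <database> \
  | gzip \
  | aws s3 cp - "s3://my-backup-bucket/<database>-$(date +%F).sql.gz"
```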

Multiple availability zones with terraform on AWS

半腔热情 submitted on 2020-02-18 05:07:17
Question: The VPC I'm working on has 3 logical tiers: Web, App and DB. For each tier there is one subnet in each availability zone, for a total of 6 subnets in the region I'm using. I'm trying to create EC2 instances using a module and the count parameter, but I don't know how to tell Terraform to use the two subnets of the App tier. An additional constraint I have is to use static IP addresses (or some other way to get a deterministic private name). I'm playing around with the resource resource "aws_instance" "app
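One common pattern for this, sketched here in modern (0.12+) Terraform syntax with assumed variable names (var.app_subnet_ids and var.app_subnet_cidrs are hypothetical inputs holding the two App-tier subnet IDs and their CIDR blocks), is to spread the instances across the subnet list with count:

```hcl
variable "app_subnet_ids"   { type = list(string) } # the two App-tier subnet IDs
variable "app_subnet_cidrs" { type = list(string) } # their matching CIDR blocks

resource "aws_instance" "app" {
  count         = 2
  ami           = "ami-xxxxxxxx"    # placeholder
  instance_type = "t3.micro"
  subnet_id     = element(var.app_subnet_ids, count.index)

  # Deterministic private address: host number 10 inside each App subnet's CIDR.
  private_ip = cidrhost(var.app_subnet_cidrs[count.index], 10)
}
```

element() wraps around the list, so the same expression also works if count is later raised above the number of subnets.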

Celery tasks received but not executing

北城以北 submitted on 2020-02-18 04:59:28
Question: I have Celery tasks that are received but will not execute. I am using Python 2.7 and Celery 4.0.2. My message broker is Amazon SQS. This is the output of celery worker:

$ celery worker -A myapp.celeryapp --loglevel=INFO
[tasks]
. myapp.tasks.trigger_build
[2017-01-12 23:34:25,206: INFO/MainProcess] Connected to sqs://13245:**@localhost//
[2017-01-12 23:34:25,391: INFO/MainProcess] celery@ip-111-11-11-11 ready.
[2017-01-12 23:34:27,700: INFO/MainProcess] Received task: myapp.tasks.trigger_build
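As a first diagnostic step (a sketch, not the confirmed fix for this question), it can help to rule out the prefork worker pool, which is a frequent culprit when tasks are received but never executed:

```shell
# Run a single worker with the simplest pool implementation and debug logging.
# If tasks execute under "solo", the problem lies in the prefork pool/forking,
# not in the SQS transport or the task code.
celery worker -A myapp.celeryapp --loglevel=DEBUG -P solo
```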

AWS EC2 Connection closed by when trying ssh into instance

拜拜、爱过 submitted on 2020-02-17 18:00:13
Question: Recently I set up a new EC2 instance. The next day I was no longer able to connect to my instance via ssh, although I could connect and disconnect the day before; I swear I did nothing. Here is the ssh debug info:

ssh -i webserver.pem -v ubuntu@my.elastic.ip
OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: Connecting to my.elastic.ip [my.elastic.ip] port 22.
debug1: Connection established.
debug1: identity
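A few standard checks for this situation (sketched with placeholders): key-file permissions, maximum client verbosity, and the instance's console output, which can be read server-side even when SSH is down:

```shell
chmod 400 webserver.pem                         # ssh refuses world-readable keys
ssh -vvv -i webserver.pem ubuntu@my.elastic.ip  # -vvv shows the full handshake

# Inspect the boot log without needing SSH at all (<instance-id> is a placeholder).
aws ec2 get-console-output --instance-id <instance-id> --output text
```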

ECS service did not stabilize

坚强是说给别人听的谎言 submitted on 2020-02-05 08:26:15
Question: The linked answer did not have pointers for this problem, because the rollback deletes the stack. Below is the CloudFormation template, written to launch a Jenkins Docker container in an ECS container instance (DesiredCount: 1) in the default public subnet. The Jenkins Docker image is publicly available on Docker Hub. We used the ECS-optimised AMI (ami-05958d7635caa4d04) in the ca-central-1 region, which runs Docker version 18.06.1. { "AWSTemplateFormatVersion": "2010-09-09",
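Two diagnostic sketches (stack, cluster, and service names are placeholders): disabling rollback keeps the failed stack around for inspection, and the ECS service events usually state why the tasks never stabilized:

```shell
# Keep the failed stack instead of rolling it back and deleting it.
aws cloudformation create-stack \
  --stack-name jenkins-ecs \
  --template-body file://template.json \
  --capabilities CAPABILITY_IAM \
  --disable-rollback

# Read the most recent service events, which usually explain the failure
# (e.g. failing health checks, port conflicts, no registered container instances).
aws ecs describe-services \
  --cluster <cluster-name> \
  --services <service-name> \
  --query 'services[0].events[:5].message'
```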

How to deploy create-react-app in AWS EC2

不想你离开。 submitted on 2020-02-03 05:29:09
Question: I am using react-router, and I want to host the app on AWS EC2. How do I deploy the app and keep it running permanently in the background? Or let me know if there is another way.

Answer 1: You can use Amazon S3. Run npm run build on your local machine, upload the files to an S3 bucket, and choose static website hosting. https://s3.console.aws.amazon.com/s3/buckets

Answer 2: https://medium.com/@spiromifsud/deploying-an-amazon-ec2-build-server-for-reactjs-with-jenkins-and-github-3195d2242aae I deployed my app on EC2 with the
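Answer 1's S3 route can be sketched in two commands (the bucket name is a placeholder; the bucket needs static website hosting enabled):

```shell
npm run build                                         # produces the static build/ folder
aws s3 sync build/ s3://my-react-app-bucket/ --delete # mirror it to the bucket
```

Because the app uses react-router, the bucket's static-hosting error document typically also has to point at index.html, so that deep links are resolved by the client-side router instead of returning a 404.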

AWS Load Balancing a Node.js App on port 3000

跟風遠走 submitted on 2020-02-03 04:45:09
Question: I've got a Node.js Express web app that uses the default port 3000 and responds fine on an Ubuntu EC2 instance via its Elastic IP. I'm trying to set up the load balancing built into AWS and can't seem to get a health check to pass. Setup: two Ubuntu servers that serve the app fine on port 3000. I set the load balancer listeners for port 80 to route to instance port 3000, and also tried routing 3000 to 3000. I added the amazon-elb/amazon-elb-sg security group to my instance security groups just in case
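For the classic ELB described here, the health check has to probe the instance port the app actually listens on, not the listener port. A sketch with the AWS CLI (the load balancer name is a placeholder):

```shell
aws elb configure-health-check \
  --load-balancer-name <elb-name> \
  --health-check Target=HTTP:3000/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
```

The instance security group also has to allow inbound traffic on port 3000 from the load balancer's security group, not just from your own IP.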

How to execute shell_exec on EC2 Linux

江枫思渺然 submitted on 2020-02-02 15:34:07
Question: I am running an API on an EC2 Linux instance. I am trying to execute a Python script from a PHP file. The path to the PHP file is /var/www/html/droptop/api/event/test.php . The path to the Python script is /var/www/html/droptop/blacklist/profanity.py . The Python script receives two strings and checks whether either of the two strings contains objectionable content via Profanity Check. It returns 0 if no objectionable content was found, otherwise it returns 1. However, shell_exec always
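A useful first step (a sketch; the web-server user name is an assumption) is to run the exact command shell_exec would run, as the web-server user, to separate permission and PATH problems from problems in the PHP code:

```shell
# "apache" is an assumption; depending on the distro it may be "www-data".
# Paths are the ones from the question.
sudo -u apache /usr/bin/python /var/www/html/droptop/blacklist/profanity.py "first string" "second string"
echo "exit code: $?"
```

Inside PHP, appending 2>&1 to the command string passed to shell_exec makes Python's error output part of the returned string, since shell_exec captures only stdout.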