amazon-ec2

Nodejs with nginx CSRF verification failed. Request aborted

拥有回忆 submitted on 2020-01-16 12:06:30
Question: I am new to nginx. I managed to run multiple Node.js projects on a single server on different ports, and I use my domain to call my Node.js APIs. When I try to call an API from Android, an error is thrown; if I replace the domain with the IP address, all API calls work fine. With the domain-name API call it shows: Forbidden (403) CSRF verification failed. Request aborted. You are seeing this message because this site requires a CSRF cookie when submitting forms. This cookie is required for security reasons,
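A common cause of this behaviour is the proxy not forwarding the original Host header, so the backend sees a hostname that does not match its CSRF/referer checks or the cookie domain. A minimal nginx reverse-proxy sketch along those lines (the upstream port 3000 and the server_name api.example.com are assumptions, not taken from the question):

    server {
        listen 80;
        server_name api.example.com;                       # assumed domain

        location / {
            proxy_pass http://127.0.0.1:3000;              # assumed Node.js port
            proxy_set_header Host $host;                   # forward the original Host header
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }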

Unable to debug using PhpStorm with EC2

好久不见. submitted on 2020-01-16 08:40:33
Question: I am trying to configure PhpStorm on my local machine while the repository is on Amazon EC2. I also used SSH tunnelling, but I am unable to debug with the remote debugger. Here are my configuration files; I have placed the same configuration in /etc/php-5.6.d/xdebug.ini and /etc/php.ini:

    zend_extension=/usr/lib64/php/5.6/modules/xdebug.so
    xdebug.remote_enable=1        ; Enable xdebug
    xdebug.remote_autostart=Off   ; Only start xdebug on demand, not on every request
    xdebug.remote_host=127.0.0.1  ; This is unused if the next
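For reference, remote Xdebug over SSH typically relies on a reverse tunnel from the EC2 host back to the IDE. A sketch, assuming Xdebug 2.x listening on port 9000 and a key file at ~/.ssh/mykey.pem with host ec2-host (all assumptions, not from the question):

    # Run on the local machine: forward the server's port 9000 back to the IDE
    ssh -R 9000:localhost:9000 -i ~/.ssh/mykey.pem ec2-user@ec2-host
    # With xdebug.remote_host=127.0.0.1 on the server, Xdebug connects to the
    # server's local port 9000, and the tunnel carries it to PhpStorm listening
    # on port 9000 on the local machine.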

InvalidClientTokenID error when running Terraform Plan/Apply

假装没事ソ submitted on 2020-01-16 01:47:33
Question: I'm setting up an HA cluster in AWS using Terraform and user data. My main.tf looks like this:

    provider "aws" {
      access_key = "access_key"
      secret_key = "secret_key"
    }

    resource "aws_instance" "etcd" {
      ami           = "${var.ami}" // coreOS 17508
      instance_type = "${var.instance_type}"
      key_name      = "${var.key_name}"
      key_path      = "${var.key_path}"
      count         = "${var.count}"
      region        = "${var.aws_region}"
      user_data     = "${file("cloud-config.yml")}"
      subnet_id     = "${aws_subnet.k8s.id}"
      private_ip    = "${cidrhost("10.43.0.0/16
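InvalidClientTokenID usually means the access key/secret pair is invalid, inactive, or still a placeholder rather than a real credential. One sketch of a provider block that reads credentials from the standard ~/.aws/credentials file instead of hard-coding them (the region and profile name are assumptions; note also that region belongs on the provider, not on aws_instance):

    provider "aws" {
      region  = "us-east-1"   # assumed region
      profile = "default"     # credentials read from ~/.aws/credentials
    }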

ECS cluster cannot run tasks in private subnet when using EC2

橙三吉。 submitted on 2020-01-16 01:19:10
Question: I have a task definition that is configured to use the awsvpc network mode. According to this: "Only private subnets are supported for the awsvpc network mode. Because tasks do not receive public IP addresses, a NAT gateway is required for outbound internet access, and inbound internet traffic should be routed through a load balancer." I set up a NAT gateway in a public subnet (one that has an internet gateway) and configured the route table in the private subnet to send traffic to the NAT gateway. But when I want to
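For context, the routing described above would look roughly like this in Terraform (resource names, the VPC reference, and the subnet reference are illustrative assumptions):

    # Private subnet route table: default route goes to the NAT gateway,
    # which itself lives in a public subnet routed through the internet gateway.
    resource "aws_route_table" "private" {
      vpc_id = aws_vpc.main.id                    # assumed VPC resource name

      route {
        cidr_block     = "0.0.0.0/0"
        nat_gateway_id = aws_nat_gateway.nat.id   # assumed NAT gateway resource name
      }
    }

    resource "aws_route_table_association" "private" {
      subnet_id      = aws_subnet.private.id      # assumed private subnet
      route_table_id = aws_route_table.private.id
    }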

Get volume id from newly created ebs volume using ansible

我的梦境 submitted on 2020-01-16 01:12:06
Question: I used Ansible's ec2_vol module to create an EBS volume. I looked at the source code and found that it internally calls boto's create_volume() method with the user-specified parameters. I want to register the return value of the ec2_vol module and get the volume_ids of the newly created volumes. As of now my playbook looks like:

    - name: Attach a volume to previously created instances
      local_action: ec2_vol instance={{item.id}} volume_size=5 aws_access_key={{aa_key}} aws_secret_key={{as_key}} region={{region}}
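Registering the looped result and reading back the volume ids would look roughly like this (the register variable name, the instance list, and the debug task are illustrative; volume_id is the key the ec2_vol module returns):

    - name: Attach a volume to previously created instances
      local_action: ec2_vol instance={{ item.id }} volume_size=5 region={{ region }}
      with_items: "{{ ec2.instances }}"          # assumed list of created instances
      register: created_vols                     # one result per loop iteration

    - name: Show the ids of the newly created volumes
      debug:
        msg: "{{ item.volume_id }}"
      with_items: "{{ created_vols.results }}"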

puma looks OK, but cycles through life, defunct, life, defunct, every 10 seconds or so

旧城冷巷雨未停 submitted on 2020-01-15 14:30:51
Question: I am running a Rails app on an EC2 instance running Ubuntu, using nginx and Puma. I believe I have nginx and Puma configured correctly; at least, they were working fine this morning. Now, after restarting the EC2 instance, I cannot for the life of me get Puma to run again properly. nginx is set up properly for an SSL connection, and Puma appears to start properly. When it starts, I see:

    $ RAILS_ENV=production bundle exec puma -C config/puma.rb
    [1501] Puma starting in cluster mode...
    [1501] *
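For context, a minimal cluster-mode config/puma.rb looks roughly like this (values and the socket path are illustrative, not taken from the question). Workers that die right after fork are respawned by the master process, which produces exactly this alive/defunct cycling:

    # Minimal cluster-mode Puma config (illustrative values)
    workers 2                        # forked worker processes
    threads 1, 5                     # min/max threads per worker
    preload_app!                     # load the app once in the master before forking
    bind "unix:///home/deploy/app/shared/tmp/sockets/puma.sock"  # assumed socket used by nginx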

Sidekiq Broken Pipe Error

只谈情不闲聊 submitted on 2020-01-15 10:43:15
Question: I am attempting to migrate from Heroku to AWS, but my Sidekiq jobs keep failing with the following error:

    Errno::EPIPE: Broken pipe @ io_write - <STDOUT>

I can successfully run jobs from the console using perform_now, and everything works just fine on Heroku, so I am presuming the issue lies somewhere with my AWS setup. I have seen references to improper daemonization around Stack Overflow and GitHub but am not sure how to solve the problem. Right now I am launching my processes with the
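An EPIPE on <STDOUT> typically means the process was daemonized and its stdout is no longer attached when the logger writes to it. One common remedy is to run Sidekiq under a supervisor that keeps stdout captured; a sketch of a systemd unit along those lines (paths, the user, and the bundle location are assumptions, not taken from the question):

    # /etc/systemd/system/sidekiq.service -- illustrative unit; paths and user are assumptions
    [Unit]
    Description=Sidekiq background worker
    After=network.target

    [Service]
    Type=simple
    User=deploy
    WorkingDirectory=/home/deploy/app/current
    Environment=RAILS_ENV=production
    ExecStart=/usr/local/bin/bundle exec sidekiq -e production
    # Keep STDOUT/STDERR attached so logging never hits a closed pipe
    StandardOutput=journal
    StandardError=journal
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target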