amazon-vpc

How should a .dockercfg file be hosted in a Mesosphere-on-AWS setup so that only Mesosphere can use it?

Submitted by 与世无争的帅哥 on 2019-12-06 03:59:54
Question: We have set up a test cluster with Mesosphere on AWS, in a private VPC. Some of our Docker images are public, and those are easy enough to deploy. However, most of our services are private images, hosted on the Docker Hub private plan, which require authentication to access. Mesosphere is capable of private registry authentication, but it achieves this in a not-exactly-ideal way: an HTTPS URI to a .dockercfg file needs to be specified in every Mesos/Marathon task definition. As the title…
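For context, the mechanism the question refers to looks roughly like the Marathon app definition below. This is a hedged sketch: the app id, image name, and URL are placeholders, and the usual way to restrict access is to serve the .dockercfg from an S3 bucket whose bucket policy only allows requests arriving through the VPC's S3 endpoint (or to hand out short-lived pre-signed URLs).

```json
{
  "id": "/private-service",
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "myorg/private-image:latest" }
  },
  "uris": ["https://internal-config.example.com/.dockercfg"]
}
```

Mesos downloads each entry in "uris" into the task sandbox before launch, which is how the Docker credentials become available to the executor.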

DNS problem on AWS EKS when running in private subnets

Submitted by 冷暖自知 on 2019-12-05 20:34:59
Question: I have an EKS cluster set up in a VPC. The worker nodes are launched in private subnets. I can successfully deploy pods and services. However, I'm not able to perform DNS resolution from within the pods. (It works fine on the worker nodes, outside the containers.) Troubleshooting with https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ results in the following from nslookup (timeout after a minute or so):

Server:    172.20.0.10
Address 1: 172.20.0.10
nslookup: can't…
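A common cause of this exact symptom is a worker-node security group that does not allow DNS traffic to the kube-dns pods running on other nodes. A hedged sketch of the fix, assuming a single node security group (the group ID is a placeholder):

```
aws ec2 authorize-security-group-ingress \
  --group-id sg-0workernodes0000000 \
  --protocol udp --port 53 \
  --source-group sg-0workernodes0000000

aws ec2 authorize-security-group-ingress \
  --group-id sg-0workernodes0000000 \
  --protocol tcp --port 53 \
  --source-group sg-0workernodes0000000
```

DNS lookups use UDP 53 by default but fall back to TCP 53 for large responses, so both rules are needed.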

Is it possible to launch an RDS instance without a VPC?

Submitted by 夙愿已清 on 2019-12-05 18:19:08
I'm trying to insert records into a Postgres database in RDS from a Lambda function. My Node.js Lambda function works correctly when run locally, but the database connection times out when run in AWS. I've read several articles and tutorials which suggest that AWS Lambda functions cannot access RDS instances that are inside a VPC. For example: http://ashiina.github.io/2015/01/amazon-lambda-first-impression/ Unfortunately, it seems I am unable to create an RDS instance that exists outside of a VPC. In this dropdown I would expect to be able to select an option for "No VPC" or something along…
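Note that the linked article predates Lambda's VPC support (added in late 2015). Rather than trying to move RDS out of the VPC, which is no longer possible for current instance classes, the usual fix is to attach the Lambda function to the database's VPC. A hedged sketch with placeholder function name, subnet, and security group IDs:

```
aws lambda update-function-configuration \
  --function-name insert-records \
  --vpc-config SubnetIds=subnet-aaa111,subnet-bbb222,SecurityGroupIds=sg-0db00000000000000
```

The database's security group must then allow inbound traffic on port 5432 from the security group given to the Lambda function.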

App running in Docker on EB refuses to connect to itself

Submitted by ﹥>﹥吖頭↗ on 2019-12-05 16:57:49
I have a Play 2 web application, which I deploy to Elastic Beanstalk using Docker. In this web app, I start an Akka cluster. The startup procedure involves adding all nodes in the autoscaling group as seed nodes (including itself). On the first deploy to EB I specify deployment to a VPC (I select only one availability zone). When I run the app and start the cluster, I get the following message:

AssociationError [akka.tcp://cluster@localhost:2551] -> [akka.tcp://cluster@172.31.13.25:2551]: Error [Invalid address: akka.tcp://cluster@172.31.13.25:2551] [akka.remote.InvalidAssociation: Invalid…
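The error message shows the local node advertising itself as cluster@localhost:2551 while its peers address it by its VPC IP (172.31.13.25), so incoming associations are rejected. A hedged sketch of the usual remedy in application.conf, assuming the instance's private IP is resolved outside the container (e.g. from EC2 instance metadata) and injected as an environment variable; PRIVATE_IP is a placeholder name:

```hocon
akka.remote.netty.tcp {
  # Advertise the instance's private IP instead of localhost so
  # other autoscaling-group members can associate back to this node.
  hostname = ${?PRIVATE_IP}   # e.g. 172.31.13.25
  port = 2551
}
```

When the app runs inside Docker, the advertised hostname (the VPC IP) and the bind address (the container's interface) generally need to be configured separately.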

AWS Lambda times out connecting to RedShift

Submitted by ▼魔方 西西 on 2019-12-05 14:19:41
My Redshift cluster is in a private VPC. I've written the following AWS Lambda function in Node.js which should connect to Redshift (stripped down for this question):

'use strict';
console.log('Loading function');
const pg = require('pg');

exports.handler = (event, context, callback) => {
  var client = new pg.Client({
    user: 'myuser',
    database: 'mydatabase',
    password: 'mypassword',
    port: 5439,
    host: 'myhost.eu-west-1.redshift.amazonaws.com'
  });
  // connect to our database
  console.log('Connecting...');
  client.connect(function (err) {
    if (err) throw err;
    console.log('CONNECTED!!!');
  });
};

I keep getting Task…
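Networking aside (a Lambda outside the VPC cannot reach a private Redshift endpoint at all), the handler itself has two completion bugs: a thrown error inside the async callback never reaches Lambda's error handling, and on success the handler never invokes callback(), so the invocation idles until it times out. A minimal sketch of the corrected shape, using a local stand-in for pg so the pattern can be run anywhere (connect below is a placeholder, not pg's real API):

```javascript
function handler(event, context, callback) {
  connect(function (err, client) {
    if (err) return callback(err);   // report failure instead of throwing
    client.end();                    // release the connection
    callback(null, 'CONNECTED');     // signal completion so Lambda can return
  });
}

// Minimal stand-in that "connects" successfully on the next tick:
function connect(cb) { setImmediate(cb, null, { end: function () {} }); }

handler({}, {}, function (err, result) {
  console.log(err || result); // → CONNECTED
});
```

Routing both outcomes through callback is what lets Lambda finish the invocation instead of waiting for the event loop to drain.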

Can't connect to DynamoDB from my VPC-configured Lambda function

Submitted by ぃ、小莉子 on 2019-12-05 13:18:11
Question: I need to connect to ElastiCache and DynamoDB from a single Lambda function. My code is:

exports.handler = (event, context, callback) => {
  var redis = require("redis");
  var client;
  function connectRedisClient() {
    client = redis.createClient(6379, "dgdfgdfgdfgdfgdfgfd.use1.cache.amazonaws.com", { no_ready_check: true });
  }
  connectRedisClient();
  client.set('sampleKey', 'Hello World', redis.print);
  console.log("set worked");
  client.quit();
  var AWS = require("aws-sdk");
  var docClient = new AWS…
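The likely conflict: ElastiCache requires the Lambda to run inside the VPC, but a Lambda inside a VPC has no route to DynamoDB's public endpoint unless one is provided. One common fix is a gateway VPC endpoint for DynamoDB; a hedged sketch with placeholder VPC, route table, and region values:

```
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc123 \
  --service-name com.amazonaws.us-east-1.dynamodb \
  --route-table-ids rtb-0abc123
```

The alternative is placing the function in private subnets whose route tables point at a NAT gateway, which restores general outbound internet access.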

Invoking the lambda gets timed out after adding VPC configurations

Submitted by 痞子三分冷 on 2019-12-05 09:26:40
I am using the Serverless Framework for creating Lambdas. I created a simple Lambda function, which queries a MongoDB instance and returns the response. Initially, I created the Mongo instance with a public IP and had the Lambda access that instance over the public IP. It worked well. Now, in order to increase security, I added the VPC configuration to the Lambda. Here is my serverless.yml:

functions:
  graphql:
    handler: handler.graphql
    iamRoleStatements:
      - Effect: Allow
        Resource: "*"
        Action:
          - ec2:CreateNetworkInterface
          - ec2:DescribeNetworkInterfaces
          - ec2:DetachNetworkInterface
          - ec2…
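The timeout usually has nothing to do with the IAM statements: once a function is attached to a VPC it loses direct internet access, so it can no longer reach the Mongo instance's public IP. The subnets given to the function must be private subnets that route through a NAT gateway (or the function must reach Mongo over its private IP). A hedged sketch of the VPC block in serverless.yml, with placeholder IDs:

```yaml
provider:
  name: aws
  vpc:
    securityGroupIds:
      - sg-0abc123
    subnetIds:
      - subnet-aaa111   # private subnets whose route table
      - subnet-bbb222   # points at a NAT gateway
```

Placing the function in a public subnet does not help: Lambda ENIs never receive public IPs, so an internet gateway route is unusable from inside the function.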

Elastic Beanstalk: Migrate DB Security Group to VPC Security Group

Submitted by 六眼飞鱼酱① on 2019-12-05 02:49:29
Question: When trying to deploy my application, I recently got the following error:

ERROR: Service:AmazonCloudFormation, Message:Stack named 'awseb-e-123-stack' aborted operation. Current state: 'UPDATE_ROLLBACK_IN_PROGRESS' Reason: The following resource(s) failed to update: [AWSEBRDSDatabase].
ERROR: Updating RDS database named: abcdefg12345 failed Reason: DB Security Groups can no longer be associated with this DB Instance. Use VPC Security Groups instead.
ERROR: Failed to deploy application.

How do…
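One way to switch the EB-managed database over is to override the CloudFormation resource from the error message via an .ebextensions config file. This is a hedged sketch only: the resource name AWSEBRDSDatabase is taken from the error output above, the security group ID is a placeholder, and the exact set of properties EB lets you override should be verified against your environment before deploying.

```yaml
# .ebextensions/rds-vpc-sg.config
Resources:
  AWSEBRDSDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      VPCSecurityGroups:
        - sg-0123456789abcdef0
```

VPCSecurityGroups is the AWS::RDS::DBInstance property that replaces the deprecated DBSecurityGroups list for instances living in a VPC.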

Terraform throws “groupName cannot be used with the parameter subnet” or “VPC security groups may not be used for a non-VPC launch”

Submitted by 自古美人都是妖i on 2019-12-04 23:30:33
When trying to figure out how to configure an aws_instance with an AWS VPC, the following errors can occur:

* Error launching source instance: InvalidParameterCombination: The parameter groupName cannot be used with the parameter subnet status code: 400, request id: []

or

* Error launching source instance: InvalidParameterCombination: VPC security groups may not be used for a non-VPC launch status code: 400, request id: []

This is due to how a security group is associated with an instance. Without a subnet, it is OK to associate it using the security group's name:

resource "aws_instance" "server" { ...
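Conversely, as soon as subnet_id is set (i.e. a VPC launch), security groups must be referenced by ID via vpc_security_group_ids, not by name via security_groups. A minimal sketch with placeholder AMI, subnet, and group IDs:

```hcl
resource "aws_instance" "server" {
  ami                    = "ami-0abc123"
  instance_type          = "t2.micro"
  subnet_id              = "subnet-aaa111"

  # VPC launches take security group IDs, never names:
  vpc_security_group_ids = ["sg-0abc123"]
}
```

Mixing the two styles produces exactly the InvalidParameterCombination errors quoted above, depending on which direction the mismatch goes.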

Automatically add an entry in /etc/hosts file in newly launched amazon ec2 instance

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-04 21:27:36
Things I have done:

$ vi /etc/hosts

and added: IPAddress Hostname

I want to automate this process, so that every new instance I launch gets an entry in /etc/hosts.

Answer: I guess you need to add the host itself to /etc/hosts. Put this in the user data when you create a new EC2 instance:

#!/usr/bin/env bash
echo `ec2-metadata -o | cut -d: -f2` " " `ec2-metadata -h | cut -d: -f2` >> /etc/hosts

Source: https://stackoverflow.com/questions/27739618/automatically-add-an-entry-in-etc-hosts-file-in-newly-launched-amazon-ec2-insta
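The ec2-metadata helper is not present on every AMI, so a variant of the same user-data script that queries the instance metadata service directly may travel better. This is a hedged sketch using the standard IMDS paths; it assumes curl is available on the image:

```
#!/bin/bash
ip=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
host=$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
echo "$ip $host" >> /etc/hosts
```

User data runs as root on first boot, which is why the script can append to /etc/hosts without sudo.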