amazon-rds

java.net.ConnectException: Connection refused (Connection refused) [duplicate]

a 夏天 submitted on 2019-12-12 04:35:16
Question: This question already has an answer here: AWS RDS How to set up a MySQL Database (1 answer). Closed 2 years ago. I have an AWS Elastic Beanstalk instance running Tomcat with a Java RESTful service installed. I also have a MySQL database instance set up on AWS RDS, with an active security group that allows all inbound and outbound traffic. I am able to connect to the database with MySQL Workbench, which suggests the database is okay. So I think the issue is with my Java
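The question body is cut off before the Java code, but a `ConnectException: Connection refused` is raised before any SQL runs, so the first thing worth verifying is plain TCP reachability of the RDS endpoint from the Beanstalk host. A minimal stdlib sketch (the endpoint in the example comment is hypothetical):

```python
import socket

def tcp_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    This is the same step the JDBC driver performs before the MySQL
    handshake; if it fails from the Beanstalk instance, the problem is
    the security group / endpoint / port, not the Java code.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage from the app server:
# tcp_reachable("mydb.xxxxxx.us-east-1.rds.amazonaws.com", 3306)
```

An unroutable address simply returns `False` rather than raising, so this can be dropped into a health check.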

Ansible: Create new RDS DB from last snapshot of another DB

最后都变了- submitted on 2019-12-11 19:40:25
Question: The promote command does not seem to work on the version of Ansible I am using, so I am trying to create a new database as a replica of an existing one and, after promoting it to master, delete the source database. I was trying to do it like this: make a replica, promote the replica, delete the source database. But now I am thinking of this instead: create a new database from the source database's last snapshot (as master from the beginning), then delete the source database. How would that playbook go? My playbook: - hosts: localhost
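A sketch of the snapshot-based approach as a playbook. The module and parameter names (`community.aws.rds_instance`, `creation_source`, `db_snapshot_identifier`) are assumptions that vary across Ansible/collection versions, and all identifiers are placeholders:

```yaml
- hosts: localhost
  connection: local
  tasks:
    # 1. Create the new instance from a snapshot of the source DB.
    #    The snapshot identifier would have to be looked up first
    #    (e.g. with a facts/info module) and is assumed here.
    - name: Restore new DB from the last snapshot of the source
      community.aws.rds_instance:
        state: present
        db_instance_identifier: new-db
        creation_source: snapshot
        db_snapshot_identifier: "{{ latest_snapshot_id }}"
        db_instance_class: db.t3.medium
        engine: mysql

    # 2. Delete the source database once the new one is available.
    - name: Delete the source DB
      community.aws.rds_instance:
        state: absent
        db_instance_identifier: old-db
        skip_final_snapshot: true
```

This avoids the replica/promote dance entirely, at the cost of the new DB only being as fresh as the last snapshot.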

Updated: AWS Lambda is not able to connect to MySQL

倖福魔咒の submitted on 2019-12-11 16:19:07
Question: I was not able to connect to MySQL from AWS Lambda with Node.js. I had configured the security groups for the MySQL RDS instance and for Lambda. When I used console.log it showed the correct response from the database (the data from db: rk), but when I ran a test it did not show the correct response. Below are the logs and the index.js file. Can anybody please guide me? index.js (I have updated my code as below): var mysql = require('mysql'); var pool = mysql.createPool({ host :

storage type error with terraform and aurora postgresql

吃可爱长大的小学妹 submitted on 2019-12-11 15:59:17
Question: I am currently deploying an Aurora PostgreSQL instance in AWS with Terraform. Here is my declaration: resource "aws_db_instance" "postgreDatabase" { name = "validName" storage_type = "gp2" allocated_storage = "25" engine = "aurora-postgresql" engine_version = "10.5" instance_class = "db.r4.large" username = "validUsername" password = "validPassword" } Using this declaration throws the following error: aws_db_instance.postgreDatabase: Error creating DB Instance:
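The likely cause is that Aurora manages storage at the cluster level, so `storage_type` and `allocated_storage` are not valid for an `aurora-postgresql` instance. Terraform models Aurora as a cluster plus cluster instances. A hedged sketch of that split (identifiers and sizes are placeholders taken from the question):

```hcl
# Aurora manages storage itself, so no storage_type /
# allocated_storage arguments appear anywhere below.
resource "aws_rds_cluster" "postgres" {
  cluster_identifier = "valid-name"
  engine             = "aurora-postgresql"
  engine_version     = "10.5"
  master_username    = "validUsername"
  master_password    = "validPassword"
}

resource "aws_rds_cluster_instance" "postgres_instance" {
  identifier         = "valid-name-1"
  cluster_identifier = aws_rds_cluster.postgres.id
  engine             = "aurora-postgresql"
  instance_class     = "db.r4.large"
}
```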

PostgreSQL queries not killed on app server shutdown

若如初见. submitted on 2019-12-11 15:18:40
Question: I have a WildFly server hosting an app that issues some long-running SQL queries (queries or stored-procedure calls that take 10-20 minutes or more). Previously this WildFly pointed to SQL Server 2008; now it points to Postgres 11. Previously, when I killed or rebooted WildFly, I noticed that the long-running SP/query calls triggered from the Java code (running in WildFly) were killed too, pretty quickly if not instantaneously. Now, with Postgres, I notice that these long-running queries
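While the excerpt cuts off, Postgres does let you inspect and terminate backends left behind by a dead client, via the standard `pg_stat_activity` view and `pg_terminate_backend()`. A sketch (the 10-minute cutoff and the pid are arbitrary examples):

```sql
-- List backends whose query has been running longer than 10 minutes.
SELECT pid, state, query_start, query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '10 minutes';

-- Terminate one orphaned backend by its pid.
SELECT pg_terminate_backend(12345);
```

As a preventive measure, a server- or session-level `statement_timeout`, plus TCP keepalive settings so the server notices a vanished client sooner, are the usual knobs to look at.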

Connect Amazon RDS with Heroku?

岁酱吖の submitted on 2019-12-11 12:45:21
Question: I have created my Amazon RDS instance in the Oregon region and have to configure it with my Heroku app. I am able to access RDS from my local machine, but not from Heroku. I also don't have the liberty to create security groups there. I am getting an error like: ERROR 2003 (HY000): Can't connect to MySQL server on 'RDS hostname' (111). I don't understand it, because the host resolves from my local machine but not from Heroku. Answer 1: I found the solution myself: you just have to add inbound

Transfer files from s3 bucket to amazon RDS database

南笙酒味 submitted on 2019-12-11 12:25:53
Question: I am trying to load data from an S3 bucket into an Amazon RDS database. I know this is not a programming question, but I would really appreciate help. I have used the command below (note the flag is --master-username, not --master-user-name): aws rds restore-db-instance-from-s3 ^ --allocated-storage 250 ^ --db-instance-identifier myidentifier ^ --db-instance-class db.m4.large ^ --engine mysql ^ --master-username masterawsuser ^ --master-user-password masteruserpassword ^ --s3-bucket-name mybucket ^ --s3-ingestion-role-arn arn:aws:iam::account-number:role

AWS Lambda connect via PG.js to RDS Postgres database (connection made, no timeout, but no database?)

血红的双手。 submitted on 2019-12-11 11:55:46
Question: My Lambda function is trying to connect to an RDS PostgreSQL DB. Since I use https://serverless.com/ to deploy the function (it provisions the stack via CloudFormation), it puts the Lambda function in a separate VPC from the RDS DB. Not a big issue. If you read https://docs.aws.amazon.com/lambda/latest/dg/services-rds-tutorial.html you see you can set up the serverless.yml file (as below) with the subnet and security group IDs, and then give the Lambda function a role that has AWSLambdaVPCAccessExecutionRole (I gave
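The serverless.yml fragment the question refers to is not shown, but the VPC attachment it describes looks roughly like this under the Serverless Framework's `provider.vpc` key (all IDs are placeholders; the runtime is an assumption):

```yaml
provider:
  name: aws
  runtime: nodejs10.x
  # Attach the function to the same VPC subnets as the RDS instance,
  # so it can reach the database over its private address.
  vpc:
    securityGroupIds:
      - sg-0123456789abcdef0
    subnetIds:
      - subnet-0123456789abcdef0
      - subnet-0fedcba9876543210
```

With this in place, the function's role still needs AWSLambdaVPCAccessExecutionRole so Lambda can create the network interfaces in those subnets.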

Change which RDS database an AWS EB environment uses

≯℡__Kan透↙ submitted on 2019-12-11 11:53:55
Question: How can I change which RDS database my EB environment uses? I.e., where are the settings that specify this? I have cloned an environment and want to change the database it uses to an existing RDS database rather than the one that was created when the environment was cloned. Answer 1: Is your environment using EB? In that case you can set the connection details as environment variables, in case you are reading them from there. This might be useful: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html
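The environment-variable approach amounts to reading the connection settings at runtime instead of hard-coding the cloned database. A sketch of the application side in Python (the RDS_* names follow the convention Elastic Beanstalk uses for its own integrated database; for an externally created RDS instance you set them yourself in the environment's configuration):

```python
import os

def db_config_from_env(env=os.environ):
    """Build DB connection settings from environment variables.

    Pointing the environment at a different RDS database then means
    changing these variables, with no code change or redeploy of logic.
    """
    return {
        "host": env.get("RDS_HOSTNAME", "localhost"),
        "port": int(env.get("RDS_PORT", "3306")),
        "name": env.get("RDS_DB_NAME", ""),
        "user": env.get("RDS_USERNAME", ""),
        "password": env.get("RDS_PASSWORD", ""),
    }
```

Whatever driver the app uses would then be fed this dict instead of literals.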

Using Python to upload large csv files to Postgres RDS in AWS

断了今生、忘了曾经 submitted on 2019-12-11 10:32:53
Question: What's the easiest way to load a large CSV file into a Postgres RDS database in AWS using Python? To transfer data to a local Postgres instance, I have previously used a psycopg2 connection to run SQL statements like: COPY my_table FROM 'my_10gb_file.csv' DELIMITER ',' CSV HEADER; However, when I execute this against a remote AWS RDS database, it fails because the .csv file is on my local machine rather than on the database server: ERROR: must be superuser to COPY to or from a
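The usual workaround is client-side COPY: instead of `COPY ... FROM 'file'` (which the server would read, hence the superuser error), use `COPY ... FROM STDIN` and stream the local file over the connection with psycopg2's `copy_expert`. A hedged sketch (`conn` is assumed to be an open psycopg2 connection; table and path are placeholders):

```python
def copy_sql(table):
    # COPY ... FROM STDIN makes the *client* stream the file to the
    # server, so no superuser rights are needed on the RDS instance.
    return f"COPY {table} FROM STDIN WITH (FORMAT csv, HEADER true)"

def copy_csv_to_rds(conn, table, csv_path):
    """Stream a local CSV into a remote table.

    copy_expert() feeds the local file object to the server-side
    COPY command, equivalent to psql's \\copy.
    """
    with conn.cursor() as cur, open(csv_path) as f:
        cur.copy_expert(copy_sql(table), f)
    conn.commit()

# Hypothetical usage:
# copy_csv_to_rds(conn, "my_table", "my_10gb_file.csv")
```

For a 10 GB file this streams in chunks rather than loading it into memory, so it scales to the size in the question.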