aws

AWS Lambda Node.js Full-ICU

Submitted anonymously (unverified) on 2019-12-03 01:34:02

Question: I run a Node.js app locally with this command:

    $ node --icu-data-dir=node_modules/full-icu app.local.js

How do I specify the ICU data directory in the AWS Lambda environment? Thanks.

Answer 1: As pointed out in the Node.js docs, you can also use an environment variable, such as NODE_ICU_DATA=/some/directory. You can easily set up environment variables in your Lambda instance settings.
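The environment-variable route can be sanity-checked from inside the handler. A minimal sketch (the locale probe is my own illustration, not from the thread; note that Node.js 13+ bundles full ICU by default, which makes the workaround unnecessary on newer runtimes):

```javascript
// Check which ICU data Node picked up. NODE_ICU_DATA (set under the Lambda
// function's environment variables) plays the same role as --icu-data-dir.
console.log('ICU data dir:', process.env.NODE_ICU_DATA || '(built-in)');

// Probe: with only the default small-icu build, non-English locales fall
// back to English; with full ICU data, 'es' month names come back in Spanish.
const month = new Intl.DateTimeFormat('es', { month: 'long' })
  .format(new Date(2019, 0, 15));
console.log('January in es:', month);
```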

Configure AWS Cloud9 to use Anaconda Python Environment

Submitted anonymously (unverified) on 2019-12-03 01:33:01

Question: I want AWS Cloud9 to use the Python version and specific packages from my Anaconda Python environment. How can I achieve this? Where should I look in the settings or configuration?

My current setup: I have an AWS EC2 instance running Ubuntu Linux, and I have configured AWS Cloud9 to work with that instance. Anaconda is installed on the instance, and I have created a conda Python 3 environment to use, but Cloud9 always wants to use the system-installed Python 3.

Answer 1: I finally found something that forces AWS Cloud9 to use…

AWS Glue Crawler Not Creating Table

Submitted anonymously (unverified) on 2019-12-03 01:33:01

Question: I have a crawler I created in AWS Glue that does not create a table in the Data Catalog after it completes successfully. The crawler takes roughly 20 seconds to run, and the logs show it completed successfully. The CloudWatch log shows:

    Benchmark: Running Start Crawl for Crawler
    Benchmark: Classification Complete, writing results to DB
    Benchmark: Finished writing to Catalog
    Benchmark: Crawler has finished running and is in ready state

I am at a loss as to why the tables in the Data Catalog are not being created. The AWS docs are not of much…

AWS S3 Bucket Permissions - Access Denied

Submitted anonymously (unverified) on 2019-12-03 01:33:01

Question: I am trying to give myself permission to download existing files in an S3 bucket. I've modified the bucket policy as follows:

    {
        "Sid": "someSID",
        "Action": "s3:*",
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
        "Principal": {
            "AWS": [
                "arn:aws:iam::123123123123:user/myuid"
            ]
        }
    }

My understanding is that this addition to the policy should give my account "myuid" full rights to "bucketname", including all files that are already in that bucket. However, I'm still getting Access Denied errors when I try to…
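A frequent cause of this exact symptom, for what it's worth: object-level actions such as s3:GetObject match the .../* resource, but bucket-level actions such as s3:ListBucket must be granted on the bucket ARN itself. A sketch of a policy covering both (reusing the question's account, bucket, and user names; not necessarily the thread's accepted fix):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "someSIDList",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123123123123:user/myuid" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucketname"
    },
    {
      "Sid": "someSIDObjects",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123123123123:user/myuid" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*"
    }
  ]
}
```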

AWS CAPABILITY_AUTO_EXPAND in the CodePipeline web console with CloudFormation

Submitted anonymously (unverified) on 2019-12-03 01:33:01

Question: I am trying to complete a CodePipeline with the CloudFormation service, and this error is generated. It must be said that the CloudFormation service works well on its own. The complete error is:

    JobFailed
    Requires capabilities: [CAPABILITY_AUTO_EXPAND]
    (Service: AmazonCloudFormation; Status Code: 400; Error Code:
    InsufficientCapabilitiesException; Request ID: 1a977102-f829-11e8-b5c6-f7cc8454c4d0)

The solution I have is to add the --capabilities CAPABILITY_AUTO_EXPAND parameter, but that only applies to the CLI, and my case is through the web console.

Answer 1: …
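In the console case, the capability belongs to the pipeline's CloudFormation deploy action rather than to a CLI flag; editing that action exposes a Capabilities field. A sketch of what the action's configuration ends up containing (stack, artifact, and role names are placeholders):

```json
{
  "ActionMode": "CREATE_UPDATE",
  "StackName": "my-stack",
  "TemplatePath": "BuildOutput::packaged-template.yaml",
  "Capabilities": "CAPABILITY_IAM,CAPABILITY_AUTO_EXPAND",
  "RoleArn": "arn:aws:iam::123456789012:role/cfn-deploy-role"
}
```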

Upsert from AWS Glue to Amazon Redshift

Submitted anonymously (unverified) on 2019-12-03 01:33:01

Question: I understand that there is no direct UPSERT query one can perform from Glue to Redshift. Is it possible to implement the staging-table concept within the Glue script itself? My expectation is to create the staging table, merge it with the destination table, and finally delete it. Can this be achieved within the Glue script?

Answer 1: Yes, it is totally achievable. All you would need is to import the pg8000 module into your Glue job. pg8000 is the Python library used to make a connection to Amazon Redshift and execute SQL…
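The staging flow described above can be sketched as plain SQL built in Python (table and key names are placeholders of my own; in the real job, pg8000 opens the Redshift connection and runs each statement in order):

```python
# Sketch of the staging-table upsert: stage the new batch, delete the
# matching rows from the target, insert the staged rows, drop the stage.
def upsert_statements(target, staging, key):
    """Build the delete-then-insert merge used to emulate UPSERT in Redshift."""
    return [
        f"CREATE TEMP TABLE {staging} (LIKE {target});",
        # ... COPY or INSERT the new batch into the staging table here ...
        f"DELETE FROM {target} USING {staging} "
        f"WHERE {target}.{key} = {staging}.{key};",
        f"INSERT INTO {target} SELECT * FROM {staging};",
        f"DROP TABLE {staging};",
    ]

stmts = upsert_statements("events", "events_stage", "event_id")
```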

AWS Glue to Redshift: duplicate data?

Submitted anonymously (unverified) on 2019-12-03 01:27:01

Question: Here are some bullet points in terms of how I have things set up:

- I have CSV files uploaded to S3 and a Glue crawler set up to create the table and schema.
- I have a Glue job set up that writes the data from the Glue table to our Amazon Redshift database using a JDBC connection. The job is also in charge of mapping the columns and creating the Redshift table.

By re-running the job, I am getting duplicate rows in Redshift (as expected). However, is there a way to replace or delete rows before inserting the new data? The BOOKMARK functionality is enabled…
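One documented Glue feature that addresses this (an assumption about where the truncated thread goes, but a real option): the Redshift connection options accept "preactions" SQL that runs before the write, so the job can clear out the rows it is about to reload. A sketch with placeholder database, table, and predicate:

```python
# Glue's Redshift writer runs "preactions" SQL before inserting, which is
# one way to replace rows instead of duplicating them on re-runs.
connection_options = {
    "dbtable": "public.events",
    "database": "dev",
    # Delete the rows about to be re-inserted (or TRUNCATE for a full reload):
    "preactions": "DELETE FROM public.events WHERE load_date = '2019-12-01';",
}
# In the job itself (sketch, not runnable outside Glue):
# glueContext.write_dynamic_frame.from_jdbc_conf(
#     frame=dyf, catalog_connection="redshift-conn",
#     connection_options=connection_options, redshift_tmp_dir=tmp_dir)
```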

Sending SMS with Amazon AWS services PHP

Submitted anonymously (unverified) on 2019-12-03 01:25:01

Question: I'm having trouble digging through the documentation for Amazon's AWS PHP SDK. Basically, I just need to send a standard text message to a phone number. I know it is possible, because Amazon lets you send messages directly through the console via a screen for it. It says something about using the "publish" method, but looking through that documentation really didn't provide any answers (#Publish documentation link). Any help or guidance is appreciated. I am currently looking for a solution that uses V2 of the SDK. Thanks in advance.

Answer 1: Nowhere…
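Whatever the SDK version, the underlying SNS Publish API call for a direct SMS takes a phone number in E.164 format plus the message body; in the PHP SDK these become the parameter array passed to the publish method. A sketch of the parameters (number and text are placeholders; note that direct-to-phone-number publishing was added to SNS in mid-2016, so SDK releases predating it may not support this):

```json
{
  "PhoneNumber": "+15555550123",
  "Message": "Your verification code is 123456"
}
```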

HTTP GET to Amazon AWS from jQuery or XMLHttpRequest fails with "Origin is not allowed by Access-Control-Allow-Origin"

Submitted anonymously (unverified) on 2019-12-03 01:25:01

Question: Having some bad luck getting an Amazon AWS security token from jQuery or XMLHttpRequest. When I send an HTTP GET from jQuery or XMLHttpRequest, I get "Origin http://MY_IP is not allowed by Access-Control-Allow-Origin.", but if I paste the same URL into my browser, it all works fine. My code:

    var url_ = "https://sts.amazonaws.com/?Action=GetSessionToken" +
        "&DurationSeconds=3600" +
        "&AWSAccessKeyId=" + AccessKeyId +
        "&Version=2011-06-15" +
        "&Timestamp=" + encode(timestamp) +
        "&Signature=" + encode(hash) +
        "&SignatureVersion=2&SignatureMethod…

Sending SES email from AWS Lambda - Node.js error "Cannot find module 'nodemailer'"

Submitted anonymously (unverified) on 2019-12-03 01:25:01

Question: I have this error message:

    "errorMessage": "Cannot find module 'nodemailer'"

I googled, and it says to install nodemailer. Can someone tell me where exactly I install this module? I am new to Lambda. My Lambda function is below:

    var aws = require("aws-sdk");
    var nodemailer = require("nodemailer");
    var ses = new aws.SES();
    var s3 = new aws.S3();

    exports.handler = (event, context, callback) => {
        callback(null, 'Hello from Lambda');
    };

Answer 1: You'll have to initialize your project locally:

    npm init

Install nodemailer:

    npm i nodemailer

You should…
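The missing piece behind "Cannot find module" is that Lambda only sees the files inside the uploaded archive, so node_modules has to travel with the handler. After the npm steps above, the project should look like this (layout sketch; the handler file name index.js is an assumption matching the default handler setting):

```
my-function/
├── index.js          (the handler code)
└── node_modules/
    └── nodemailer/   (created by `npm i nodemailer`)
```

Zip the contents of my-function (not the folder itself, so index.js sits at the top level of the archive) and upload that zip as the function code.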