amazon-s3

Accessing S3 from Lambda within a VPC in aws-go-sdk

我的未来我决定 submitted on 2020-02-06 04:34:14
Question: I've just started using aws-sdk-go and noticed that the S3 requests use HTTP/HTTPS rather than an S3-specific protocol. How can I read an object in S3 from my Lambda within a VPC using aws-sdk-go? I don't want to use a NAT Gateway. I can do this in Node.js, but is there any way to do the same with aws-sdk-go? Thanks!

Answer 1: To access S3 from within a VPC without an internet gateway, you need to use an S3 VPC Endpoint.

Answer 2: This code snippet shows how to use aws-sdk-go to list S3 buckets for region us-east-1
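
The aws-sdk-go snippet referenced in Answer 2 is not reproduced in this excerpt, and the endpoint itself is infrastructure configuration rather than Go code. As a rough, non-authoritative illustration of the Answer 1 approach, here is a boto3 sketch that creates an S3 Gateway endpoint for a VPC; the region, VPC ID, and route table ID are placeholders.

    import boto3

    # Placeholders: replace with your own region, VPC ID, and route table ID.
    REGION = "us-east-1"
    VPC_ID = "vpc-0123456789abcdef0"
    ROUTE_TABLE_ID = "rtb-0123456789abcdef0"

    ec2 = boto3.client("ec2", region_name=REGION)

    # Create a Gateway endpoint so traffic from the VPC reaches S3 over the
    # AWS network instead of going through a NAT gateway or internet gateway.
    response = ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId=VPC_ID,
        ServiceName=f"com.amazonaws.{REGION}.s3",
        RouteTableIds=[ROUTE_TABLE_ID],
    )
    print(response["VpcEndpoint"]["VpcEndpointId"])

    # Once the endpoint exists, the Lambda's normal SDK calls (listing buckets,
    # reading objects) work unchanged, in Go or any other SDK.
    s3 = boto3.client("s3", region_name=REGION)
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])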

Connect AWS S3 bucket and Route 53 to GoDaddy domain

有些话、适合烂在心里 submitted on 2020-02-06 02:39:47
Question: I bought the domain "howtoripen.com" with GoDaddy. When I enter "howtoripen.com" as the URL, it loads the GoDaddy landing page:

https://i.imagesup.co/images2/bd5fc5bb955d4442c9d17bff3b05a79834a88490.png

I have created a bucket in S3 and configured things in Route 53:

https://i.imagesup.co/images2/5d0e25f69be70a468be4e0085404c81d102e0715.png

But I can't see the HTML, CSS and JS files that I uploaded to the bucket. When I click on the endpoint URL everything seems fine: http://howtoripen.com.s3-website.eu
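
No answer is shown in this excerpt. As a hedged sketch of the S3 side of the setup only (the bucket name follows the domain, and index.html / error.html are assumed document names), enabling static website hosting on the bucket with boto3 looks roughly like this; the GoDaddy landing page itself usually means the domain's nameservers have not yet been delegated to the Route 53 hosted zone, which no bucket setting can fix.

    import boto3

    # Assumption: the bucket is named after the apex domain, which Route 53
    # alias records to an S3 website endpoint require.
    BUCKET = "howtoripen.com"

    s3 = boto3.client("s3")

    # Enable static website hosting; the document names are assumptions.
    s3.put_bucket_website(
        Bucket=BUCKET,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )

    # Confirm the website configuration took effect.
    print(s3.get_bucket_website(Bucket=BUCKET))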

Reading S3 data from Google's Dataproc

只愿长相守 submitted on 2020-02-05 04:07:05
Question: I'm running a PySpark application through Google's Dataproc on a cluster I created. In one stage, the application needs to access a directory in an Amazon S3 bucket. At that stage, I get the error:

    AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).

I logged onto the headnode of the cluster and set the /etc/boto.cfg with my AWS_ACCESS
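
The excerpt stops before any answer. One common way to supply the credentials from inside the PySpark job itself, instead of /etc/boto.cfg, is to set the Hadoop properties named in the error on the Spark context. A minimal sketch, assuming the cluster's s3:// or s3a:// connector honors these properties, with placeholder credentials and path:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("read-from-s3").getOrCreate()

    # Placeholder credentials; in practice load them from a secure source
    # rather than hard-coding them in the job.
    ACCESS_KEY = "YOUR_AWS_ACCESS_KEY_ID"
    SECRET_KEY = "YOUR_AWS_SECRET_ACCESS_KEY"

    hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
    # The properties named in the error message (s3:// scheme) ...
    hadoop_conf.set("fs.s3.awsAccessKeyId", ACCESS_KEY)
    hadoop_conf.set("fs.s3.awsSecretAccessKey", SECRET_KEY)
    # ... and their s3a:// equivalents, in case the path uses that scheme.
    hadoop_conf.set("fs.s3a.access.key", ACCESS_KEY)
    hadoop_conf.set("fs.s3a.secret.key", SECRET_KEY)

    df = spark.read.text("s3a://some-bucket/some-prefix/")  # hypothetical path
    df.show(5)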

Scala & Databricks: Getting a list of files

久未见 submitted on 2020-02-04 22:58:26
Question: I am trying to make a list of files in an S3 bucket on Databricks in Scala, and then split by regex. I am very new to Scala. The Python equivalent would be:

    all_files = map(lambda x: x.path, dbutils.fs.ls(folder))
    filtered_files = filter(lambda name: True if pattern.match(name) else False, all_files)

but I want to do this in Scala. From https://alvinalexander.com/scala/how-to-list-files-in-directory-filter-names-scala:

    import java.io.File
    def getListOfFiles(dir: String):List[File] = { val d

Can Hive table automatically update when underlying directory is changed

左心房为你撑大大i submitted on 2020-02-04 05:14:09
Question: If I build a Hive table on top of some S3 (or HDFS) directory like so:

    CREATE EXTERNAL TABLE newtable (name STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE
    LOCATION 's3a://location/subdir/';

when I add files to that S3 location, the Hive table doesn't automatically update. The new data is only included if I create a new Hive table on that location. Is there a way to build a Hive table (maybe using partitions) so that whenever new files are added to the underlying
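
The question is truncated before any answer. A common pattern is to declare the external table with partitions and register new partition directories after files land in S3, for example with MSCK REPAIR TABLE. The sketch below issues those statements through pyhive purely as an assumed way of reaching HiveServer2; the host, port, and dt partition column are placeholders.

    from pyhive import hive  # assumption: HiveServer2 is reachable from this client

    conn = hive.connect(host="hive-server.example.com", port=10000)  # placeholder host
    cursor = conn.cursor()

    # Partitioned variant of the table from the question; "dt" is an assumed
    # partition column name.
    cursor.execute("""
        CREATE EXTERNAL TABLE IF NOT EXISTS newtable (name STRING)
        PARTITIONED BY (dt STRING)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
        STORED AS TEXTFILE
        LOCATION 's3a://location/subdir/'
    """)

    # After new files are written under s3a://location/subdir/dt=.../,
    # register the new partitions so subsequent queries can see the data.
    cursor.execute("MSCK REPAIR TABLE newtable")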

Boto3: upload file from base64 to S3

只谈情不闲聊 submitted on 2020-02-04 04:09:05
Question: How can I directly upload a base64-encoded file to S3 with boto3?

    object = s3.Object(BUCKET_NAME, email + "/" + save_name)
    object.put(Body=base64.b64decode(file))

I tried to upload the base64-encoded file like this, but the file ends up broken. Directly uploading the string without the base64 decoding also doesn't work. Is there anything similar to set_contents_from_string() from boto2?

Answer 1: I just fixed the problem and found out that the way of uploading was correct, but the base64 string was
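
Answer 1 is cut off above, but a frequent cause of "broken" uploads from base64 payloads is a data-URI prefix (e.g. "data:image/png;base64,") left on the string before decoding. A minimal sketch assuming the payload may arrive as a data URI, with the bucket and key names from the question kept as placeholders:

    import base64
    import boto3

    s3 = boto3.resource("s3")

    def upload_base64(bucket_name, key, b64_payload):
        # If the payload is a data URI, strip everything up to and including
        # the comma before decoding; otherwise the decoded bytes are corrupted.
        if b64_payload.startswith("data:") and "," in b64_payload:
            b64_payload = b64_payload.split(",", 1)[1]
        s3.Object(bucket_name, key).put(Body=base64.b64decode(b64_payload))

    # Hypothetical usage mirroring the question's key layout:
    # upload_base64(BUCKET_NAME, email + "/" + save_name, file)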

Is it possible to predict in SageMaker without using S3

北城余情 submitted on 2020-02-04 01:41:48
Question: I have a .pkl file which I would like to put into production. I would like to run a daily query against my SQL server and make predictions on about 1000 rows. The documentation implies I have to load the daily data into S3. Is there a way around this? The data should fit in memory without a problem. The answer to "is there some kind of persistent local storage in aws sagemaker model training?" says that "The notebook instance is coming with a local EBS (5GB) that you can use to copy some data into it
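
The quoted answer is truncated above. One detail worth spelling out: S3 is required for the model artifact when the SageMaker model is created, but real-time inference against a deployed endpoint accepts the rows directly in the request body, so the daily ~1000-row batch never has to be staged in S3. A hedged boto3 sketch with a placeholder endpoint name:

    import io

    import boto3
    import pandas as pd  # assumption: the daily SQL query result is a DataFrame

    runtime = boto3.client("sagemaker-runtime")
    ENDPOINT_NAME = "daily-pkl-endpoint"  # placeholder for the deployed endpoint

    def predict(df: pd.DataFrame) -> str:
        # Serialize the rows as CSV and send them straight to the endpoint;
        # no intermediate S3 upload is involved for inference.
        buffer = io.StringIO()
        df.to_csv(buffer, header=False, index=False)
        response = runtime.invoke_endpoint(
            EndpointName=ENDPOINT_NAME,
            ContentType="text/csv",
            Body=buffer.getvalue(),
        )
        return response["Body"].read().decode("utf-8")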

Rails CarrierWave S3: get URL with Content-Disposition header

戏子无情 submitted on 2020-02-04 01:32:31
Question: We are using CarrierWave + AWS S3 to upload files, and we need to provide a download function. For solution 1, we use:

    = link_to "Download", file.doc.url, download: file.original_name

This does not work under IE8: clicking the link opens the file (an image). According to this, I should add a Content-Disposition header. I then checked the AWS S3 documentation and found I need to add response-content-disposition to file.doc.url. Is there any way I can do this in CarrierWave, or should I use another approach? Thanks for
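
The question is about CarrierWave, but the S3 feature it points at, the response-content-disposition override on a signed GET URL, is language-neutral. As an illustration of that mechanism only (not the CarrierWave API), here is how a presigned URL with the override looks in boto3, with placeholder bucket, key, and filename:

    import boto3

    s3 = boto3.client("s3")

    # Placeholder values; in the Rails app these would come from the uploader.
    url = s3.generate_presigned_url(
        "get_object",
        Params={
            "Bucket": "my-bucket",
            "Key": "uploads/doc.png",
            # Tells the browser to download the file instead of displaying it.
            "ResponseContentDisposition": 'attachment; filename="original_name.png"',
        },
        ExpiresIn=3600,
    )
    print(url)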

Return a default object, without error, when requested object is not found, from S3

偶尔善良 submitted on 2020-02-03 11:06:31
Question: Is it possible to configure an S3 bucket to return a default object when the requested object is not found/available? I don't want to return any kind of 403 or 404 error.

Answer 1: [EDITED TO REFLECT COMMENTS BELOW] In standard mode, Amazon S3 cannot be configured to return a default object when the requested object is not available. The default behaviour is to return an HTTP 403 when the object does not exist:

    # existing object
    $ curl -I http://s3-eu-west-1.amazonaws.com/public-sst/wifi.jpg
    HTTP/1.1
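
The curl transcript is cut off above. Since S3 itself will not substitute a default object for a missing key, one client-side workaround is to catch the error and fall back to a default key. A sketch with hypothetical bucket and key names:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def get_object_or_default(bucket, key, default_key):
        # S3 answers 403/AccessDenied (without s3:ListBucket permission) or
        # 404/NoSuchKey for missing objects; fall back to the default either way.
        try:
            return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except ClientError as err:
            if err.response["Error"]["Code"] in ("403", "404", "AccessDenied", "NoSuchKey"):
                return s3.get_object(Bucket=bucket, Key=default_key)["Body"].read()
            raise

    # Hypothetical usage:
    # data = get_object_or_default("public-sst", "missing.jpg", "default.jpg")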