s3

S3 to Redshift: Copy with Access Denied

Anonymous (unverified), submitted 2019-12-03 02:34:02

Question: We previously copied files from S3 to Redshift every day using the COPY command, from a bucket with no specific policy:

COPY schema.table_staging FROM 's3://our-bucket/X/YYYY/MM/DD/' CREDENTIALS 'aws_access_key_id=xxxxxx;aws_secret_access_key=xxxxxx' CSV GZIP DELIMITER AS '|' TIMEFORMAT 'YYYY-MM-DD HH24:MI:SS';

As we needed to improve the security of our S3 bucket, we added a policy authorizing connections either from our VPC (the one we use for our Redshift cluster) or from specific IP addresses:

{ "Version": "2012-10-17", "Id":
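The policy JSON is cut off above. Purely as an illustration of the kind of policy the question describes (deny access unless the request comes from a given VPC or from an allowed IP range), here is a boto3 sketch; the bucket name, VPC ID and CIDR are placeholders, not the asker's actual values.

# Hypothetical sketch only: deny everything unless the request comes from the
# VPC (via its S3 VPC endpoint) or from the allowed IP range. The two negated
# conditions are ANDed, so the Deny only fires when neither exception matches.
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideVpcAndIp",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::our-bucket",
                "arn:aws:s3:::our-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:SourceVpc": "vpc-0123456789abcdef0"},
                "NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"},
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="our-bucket", Policy=json.dumps(policy))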

Apache Spark reads for S3: can't pickle thread.lock objects

Anonymous (unverified), submitted 2019-12-03 02:29:01

Question: I want my Spark app to read some text from Amazon S3. I wrote the following simple script:

import boto3
s3_client = boto3.client('s3')
text_keys = ["key1.txt", "key2.txt"]
data = sc.parallelize(text_keys).flatMap(
    lambda key: s3_client.get_object(Bucket="my_bucket", Key=key)['Body'].read().decode('utf-8'))

When I call data.collect() I get the following error:

TypeError: can't pickle thread.lock objects

and I can't seem to find any help online. Has anyone managed to solve this?

Answer 1:
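The text of the answer is truncated above. As a sketch of one common workaround (an assumption on my part, not necessarily what the answer suggested): the error comes from Spark trying to pickle the boto3 client captured in the lambda, so build the client on the executors instead, for example with mapPartitions.

# Create the boto3 client inside each partition so the unpicklable client
# object never has to be shipped from the driver to the executors.
def read_keys(keys):
    import boto3  # imported on the executor
    s3 = boto3.client("s3")  # created per partition, never pickled
    for key in keys:
        body = s3.get_object(Bucket="my_bucket", Key=key)["Body"].read()
        yield body.decode("utf-8")

data = sc.parallelize(text_keys).mapPartitions(read_keys)
print(data.collect())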

AWS S3 Java: doesObjectExist results in 403: FORBIDDEN

Anonymous (unverified), submitted 2019-12-03 02:29:01

Question: I'm having trouble with my Java program using the AWS SDK to interact with an S3 bucket. This is the code I use to create an S3 client:

public S3StorageManager(S3Config config) throws StorageException {
    BasicAWSCredentials credentials = new BasicAWSCredentials(myAccessKey(), mySecretKey());
    AWSStaticCredentialsProvider provider = new AWSStaticCredentialsProvider(credentials);
    this.s3Client = AmazonS3ClientBuilder
            .standard()
            .withCredentials(provider)
            .withRegion(myRegion)
            .build();
}

When I try to download a file, before starting the download
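The question is cut off before the download code and any answer. One frequent cause of this symptom (an assumption, not taken from the thread) is that doesObjectExist issues a HEAD request, and without s3:ListBucket on the bucket S3 answers 403 instead of 404 even when the key simply does not exist. A small boto3 sketch (Python analogue of the asker's Java; bucket and key are placeholders) showing the equivalent HEAD call and how the two status codes differ:

# HeadObject is what doesObjectExist performs under the hood.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def object_exists(bucket, key):
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError as err:
        status = err.response["ResponseMetadata"]["HTTPStatusCode"]
        if status == 404:
            return False  # the key really is absent
        # A 403 here usually means the caller lacks s3:ListBucket / s3:GetObject,
        # not that the object is missing.
        raise

print(object_exists("my-bucket", "some/key.txt"))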

AWS CodePipeline adding artifacts to S3 in less useful format than running steps individually

Anonymous (unverified), submitted 2019-12-03 02:23:02

Question: I've set up a CodePipeline with the end goal of having a core service reside on S3 as a private Maven repo for other pipelines to rely on. When the core service is updated and pushed to AWS CodeCommit, the pipeline should run, test it, build a jar using a Maven Docker image, then push the resulting jar to S3 where it can be accessed by other applications as needed. Unfortunately, while the CodeBuild service works exactly how I want it to, uploading XYZCore.jar to /release on the bucket, the automated pipeline itself does not. Instead, it
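The excerpt is truncated here. A detail worth keeping in mind (general CodePipeline behavior, not stated in the original): pipeline output artifacts are zipped into the artifact store rather than copied verbatim, which is usually why the standalone build and the pipeline produce different results. One workaround, sketched below with a placeholder bucket and paths, is to upload the jar yourself from the build step instead of relying on the pipeline's artifact handling.

# Hypothetical sketch: push the built jar to the release prefix directly,
# e.g. from a post-build step, instead of relying on the zipped pipeline artifact.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="target/XYZCore.jar",   # jar produced by the Maven build
    Bucket="my-artifact-bucket",     # placeholder bucket
    Key="release/XYZCore.jar",       # stored as a plain object, not a zipped artifact
)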

How to use s3 with Apache spark 2.2 in the Spark shell

Anonymous (unverified), submitted 2019-12-03 02:18:01

Question: I'm trying to load data from an Amazon AWS S3 bucket while in the Spark shell. I have consulted the following resources:

Parsing files from Amazon S3 with Apache Spark
How to access s3a:// files from Apache Spark?
Hortonworks Spark 1.6 and S3
Cloudera Custom s3 endpoints

I have downloaded and unzipped Apache Spark 2.2.0. In conf/spark-defaults I have the following (note I replaced access-key and secret-key):

spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.access.key=access-key
spark.hadoop.fs.s3a.secret
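The configuration excerpt is cut off above. Assuming the usual s3a setup, here is a minimal PySpark sketch (Python rather than the Scala shell; bucket, path and credentials are placeholders) that sets the same fs.s3a properties at runtime and reads a file:

# Assumes the hadoop-aws / aws-java-sdk jars are on the classpath, e.g. the
# shell was started with --packages org.apache.hadoop:hadoop-aws:2.7.3.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3a-read").getOrCreate()
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hconf.set("fs.s3a.access.key", "access-key")
hconf.set("fs.s3a.secret.key", "secret-key")

df = spark.read.csv("s3a://my-bucket/some/path/data.csv", header=True)
df.show(5)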

Copying or syncing files from S3

送分小仙女 submitted 2019-12-03 02:12:02

Copy files ending in .kml from S3 to the local directory /data/videos/test:

aws s3 cp s3://***** /data/videos/test --recursive --exclude "*" --include '*.kml'

Sync files:

aws s3 sync s3://***** --exclude '*' --include '2017-09-22*'

Source: https://www.cnblogs.com/mianbaoshu/p/11770817.html
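For comparison, a boto3 sketch of the same filtered copy (the bucket name is a placeholder, since the original command masks it with *****): list the bucket and download only the .kml keys.

import os
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"
dest_dir = "/data/videos/test"

# Page through the bucket and download only keys ending in .kml.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if not key.endswith(".kml"):
            continue
        local_path = os.path.join(dest_dir, key)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        s3.download_file(bucket, key, local_path)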

AWS S3 Java SDK - Access Denied

Anonymous (unverified), submitted 2019-12-03 01:47:02

Question: I am trying to access a bucket and all its objects using the AWS SDK, but while running the code I am getting this error:

Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: X), S3 Extended Request ID: Y=

Kindly suggest where I am going wrong and why the access denied error occurs, although I have granted all of the following permissions on the bucket: s3:GetObject, s3:GetObjectVersion, s3:GetObjectAcl, s3:GetBucketAcl, s3:GetBucketCORS, s3
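The permission list is truncated above. A frequent cause of this error (an assumption, not from the thread) is that listing a bucket requires s3:ListBucket granted on the bucket ARN itself, which is a different resource from the object-level arn:aws:s3:::bucket/* permissions listed. A boto3 sketch (Python analogue of the asker's Java; names are placeholders) that separates the two calls so the failing permission is easier to spot:

# Listing needs s3:ListBucket on arn:aws:s3:::my-bucket, while GetObject needs
# permissions on arn:aws:s3:::my-bucket/* -- two different resources.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-bucket"

try:
    listing = s3.list_objects_v2(Bucket=bucket, MaxKeys=10)   # needs s3:ListBucket
    for obj in listing.get("Contents", []):
        s3.get_object(Bucket=bucket, Key=obj["Key"])          # needs s3:GetObject
        print("readable:", obj["Key"])
except ClientError as err:
    # The failing operation name narrows down which permission is missing.
    print(err.operation_name, err.response["Error"]["Code"])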

How to upload image to AWS S3 in PHP from memory?

Anonymous (unverified), submitted 2019-12-03 01:45:01

Question: I currently have an upload system working that uses AWS S3 to upload images. Here's the code:

//Upload image to S3
$s3 = Aws\S3\S3Client::factory(array('key' => /*mykey*/, 'secret' => /*myskey*/,));
try {
    $s3->putObject(array(
        'Bucket' => "bucketname",
        'Key' => $file_name,
        'Body' => fopen(/*filelocation*/, 'r+')
    ));
} catch(Exception $e) {
    //Error
}

The image can be a jpeg or a png, and I want to convert it to a png before uploading. To do this I use:

//This is simplified, please don't warn about transparency, etc.
$image =
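The PHP conversion code is cut off above. Purely as an illustration of the in-memory idea (in Python with Pillow and boto3 rather than the asker's PHP/GD stack; file names and bucket are placeholders): convert the image to PNG in a buffer and pass the bytes as the object body, so nothing is written to disk.

import io
import boto3
from PIL import Image

s3 = boto3.client("s3")

img = Image.open("/tmp/upload.jpg")   # placeholder source file
buf = io.BytesIO()
img.save(buf, format="PNG")           # converted in memory
buf.seek(0)

s3.put_object(
    Bucket="bucketname",
    Key="converted.png",
    Body=buf.getvalue(),
    ContentType="image/png",
)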

Copying data from S3 to Redshift - Access denied

Anonymous (unverified), submitted 2019-12-03 01:45:01

Question: We are having trouble copying files from S3 to Redshift. The S3 bucket in question allows access only from a VPC in which we have a Redshift cluster. We have no problems copying from public S3 buckets. We tried both the key-based and the IAM-role-based approach, but the result is the same: we keep getting 403 Access Denied from S3. Any idea what we are missing? Thanks.

EDIT: Queries we use:

1. (using IAM role): copy redshift_table from 's3://bucket/file.csv.gz' credentials 'aws_iam_role=arn:aws:iam::123456789:role/redshift-copyunload' delimiter '|'
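The query list is truncated above. One cause consistent with the setup described (an assumption on my part, not from the original thread): by default Redshift COPY traffic does not leave through the cluster's VPC, so a bucket policy restricted to that VPC rejects it; enabling Enhanced VPC Routing forces COPY/UNLOAD through the VPC, where an S3 VPC endpoint must also exist. A hedged boto3 sketch with a placeholder cluster identifier:

# Enable Enhanced VPC Routing so COPY/UNLOAD traffic goes through the cluster's
# VPC (and therefore matches a VPC-restricted bucket policy).
import boto3

redshift = boto3.client("redshift")
redshift.modify_cluster(
    ClusterIdentifier="my-redshift-cluster",
    EnhancedVpcRouting=True,
)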

How to use react-s3-uploader with reactjs?

Anonymous (unverified), submitted 2019-12-03 01:40:02

Question: I am new to ReactJS and want to upload an image to S3, but I don't know how it would work... and I don't know where the image path from AWS will come from (in which function). Here is my React code:

import ApiClient from './ApiClient'; // where does this come from?

function getSignedUrl(file, callback) {
  const client = new ApiClient();
  const params = { objectName: file.name, contentType: file.type };
  client.get('/my/signing/server', { params }) // what's that url?
    .then(data => {
      callback(data); // what should I get in
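The snippet is cut off before the callback body. As far as I recall, react-s3-uploader sends objectName and contentType to the signing endpoint and expects a JSON response containing a signedUrl it can PUT the file to. As a sketch of what such a signing server could look like (Python/Flask with boto3 rather than the asker's stack; the route, bucket name and expiry are assumptions), it generates a presigned PUT URL:

# Hypothetical signing server: returns {"signedUrl": ...} for the client to PUT to.
import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)
s3 = boto3.client("s3")

@app.route("/my/signing/server")
def sign_s3_upload():
    object_name = request.args["objectName"]
    content_type = request.args["contentType"]
    signed_url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "my-bucket", "Key": object_name, "ContentType": content_type},
        ExpiresIn=300,
    )
    return jsonify({"signedUrl": signed_url})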