amazon-s3

How to read an image from an AWS S3 bucket into a SageMaker Jupyter instance

Posted by 蓝咒 on 2021-01-29 15:33:29
Question: I am very new to AWS and the cloud environment. I am a machine learning engineer, and I am planning to build a custom CNN in the AWS environment to predict whether a given image contains an iPhone or not. What I have done so far: Step 1: I created an S3 bucket for the iPhone classifier with the folder structure below:

Iphone_Classifier
  Train
    Yes_iphone_images (1000 images)
    No_iphone_images (1000 images)
  Dev
    Yes_iphone_images (100 images)
    No_iphone_images (100 images)
  Test
    30 random images
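
A minimal boto3 sketch of reading one of these training images into memory from a SageMaker notebook; the bucket name and object key below are hypothetical placeholders, and the notebook's execution role is assumed to allow s3:GetObject on the bucket.

```python
# Sketch: read one training image from S3 into memory inside a SageMaker
# notebook. Bucket and key names are hypothetical placeholders.
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")  # uses the notebook's execution role credentials

obj = s3.get_object(
    Bucket="iphone-classifier",                     # hypothetical bucket name
    Key="Train/Yes_iphone_images/img_0001.jpg",     # hypothetical object key
)
image = Image.open(io.BytesIO(obj["Body"].read()))  # decode the bytes into a PIL image
print(image.size)
```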

How to connect to two AWS S3 buckets from a Spring Boot application

Posted by 筅森魡賤 on 2021-01-29 14:46:46
Question: I want to connect to two S3 buckets from a Spring Boot application. I created two different beans with different credentials and marked one as @Primary. The application runs properly, but when I try to access the second bucket (the one that is not @Primary) I get a 403 Access Denied exception: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Below is my code; any help will be highly appreciated. Thanks in advance. import com.amazonaws
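
For illustration only, here is the analogous pattern sketched in Python/boto3 rather than Spring: one independent client per bucket, each built from its own credentials. All key IDs and bucket names are placeholders; the point is that each set of credentials is only used against the bucket it is authorized for, since using the primary credentials against the second bucket is one common cause of a 403 AccessDenied.

```python
# Sketch of the two-client pattern in Python/boto3 (placeholders throughout).
import boto3

primary_s3 = boto3.client(
    "s3",
    aws_access_key_id="PRIMARY_KEY_ID",
    aws_secret_access_key="PRIMARY_SECRET",
)
secondary_s3 = boto3.client(
    "s3",
    aws_access_key_id="SECONDARY_KEY_ID",
    aws_secret_access_key="SECONDARY_SECRET",
)

# Each client is used only with the bucket its credentials can access.
primary_s3.list_objects_v2(Bucket="first-bucket")
secondary_s3.list_objects_v2(Bucket="second-bucket")
```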

Cannot copy data from S3 to a Redshift cluster in a private subnet

Posted by 烈酒焚心 on 2021-01-29 14:31:38
Question: I have set up a Redshift cluster in a private subnet. I can successfully connect to my Redshift cluster and run basic SQL queries through DBeaver. I also need to upload some files from S3 to Redshift, so I set up an S3 gateway endpoint in my private subnet and updated the route table for the private subnet to add the required route, as follows:

Destination: 192.168.0.0/16, Target: local, Status: active, Propagated: No
Destination: pl-7ba54012 (com.amazonaws.us-east-2.s3, 52.219.80.0/20, 3.5.128.0/21, 52.219.96.0/20, 52.92
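
For reference, a sketch of what issuing the COPY looks like once the gateway endpoint and route are working, shown here from Python with psycopg2; the cluster endpoint, credentials, table, bucket path, and IAM role ARN are all placeholders.

```python
# Sketch: run a Redshift COPY from S3. All connection details, the table,
# the S3 path, and the IAM role ARN are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.xxxx.us-east-2.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="dev",
    user="awsuser",
    password="...",
)
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY my_schema.my_table
        FROM 's3://my-bucket/path/data.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        CSV IGNOREHEADER 1
        REGION 'us-east-2';
    """)
```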

Lambda function to write a CSV and upload it to S3

Posted by こ雲淡風輕ζ on 2021-01-29 13:44:08
Question: I have a Python script that gets the details of unused security groups. I want it to write the results to a CSV file and upload that file to an S3 bucket. When I test it on my local machine it writes the CSV locally, but when I run it as a Lambda function it needs somewhere to save the CSV, so I am using S3. import boto3 import csv ses = boto3.client('ses') def lambda_handler(event, context): with open('https://unused******-1.amazonaws.com/Unused.csv', 'w') as csvfile: writer = csv.writer
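
The open() call in the question points at an HTTPS URL, which the csv module cannot write to. A common pattern, sketched below with placeholder data and a placeholder bucket name, is to write the CSV to Lambda's writable /tmp directory and then upload it with boto3.

```python
# Sketch: write the CSV to /tmp inside Lambda, then upload it to S3.
# The rows and the bucket name are placeholders.
import csv

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    rows = [["GroupId", "GroupName"], ["sg-0123456789abcdef0", "unused-sg"]]  # placeholder data
    local_path = "/tmp/Unused.csv"            # /tmp is the only writable path in Lambda
    with open(local_path, "w", newline="") as csvfile:
        csv.writer(csvfile).writerows(rows)
    s3.upload_file(local_path, "unused-sg-report-bucket", "Unused.csv")  # placeholder bucket
    return {"status": "uploaded"}
```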

List S3 buckets with their sizes in CSV format

Posted by 谁说胖子不能爱 on 2021-01-29 13:43:09
Question: I am trying to list the S3 buckets with their sizes in a CSV, looking for output like this:

Bucket Name    Size
Bucket A       2 GB
Bucket B       10 GB

I can list the buckets with the code below. def main(): with open('size.csv', 'w') as csvfile: writer = csv.writer(csvfile) writer.writerow([ 'Bucket Name', 'Bucket Size' ]) with open('accountroles.json') as ec2_file: ec2_data = json.load(ec2_file) region_list = ['us-west-1'] for region in region_list: for index in range(len(ec2_data['Items'])): Account
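
One way to get the sizes, sketched below with a placeholder output path, is to sum object sizes per bucket with a list_objects_v2 paginator and write one row per bucket. For very large buckets the CloudWatch BucketSizeBytes metric is cheaper, but this shows the basic idea.

```python
# Sketch: sum object sizes per bucket and write one CSV row per bucket.
import csv

import boto3

s3 = boto3.client("s3")

def bucket_size_bytes(bucket_name):
    total = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket_name):
        for obj in page.get("Contents", []):
            total += obj["Size"]
    return total

with open("size.csv", "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(["Bucket Name", "Bucket Size (GB)"])
    for bucket in s3.list_buckets()["Buckets"]:
        size_gb = bucket_size_bytes(bucket["Name"]) / (1024 ** 3)
        writer.writerow([bucket["Name"], round(size_gb, 2)])
```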

Accessing AWS S3 from within Google Cloud (GCP)

Posted by 泪湿孤枕 on 2021-01-29 12:59:51
Question: We do most of our cloud processing on AWS (and still do). However, we now also have some credits on GCP and would like to explore interoperability between the cloud providers. In particular, I was wondering whether it is possible to use AWS S3 from within GCP. I am not talking about migrating the data, but whether there is some API that will allow AWS S3 to work seamlessly from within GCP. We have a lot of data and databases that are hosted on AWS S3 and would prefer to
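
The AWS SDKs are not tied to AWS infrastructure, so code running on a GCP VM or Cloud Function can talk to S3 directly as long as it has AWS credentials and network access. A minimal boto3 sketch follows; the credentials, region, bucket, and key are placeholders and would normally come from environment variables or a secret manager rather than being hard-coded.

```python
# Sketch: use boto3 from inside GCP to read an object stored in AWS S3.
# Credentials, region, bucket, and key are placeholders.
import boto3

session = boto3.session.Session(
    aws_access_key_id="AKIA...",          # placeholder AWS access key
    aws_secret_access_key="...",          # placeholder AWS secret key
    region_name="us-east-1",
)
s3 = session.client("s3")

body = s3.get_object(Bucket="my-aws-bucket", Key="data/file.parquet")["Body"].read()
print(len(body))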

Is there any direct way to copy one S3 directory to another in Java or Scala?

Posted by 时间秒杀一切 on 2021-01-29 11:32:33
Question: I want to archive all the files and subdirectories under an S3 directory to some other S3 location using Java. Is there any direct way to copy one S3 directory to another in Java or Scala? Answer 1: There is no API call that operates on whole directories in Amazon S3. In fact, directories/folders do not exist in Amazon S3; rather, each object stores its full path in its key name (Key). If you wish to copy multiple objects that share the same prefix in their Key, your code will need to loop through the
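
The question asks for Java or Scala, but the loop-over-a-prefix approach the answer describes is easiest to illustrate compactly in Python/boto3; the AWS SDK for Java (listObjectsV2 / copyObject) follows the same pattern. Bucket names and prefixes below are hypothetical.

```python
# Illustration of the approach from the answer: list every object under the
# source prefix and copy it key-by-key to the destination prefix.
import boto3

s3 = boto3.client("s3")

def copy_prefix(src_bucket, src_prefix, dst_bucket, dst_prefix):
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=src_bucket, Prefix=src_prefix):
        for obj in page.get("Contents", []):
            new_key = dst_prefix + obj["Key"][len(src_prefix):]
            s3.copy_object(
                Bucket=dst_bucket,
                Key=new_key,
                CopySource={"Bucket": src_bucket, "Key": obj["Key"]},
            )

copy_prefix("my-bucket", "archive/2021/", "my-archive-bucket", "2021/")  # hypothetical names
```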

Is it possible to compress files that are already in AWS S3?

Posted by 泄露秘密 on 2021-01-29 11:06:52
Question: I have an S3 bucket containing a wide variety of files. Some of the files are huge, such as 8 GB or 11 GB; the biggest is 14.6 GB. I was searching for a way to compress them. Obviously, I could download them locally, compress them, and put them back in the bucket, but that is not a good approach because downloading the files first is a time-consuming process. Is there any way, within AWS cloud services themselves, to compress the files directly and put them back in S3? One of the
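
S3 itself has no in-place compression; one hedged approach is to run the compression on compute inside AWS (for example an EC2 instance or container in the same region) so the data never leaves the cloud. The sketch below streams an object through gzip into a temporary file and uploads the compressed copy; bucket and key names are placeholders.

```python
# Sketch: stream an existing S3 object through gzip on compute running
# inside AWS, then upload the compressed copy. Names are placeholders.
import gzip
import shutil
import tempfile

import boto3

s3 = boto3.client("s3")

def gzip_s3_object(bucket, key):
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]   # streaming body
    with tempfile.NamedTemporaryFile() as tmp:
        with gzip.open(tmp.name, "wb") as gz:
            shutil.copyfileobj(body, gz)                    # stream-compress to the temp file
        s3.upload_file(tmp.name, bucket, key + ".gz")       # upload the compressed copy

gzip_s3_object("my-bucket", "big-file.dat")  # hypothetical names
```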

Can The AWS CLI Copy From S3 To EC2?

Posted by 雨燕双飞 on 2021-01-29 10:58:37
Question: I'm familiar with running the AWS CLI command to copy from a folder to S3, or from one S3 bucket to another:

aws s3 cp ./someFile.txt s3://bucket/someFile.txt
aws s3 cp s3://bucketSource/someFile.txt s3://bucketDestination/someFile.txt

But is it possible to copy files from S3 to an EC2 instance when you're not on the EC2 instance? Something like:

aws s3 cp s3://bucket/folder/ ec2-user@1.2.3.4:8080/some/folder/

I'm trying to run this from Jenkins, which is why I can't simply run the
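
The CLI cannot push directly to an arbitrary path on an instance; one hedged approach is to have the instance pull the files itself, for example by sending the aws s3 cp command to it from Jenkins through SSM Run Command. The sketch below assumes the instance runs the SSM agent and has an instance profile allowing it to read the bucket; the instance ID, bucket, and paths are placeholders.

```python
# Sketch: trigger `aws s3 cp` on the instance from outside it via SSM Run
# Command. Instance ID, bucket, and destination path are placeholders.
import boto3

ssm = boto3.client("ssm")

response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunShellScript",
    Parameters={
        "commands": ["aws s3 cp s3://bucket/folder/ /some/folder/ --recursive"]
    },
)
print(response["Command"]["CommandId"])  # can be used to poll for completion
```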

Spring Batch - Read a byte stream, process, write to two different CSV files, convert them to input streams, store them in ECS, and then write to a database

Posted by 心不动则不痛 on 2021-01-29 10:57:37
Question: I have a requirement where we receive a CSV file in the form of a byte stream through an ECS S3 pre-signed URL. I have to validate the data, write the validation-successful and validation-failed records to two different CSV files, and store them in an ECS S3 bucket by converting them to InputStreams. I also have to write the successful records to the database, along with the pre-signed URLs of the inbound, success, and failure files. I'm new to Spring Batch. How should I approach this requirement? If I choose a FlatFileItemReader
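
As a language-agnostic illustration of the data flow only (not Spring Batch itself, where this would map onto a reader, processor, and classifying writers), here is a Python sketch: read the inbound CSV from bytes, validate each record, build success and failure CSVs as in-memory streams, and upload both. The ECS endpoint, bucket, keys, and the validation rule are placeholders; ECS S3 is addressed through the same S3 API by setting endpoint_url.

```python
# Sketch of the data flow: validate rows, split into success/failure CSVs
# held in memory, and upload both. All names and the rule are placeholders.
import csv
import io

import boto3

s3 = boto3.client("s3", endpoint_url="https://ecs.example.com")  # placeholder ECS S3 endpoint

def process(inbound_bytes):
    reader = csv.reader(io.StringIO(inbound_bytes.decode("utf-8")))
    success, failure = io.StringIO(), io.StringIO()
    ok_writer, bad_writer = csv.writer(success), csv.writer(failure)
    for row in reader:
        (ok_writer if len(row) == 3 else bad_writer).writerow(row)  # placeholder validation rule
    for key, buf in [("success.csv", success), ("failure.csv", failure)]:
        s3.put_object(Bucket="reports", Key=key, Body=buf.getvalue().encode("utf-8"))
```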