amazon-s3

Upload CSV stream from Ruby to S3

这一生的挚爱 submitted on 2020-07-05 06:40:30
Question: I am dealing with potentially huge CSV files which I want to export from my Rails app, and since it runs on Heroku, my idea was to stream these CSV files directly to S3 while generating them. The problem is that Aws::S3 expects a file in order to perform an upload, whereas in my Rails app I would like to do something like:

S3.bucket('my-bucket').object('my-csv') << %w(this is one line)

How can I achieve this?

Answer 1: You can use the S3 multipart upload, which allows upload by
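The truncated answer is pointing at the S3 multipart upload API (in Ruby, the aws-sdk-s3 gem wraps the same flow in Aws::S3::Object#upload_stream). Purely as an illustration, here is a minimal sketch of that flow using boto3 in Python; the bucket, key and chunk generator are placeholders, not anything taken from the original answer.

```python
import boto3

def generate_csv_chunks():
    # Stand-in for streaming CSV generation; every part except the last
    # must be at least 5 MB.
    yield b"col1,col2\n" + b"a,b\n" * 2_000_000

# Minimal sketch of the multipart-upload flow: start the upload, send each
# generated chunk as a numbered part, then complete with the collected ETags.
s3 = boto3.client("s3")
bucket, key = "my-bucket", "my-csv"

mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []
for part_number, chunk in enumerate(generate_csv_chunks(), start=1):
    resp = s3.upload_part(
        Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
        PartNumber=part_number, Body=chunk,
    )
    parts.append({"ETag": resp["ETag"], "PartNumber": part_number})

s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": parts},
)
```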

Local ffmpeg output to S3 Bucket

為{幸葍}努か submitted on 2020-07-05 03:32:30
Question: Here's my setup: I have a local PC running ffmpeg with output configured to H.264 and AAC, and an S3 bucket created at AWS. What I need to do is use the ffmpeg (local) output to upload files directly to the S3 bucket. PS: I am planning to use that S3 bucket with CloudFront to allow one user to stream a live event with the above setup. I could not find a way to specify the output location as an S3 bucket (with key). Any ideas as to how to do it? Thanks.

Answer 1: You can: Mount the S3 bucket using S3FS FUSE and then you
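The answer is cut off after mentioning an S3FS FUSE mount. A different approach, offered only as a hedged sketch, is to pipe ffmpeg's stdout into a streaming boto3 upload so nothing touches disk; the input file, codecs, bucket and key below are placeholders.

```python
import subprocess
import boto3

# Sketch only: run ffmpeg locally, write the muxed H.264/AAC output to stdout
# ("pipe:1"), and stream that pipe into S3 with upload_fileobj, which chunks
# the stream into a multipart upload under the hood.
cmd = [
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx264", "-c:a", "aac",
    "-f", "mpegts", "pipe:1",
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)

s3 = boto3.client("s3")
s3.upload_fileobj(proc.stdout, "my-bucket", "live/output.ts")
proc.wait()
```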

Use CDN with CarrierWave + Fog in S3 + CloudFront with Rails 3.1

故事扮演 submitted on 2020-07-04 06:40:49
Question: I'm using Fog with CarrierWave on my website, but the images load very, very slowly, so I want to speed up image loading with a CDN. I followed this tutorial to create the CDN for the images: http://maketecheasier.com/configure-amazon-s3-as-a-content-delivery-network/2011/06/25 My distribution for images is now deployed, but I don't know how to get the CDN working. I have the following configuration in initializers/fog.rb:

CarrierWave.configure do |config|
  config.fog_credentials = {

Redirect / route s3 bucket to domain

≯℡__Kan透↙ submitted on 2020-06-29 12:24:46
Question: Hi, I am looking for the best way to redirect / route all S3 bucket links to a domain, or to link a domain to an S3 bucket. I am migrating from dedicated hosting to AWS and using an S3 bucket to store files. Example: http://www.example.com.au/app --> redirects to http://examplebucket.s3.amazonaws.com/app But I have heaps of them, plus hybrid apps that point to domain/files, so I am not sure whether there is a way to route this within the AWS console, or a PHP script to achieve it? Thanks.

Answer 1: You can point a custom domain name to an
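The answer is truncated, but "point a custom domain name to a[n S3 bucket]" usually comes down to a DNS record from the domain to the bucket's endpoint. As a hedged sketch only: the hosted zone ID, names and region below are placeholders, and for the record to resolve the bucket must be named exactly like the hostname (e.g. www.example.com.au) with static website hosting enabled.

```python
import boto3

# Sketch: upsert a CNAME from the custom domain to the bucket's website
# endpoint via Route 53. All identifiers here are hypothetical.
route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="ZXXXXXXXXXXXXX",  # hypothetical hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.au",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "www.example.com.au.s3-website-ap-southeast-2.amazonaws.com"}
                ],
            },
        }]
    },
)
```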

Fetch requests failing on Android to AWS S3 endpoint

你。 submitted on 2020-06-29 04:43:04
Question: I am using React Native with Expo SDK 37. My request looks like the following:

export const uploadMedia = async (fileData, s3Data) => {
  console.log(fileData.type)
  let formData = new FormData();
  formData.append('key', s3Data.s3Key);
  formData.append('Content-Type', fileData.type);
  formData.append('AWSAccessKeyId', s3Data.awsAccessKey);
  formData.append('acl', 'public-read');
  formData.append('policy', s3Data.s3Policy);
  formData.append('signature', s3Data.s3Signature);
  formData.append('file',
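The question stops at the file field and no answer is shown. For reference, the s3Data values appended above (key, policy, signature, acl, AWSAccessKeyId) look like the fields of an S3 presigned POST; the following is a hedged sketch of how such fields can be produced server-side with boto3 — an assumption about the backend, not something stated in the question.

```python
import boto3

# Sketch: generate presigned POST fields that a client can append to a
# multipart form body. Bucket name and conditions are placeholders.
s3 = boto3.client("s3")

post = s3.generate_presigned_post(
    Bucket="my-upload-bucket",
    Key="uploads/${filename}",
    Fields={"acl": "public-read", "Content-Type": "image/jpeg"},
    Conditions=[
        {"acl": "public-read"},
        ["starts-with", "$Content-Type", "image/"],
    ],
    ExpiresIn=3600,
)

# post["url"] is the form action; post["fields"] contains key, policy,
# signature, etc. that the client must send along with the file.
print(post["url"], post["fields"])
```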

Calling a stored procedure from an AWS Glue script

半世苍凉 submitted on 2020-06-29 03:59:36
Question: After the ETL job is done, what is the best way to call a stored procedure from an AWS Glue script? I am using PySpark to fetch the data from S3 and store it in a staging table. After this process, I need to call a stored procedure that loads the data from the staging table into the appropriate MDS tables. If I have to call a stored procedure after the ETL job is done, what is the best way? If I consider AWS Lambda, is there any way that the Lambda can be notified after the ETL?

Answer 1: You can use
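The answer breaks off at "You can use". One possible approach, not necessarily what that answer goes on to describe, is to call the procedure directly from the Glue script once the PySpark write has finished; this sketch assumes the staging and MDS tables live in a PostgreSQL-compatible database reachable from the job and that a driver such as psycopg2 is made available to it (for example via the --additional-python-modules job parameter). For the Lambda part of the question, an EventBridge rule on the Glue "Job State Change" event can notify a Lambda when the job succeeds.

```python
import psycopg2  # assumed to be packaged with the Glue job

# Sketch: after the PySpark ETL has populated the staging table, open a plain
# database connection from the same script and invoke the stored procedure.
# Host, credentials and procedure name are placeholders.
def call_load_procedure():
    conn = psycopg2.connect(
        host="my-db.cluster-xxxx.ap-south-1.rds.amazonaws.com",
        port=5432,
        dbname="mds",
        user="etl_user",
        password="***",
    )
    try:
        with conn.cursor() as cur:
            cur.execute("CALL load_staging_into_mds();")  # hypothetical procedure
        conn.commit()
    finally:
        conn.close()

# ... run the PySpark ETL steps first, then:
call_load_procedure()
```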

Nested loops in Python and CSV file

时间秒杀一切 submitted on 2020-06-29 03:49:13
Question: I have a Python Lambda with a nested for loop:

def lambda_handler(event, context):
    acc_ids = json.loads(os.environ.get('ACC_ID'))
    with open('/tmp/newcsv.csv', mode='w', newline='') as csv_file:
        fieldnames = ['DomainName', 'Subject', 'Status', 'RenewalEligibility', 'InUseBy']
        writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
        writer.writeheader()
        for acc_id in acc_ids:
            try:
                # do something
                for region in regions_to_scan:
                    try:
                        # do something
                        if something:
                            for x in list:
                                # get values for row
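The question text is cut off before the actual problem statement. Purely as a point of reference, here is a self-contained sketch of the pattern it describes: nested loops building rows, csv.DictWriter writing them to /tmp, and the finished file uploaded to S3. The bucket name, accounts, regions and row values are placeholders, not anything from the original post.

```python
import csv
import json
import os

import boto3

def lambda_handler(event, context):
    acc_ids = json.loads(os.environ.get("ACC_ID", '["111111111111"]'))
    regions_to_scan = ["us-east-1", "ap-southeast-2"]
    fieldnames = ["DomainName", "Subject", "Status", "RenewalEligibility", "InUseBy"]

    with open("/tmp/newcsv.csv", mode="w", newline="") as csv_file:
        writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
        writer.writeheader()
        for acc_id in acc_ids:
            for region in regions_to_scan:
                # stand-in for the per-account / per-region API calls in the question
                certificates = [{"DomainName": f"example-{acc_id}-{region}.com",
                                 "Subject": "CN=example", "Status": "ISSUED",
                                 "RenewalEligibility": "ELIGIBLE", "InUseBy": "yes"}]
                for cert in certificates:
                    writer.writerow(cert)

    # upload the finished report; bucket is a placeholder
    boto3.client("s3").upload_file("/tmp/newcsv.csv", "my-report-bucket", "newcsv.csv")
    return {"statusCode": 200}
```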

Is there a way to iterate through s3 object content using a SQL expression?

﹥>﹥吖頭↗ submitted on 2020-06-28 14:07:51
Question: I would like to iterate through each S3 bucket object and use a SQL expression to find all the content that matches the SQL. I was able to create a Python script that lists all the objects inside my bucket:

import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('bucketname')
startAfter = 'bucketname/directory'
for obj in bucket.objects.all():
    print(obj.key)

I was also able to create a Python script that uses a SQL expression to look through the object content:

import boto3
S3_BUCKET =
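No answer is visible, but the two scripts in the question map directly onto an object listing plus S3 Select. A minimal sketch combining them follows; the bucket name, prefix, SQL expression and CSV serialization settings are placeholders.

```python
import boto3

# Sketch: list the objects under a prefix, then run an S3 Select SQL
# expression against each CSV object and print the matching rows.
s3 = boto3.client("s3")
bucket = "bucketname"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix="directory/"):
    for obj in page.get("Contents", []):
        resp = s3.select_object_content(
            Bucket=bucket,
            Key=obj["Key"],
            ExpressionType="SQL",
            Expression="SELECT * FROM s3object s WHERE s.status = 'active'",
            InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
            OutputSerialization={"CSV": {}},
        )
        # The response payload is an event stream; 'Records' events carry the rows.
        for event in resp["Payload"]:
            if "Records" in event:
                print(obj["Key"], event["Records"]["Payload"].decode())
```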