amazon-s3

Is there a way to iterate through s3 object content using a SQL expression?

半城伤御伤魂 submitted on 2020-06-28 14:06:49
Question: I would like to iterate through each S3 bucket object and use a SQL expression to find all the content that matches the SQL. I was able to create a Python script that lists all the objects inside my bucket:

    import boto3
    s3 = boto3.resource('s3')
    bucket = s3.Bucket('bucketname')
    startAfter = 'bucketname/directory'
    for obj in bucket.objects.all():
        print(obj.key)

I was also able to create a Python script that uses a SQL expression to look through the object content:

    import boto3
    S3_BUCKET =
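A hedged sketch of one way to combine the two scripts: list the keys with boto3, then run an S3 Select query against each object with select_object_content. The bucket name, prefix, and SQL expression below are placeholders, and the input serialization assumes the objects are CSV files with a header row.

    import boto3

    s3 = boto3.client('s3')
    bucket = 'bucketname'   # placeholder bucket name
    prefix = 'directory/'   # placeholder key prefix

    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get('Contents', []):
            # S3 Select runs the SQL server-side against a single object
            response = s3.select_object_content(
                Bucket=bucket,
                Key=obj['Key'],
                ExpressionType='SQL',
                Expression="SELECT * FROM s3object s WHERE s.\"status\" = 'active'",
                InputSerialization={'CSV': {'FileHeaderInfo': 'USE'}},
                OutputSerialization={'CSV': {}},
            )
            # The result is an event stream; collect the Records payloads
            for event in response['Payload']:
                if 'Records' in event:
                    print(obj['Key'], event['Records']['Payload'].decode('utf-8'))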

Upload HTML file to AWS S3 and then serve it instead of downloading it

て烟熏妆下的殇ゞ submitted on 2020-06-28 07:08:29
Question: I am downloading a web page and then writing it to a file named thisArticle.html, using the code below:

    var file = fs.createWriteStream("thisArticle.html");
    var request = http.get(req.body.url, response => response.pipe(file));

After that I am trying to read the file and upload it to S3; here is the code that I wrote:

    fs.readFile('thisArticle.html', 'utf8', function(err, html){
        if (err) {
            console.log(err + "");
            throw err;
        }
        var pathToSave = 'articles/ ' + req.body.title + '.html';
        var s3bucket
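The usual reason a browser downloads an HTML object instead of rendering it is that it was uploaded with a generic content type such as application/octet-stream (or with a Content-Disposition of attachment). As a rough illustration in Python with boto3 rather than the poster's Node.js SDK, the upload can set ContentType explicitly; the bucket and key names here are placeholders.

    import boto3

    s3 = boto3.client('s3')

    # Upload the saved page with an HTML content type so S3 serves it
    # inline in the browser instead of triggering a download.
    with open('thisArticle.html', 'rb') as f:
        s3.put_object(
            Bucket='your-bucket-name',        # placeholder
            Key='articles/thisArticle.html',  # placeholder key
            Body=f.read(),
            ContentType='text/html; charset=utf-8',
            ContentDisposition='inline',
        )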

Optimize row access and transformation in pyspark

一世执手 submitted on 2020-06-28 03:58:42
Question: I have a large dataset (5 GB) of JSON in an S3 bucket. I need to transform the schema of the data and write the transformed data back to S3 using an ETL script. So I use a crawler to detect the schema, load the data into a pyspark dataframe, and change the schema. Now I iterate over every row in the dataframe, convert it to a dictionary, remove null columns, then convert the dictionary to a string and write it back to S3. Following is the code:

    #df is the pyspark dataframe
    columns =
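Iterating over rows and building Python dictionaries pulls all the work onto the driver. A minimal sketch of keeping the same steps inside the DataFrame API, assuming Spark 3.0+ and placeholder S3 paths; the JSON writer's ignoreNullFields option takes the place of the manual "remove null columns" loop.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("transform").getOrCreate()

    # Read the raw JSON from S3 (placeholder path)
    df = spark.read.json("s3://source-bucket/input/")

    # ...apply the schema changes here with select()/withColumn() instead of
    # looping over rows on the driver...

    # Write the result back as JSON; null-valued fields are dropped per record,
    # and the work stays distributed across the executors.
    (df.repartition(64)                      # tune the partition count for ~5 GB
       .write
       .mode("overwrite")
       .option("ignoreNullFields", "true")
       .json("s3://target-bucket/output/"))  # placeholder path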

Sending binary data to Amazon S3 (Javascript)

耗尽温柔 submitted on 2020-06-27 18:38:07
Question: Amazon S3 interprets my binary data as non-UTF-8 and modifies it when I write to a bucket. Example using the official S3 JavaScript client:

    var png_file = new Buffer(
        "iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg==",
        "base64"
    ).toString( "binary" );
    s3.putObject( {
        Bucket: bucket,
        Key: prefix + file,
        ContentType: "image/png;charset=utf-8",
        CacheControl: "public, max-age=31536000",
        Body: png_file
        // , ContentLength: png_file
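The corruption typically comes from converting the Buffer to a "binary" string before the upload; keeping the body as raw bytes avoids any re-encoding. Since the document's snippets mix languages, here is the same upload sketched with Python and boto3, where the body stays as bytes end to end (bucket and key are placeholders).

    import base64
    import boto3

    png_bytes = base64.b64decode(
        "iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/"
        "w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg=="
    )

    s3 = boto3.client('s3')
    # Pass the raw bytes as the body; no string conversion, so nothing is
    # reinterpreted as UTF-8 on the way to the bucket.
    s3.put_object(
        Bucket='your-bucket',       # placeholder
        Key='images/test.png',      # placeholder
        Body=png_bytes,
        ContentType='image/png',
        CacheControl='public, max-age=31536000',
    )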

Resize image in Node.js

最后都变了- submitted on 2020-06-27 17:59:11
Question: I want to resize my images before I upload them to Amazon S3. I tried to use the 'resizeImg' function but it doesn't work: the image is uploaded at its original size, not the new size. My code is written in Node.js and uploads to Amazon S3. The image name is beach_life-normal.jpg. My code:

    var AWS = require('aws-sdk'),
        fs = require('fs');
    var express = require("express");
    var app = express();
    const resizeImg = require('resize-img');

    // For dev purposes only
    AWS.config.update({ accessKeyId: 'key',
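The upload has to wait for the resize to finish before it reads the image bytes. As an illustration of the same "resize, then upload" flow in Python (an assumption, since the poster uses Node.js), Pillow handles the resize and boto3 uploads the result; bucket and key names are placeholders.

    import io
    import boto3
    from PIL import Image

    def resize_and_upload(local_path, bucket, key, size=(320, 240)):
        # Resize first, then upload the resized bytes; the upload only runs
        # once the new image exists (the equivalent of awaiting resize-img).
        img = Image.open(local_path).resize(size)

        buf = io.BytesIO()
        img.save(buf, format='JPEG')
        buf.seek(0)

        s3 = boto3.client('s3')
        s3.upload_fileobj(buf, bucket, key, ExtraArgs={'ContentType': 'image/jpeg'})

    resize_and_upload('beach_life-normal.jpg', 'your-bucket', 'images/beach_life-small.jpg')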

Upload Image (PNG file) from Google App Script to S3 Bucket

强颜欢笑 submitted on 2020-06-27 16:53:08
Question: Trying to upload a PNG file to an S3 bucket using S3-for-Google-Apps-Script:

    // get the image blob
    const imgBlob = UrlFetchApp.fetch('imageUrl').getBlob();
    // init S3 instance
    const s3 = S3.getInstance(awsAccessKeyId, awsSecretKey);
    // upload the image to S3 bucket
    s3.putObject(bucketName, 'test.png', imgBlob, { logRequests: true });

The file is uploaded to S3, but not correctly. If I download the image and open it, I get the error: "It may be damaged or use a file format that
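For comparison, a minimal sketch of the same operation in Python with boto3: fetch the PNG over HTTP and upload the raw, undecoded bytes with an explicit image/png content type. The URL and names are placeholders; whatever the Apps Script library sends on the wire should correspond to this raw-bytes request.

    import urllib.request
    import boto3

    # Fetch the image and keep it as raw bytes (no text decoding anywhere)
    with urllib.request.urlopen('https://example.com/image.png') as resp:  # placeholder URL
        img_bytes = resp.read()

    s3 = boto3.client('s3')
    s3.put_object(
        Bucket='your-bucket',       # placeholder
        Key='test.png',
        Body=img_bytes,
        ContentType='image/png',    # explicit type so the object opens as a PNG
    )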

next.js export static - S3 - routing fails on page reload

穿精又带淫゛_ submitted on 2020-06-27 16:37:06
Question: I'm deploying a next.js app as a static export to an S3 bucket configured for static website hosting. I use next's build and export commands to generate the out/ directory and then copy that into my S3 bucket. The bucket then contains some files; for simplicity let's say there's just index.html and about.html. The problem is that when a user hits index.html via www.website.com and then navigates to www.website.com/about, everything works, but reloading www.website.com/about fails, of course. www.website
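S3 static website hosting serves the object whose key exactly matches the request path, so /about returns a 404 when the only object is about.html. One common workaround, sketched here with Python and boto3 (bucket and directory names are placeholders), is to upload each exported page both under its .html key and under an extension-less key with a text/html content type; another option is next.js's trailingSlash setting, which makes export emit about/index.html instead.

    import os
    import boto3

    s3 = boto3.client('s3')
    bucket = 'your-website-bucket'   # placeholder

    for root, _, files in os.walk('out'):
        for name in files:
            if not name.endswith('.html'):
                continue
            path = os.path.join(root, name)
            key = os.path.relpath(path, 'out').replace(os.sep, '/')
            with open(path, 'rb') as f:
                body = f.read()
            # Upload about.html under its normal key...
            s3.put_object(Bucket=bucket, Key=key, Body=body, ContentType='text/html')
            # ...and also under an extension-less key so reloading /about resolves.
            if key != 'index.html':
                s3.put_object(Bucket=bucket, Key=key[:-len('.html')], Body=body,
                              ContentType='text/html')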

NoCredentialsError : Unable to locate credentials - python module boto3

£可爱£侵袭症+ submitted on 2020-06-27 09:05:38
Question: I am running django in a Python virtual environment (virtualenv). The django website is served by apache2 from an Amazon EC2 instance (Ubuntu 16.04). I use the boto3 module to write to Amazon S3. I installed awscli, ran aws configure, and set up my AWS access keys correctly. (I know I configured it correctly, because $ aws s3 ls returns the correct list of my S3 buckets.) However, when I try to write some objects to S3 from the django application, it fails, producing the error as described in the
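A likely cause: under apache2 the Django process runs as a different user (typically www-data) that cannot read the ~/.aws/credentials file aws configure wrote for the login user, so boto3 finds no credentials. A minimal sketch of one workaround, passing credentials to boto3 explicitly from environment variables set in the apache/WSGI configuration (the bucket name is a placeholder):

    import os
    import boto3

    # Build an explicit session instead of relying on ~/.aws/credentials,
    # which the apache user usually cannot read.
    session = boto3.session.Session(
        aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
        aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
        region_name=os.environ.get('AWS_DEFAULT_REGION', 'us-east-1'),
    )

    s3 = session.resource('s3')
    s3.Bucket('your-bucket').put_object(Key='test.txt', Body=b'hello')  # placeholder names

On EC2, attaching an IAM instance profile to the instance avoids storing keys in the application at all.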