amazon-s3

How Can I Write Logs Directly to AWS S3 from Memory Without First Writing to stdout? (Python, boto3)

Submitted by 两盒软妹~` on 2020-05-13 05:13:08
Question: I'm trying to write Python log files directly to S3 without first saving them to stdout. I want the log files to be written to S3 automatically when the program is done running. I'd like to use the boto3 put_object method:

import atexit
import logging
import boto3

def write_logs(body, bucket, key):
    s3 = boto3.client("s3")
    s3.put_object(Body=body, Bucket=bucket, Key=key)

log = logging.getLogger("some_log_name")
log.info("Hello S3")
atexit.register(write_logs, body=log, bucket="bucket_name",
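The excerpt above cuts off mid-call, so the full approach isn't shown. A minimal sketch of one way to do this, assuming the goal is to buffer log records in memory and upload the buffer's contents at exit; the bucket name and key below are placeholders:

import atexit
import io
import logging
import boto3

# Capture log output in an in-memory buffer instead of stdout.
log_buffer = io.StringIO()
handler = logging.StreamHandler(log_buffer)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

log = logging.getLogger("some_log_name")
log.setLevel(logging.INFO)
log.addHandler(handler)

def write_logs(bucket, key):
    # getvalue() returns everything logged so far; encode it for put_object.
    s3 = boto3.client("s3")
    s3.put_object(Body=log_buffer.getvalue().encode("utf-8"), Bucket=bucket, Key=key)

# Upload the buffered log when the interpreter exits.
atexit.register(write_logs, bucket="my-bucket", key="logs/run.log")

log.info("Hello S3")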

Using S3 as static web page and EC2 as REST API for it together? (AWS)

Submitted by 只谈情不闲聊 on 2020-05-13 04:45:26
Question: I found this link that talks about separating static data and a web API into a static S3 web server plus an Elastic Beanstalk application for the API and an EC2 web server, in order to create a website. The answer from Charles is accurate: CORS is how you address the problem of moving between the two domains. How to use S3 as static web page and EC2 as REST API for it together? (AWS) The question I have is why you would do this? Some of my thoughts are: Advantage - We use node as the web server for the api and
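Since the excerpt points at CORS as the key issue, here is a minimal sketch of what the API side of that setup can look like. It uses Flask and flask-cors purely as an illustration (the question mentions Node for its API), and the S3 website origin below is a placeholder:

from flask import Flask, jsonify
from flask_cors import CORS

app = Flask(__name__)
# Allow the S3-hosted static site (a different origin) to call this API from the browser.
CORS(app, origins=["http://my-site.s3-website-us-east-1.amazonaws.com"])

@app.route("/api/items")
def items():
    return jsonify({"items": ["a", "b", "c"]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)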

How to find size of a folder inside an S3 bucket?

Submitted by ε祈祈猫儿з on 2020-05-13 04:39:14
Question: I am using the boto3 module in Python to interact with S3, and I'm currently able to get the size of every individual key in an S3 bucket. But my goal is to find the storage used by only the top-level folders (every folder is a different project), because we need to charge per project for the space used. I'm able to get the names of the top-level folders, but I don't get any details about their sizes in the implementation below. The following is my implementation to get the top level
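A minimal sketch of one way to compute per-prefix sizes with boto3, assuming each "folder" is a top-level key prefix; the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"
paginator = s3.get_paginator("list_objects_v2")

# Discover top-level prefixes ("folders") using Delimiter="/".
top_level = []
for page in paginator.paginate(Bucket=bucket, Delimiter="/"):
    top_level += [p["Prefix"] for p in page.get("CommonPrefixes", [])]

# Sum object sizes under each prefix.
for prefix in top_level:
    total = 0
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        total += sum(obj["Size"] for obj in page.get("Contents", []))
    print(prefix, total, "bytes")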

AWS Python Lambda Function - Upload File to S3

Submitted by 巧了我就是萌 on 2020-05-11 05:33:45
Question: I have an AWS Lambda function written in Python 2.7 in which I want to: 1) Grab an .xls file from an HTTP address. 2) Store it in a temp location. 3) Store the file in an S3 bucket. My code is as follows:

from __future__ import print_function
import urllib
import datetime
import boto3
from botocore.client import Config

def lambda_handler(event, context):
    """Make a variable containing the date format based on YYYYMMDD"""
    cur_dt = datetime.datetime.today().strftime('%Y%m%d')
    """Make a variable
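The code above is cut off, so here is a minimal sketch of the download-then-upload flow it describes, assuming Python 2.7's urllib; the URL, bucket, and key are placeholders, and Lambda only allows writes under /tmp:

import urllib
import boto3

def lambda_handler(event, context):
    url = "http://example.com/report.xls"   # placeholder source address
    local_path = "/tmp/report.xls"          # Lambda's writable temp location
    # Python 2.7; on Python 3 this would be urllib.request.urlretrieve.
    urllib.urlretrieve(url, local_path)
    s3 = boto3.client("s3")
    s3.upload_file(local_path, "my-bucket", "reports/report.xls")
    return {"status": "uploaded"}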

Spark Streaming checkpoint recovery is very slow

Submitted by 此生再无相见时 on 2020-05-10 07:23:07
Question: Goal: Read from Kinesis and store data in S3 in Parquet format via Spark Streaming. Situation: The application runs fine initially, running 1-hour batches with an average processing time under 30 minutes. Suppose the application crashes for some reason and we try to restart from the checkpoint: the processing now takes forever and does not move forward. We tried to test the same thing at a batch interval of 1 minute; the processing runs fine and takes 1.2 minutes for a batch to
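For context, a minimal sketch of the checkpoint-recovery pattern the question relies on, in PySpark Streaming; the checkpoint path and batch interval are placeholders, and the Kinesis source and Parquet sink are omitted:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

checkpoint_dir = "s3a://my-bucket/checkpoints/"  # placeholder checkpoint location

def create_context():
    sc = SparkContext(appName="kinesis-to-parquet")
    ssc = StreamingContext(sc, batchDuration=60)  # placeholder 1-minute batches
    ssc.checkpoint(checkpoint_dir)
    # ... attach the Kinesis DStream and the Parquet-writing output operation here ...
    return ssc

# On restart, this rebuilds the streaming context from the checkpoint instead of
# calling create_context(), which is the step where slow recovery shows up.
ssc = StreamingContext.getOrCreate(checkpoint_dir, create_context)
ssc.start()
ssc.awaitTermination()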

collectstatic does not push files to S3

Submitted by 不羁的心 on 2020-05-10 03:37:34
Question: EDIT: I have found that removing import django_heroku from my settings.py file allows me to push my static files to my AWS bucket. When I uncomment import django_heroku, collectstatic then pushes my files to the staticfiles folder. manage.py collectstatic with #import django_heroku: You have requested to collect static files at the destination location as specified in your settings. manage.py collectstatic with import django_heroku: You have requested to collect static files at the
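The excerpt suggests django_heroku is overriding the S3 static-files settings. A minimal sketch of a settings.py arrangement often used to avoid that, assuming django-storages is already configured; the bucket name is a placeholder:

# settings.py (sketch)
import django_heroku

STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"  # django-storages backend
AWS_STORAGE_BUCKET_NAME = "my-bucket"                             # placeholder bucket

# staticfiles=False keeps django_heroku from replacing the static-files settings above.
django_heroku.settings(locals(), staticfiles=False)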
