I have a CSV file in S3 and I'm trying to read the header line to get the size (these files are created by our users, so they could be almost any size). Is there a way to do this?
If you want to read multiple files (line by line) with a specific bucket prefix (i.e., in a "subfolder") you can do this:
import boto3

s3 = boto3.resource('s3', aws_access_key_id='', aws_secret_access_key='')
bucket = s3.Bucket('')

for obj in bucket.objects.filter(Prefix=''):
    # read() pulls the whole object into memory; fine for small files
    for line in obj.get()['Body'].read().splitlines():
        print(line.decode('utf-8'))
Here the lines are bytes, so I decode them; if they are already strings, you can skip the decode.
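Since the question is about reading just the header line, you don't need to download the whole object: a `get()` call accepts a `Range` parameter, so you can fetch only the first few kilobytes and parse the first line out of that. A minimal sketch (the bucket/key names in the comment are hypothetical, and this assumes a UTF-8 CSV whose header fits in the requested range):

    import csv

    def header_columns(raw, encoding='utf-8'):
        # Take everything up to the first newline and parse it as one CSV row.
        first_line = raw.split(b'\n', 1)[0].decode(encoding)
        return next(csv.reader([first_line]))

    # With boto3, fetch only the first bytes of the object, e.g.:
    #   raw = s3.Object('my-bucket', 'users/report.csv').get(
    #       Range='bytes=0-4095')['Body'].read()
    raw = b'id,name,amount\n1,alice,3.50\n2,bob,7.25\n'
    print(header_columns(raw))  # ['id', 'name', 'amount']

Using `csv.reader` on the first line (rather than a plain `split(',')`) keeps quoted column names containing commas intact.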