amazon-s3

Save AWS Polly mp3 file to S3

旧巷老猫 Submitted on 2020-07-08 03:20:20
Question: I am trying to send some text to AWS Polly to convert it to speech and then save that mp3 file to S3. That part seems to work now.
// Send text to AWS Polly
$client_polly = new Aws\Polly\PollyClient([
    'region' => 'us-west-2',
    'version' => 'latest',
    'credentials' => [
        'key' => $aws_useKey,
        'secret' => $aws_secret,
    ]
]);
$text = 'Test. Test. This is a sample text to be synthesized.';
$voice = 'Matthew';
$result_polly = $client_polly->startSpeechSynthesisTask([
    'Text' => $text,
    'TextType' => 'text'
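The PHP preview above is cut off before the S3 output settings. As a rough illustration of the same idea, here is a minimal sketch in Python with boto3 (rather than the PHP SDK used in the question); the bucket name and key prefix are placeholders, not values from the original post. Polly's StartSpeechSynthesisTask writes the generated mp3 directly to the given S3 bucket, so no separate upload step is needed.

# Minimal sketch (assumed setup): Polly writes the mp3 straight to S3.
# Bucket name and key prefix below are illustrative placeholders.
import boto3

polly = boto3.client("polly", region_name="us-west-2")

response = polly.start_speech_synthesis_task(
    Text="Test. Test. This is a sample text to be synthesized.",
    TextType="text",
    VoiceId="Matthew",
    OutputFormat="mp3",
    OutputS3BucketName="my-example-bucket",  # placeholder bucket
    OutputS3KeyPrefix="polly/",              # placeholder key prefix
)

# The task runs asynchronously; the eventual S3 object URI is reported here.
print(response["SynthesisTask"]["TaskId"])
print(response["SynthesisTask"]["OutputUri"])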

Fastest and Efficient way to upload to S3 using FileInputStream

感情迁移 Submitted on 2020-07-08 00:39:37
Question: I am trying to upload huge files to S3 using BufferedInputStream with a buffer size of 5MB, but the performance of the application suffers because the network speed limits the amount of data available to read, as mentioned in this answer link (limited to 1MB). This forces me to upload 1MB parts at a time to S3 using UploadPartRequest, which increases my upload time. So, is there any other better and faster way to upload to S3 using FileInputStream as a source? Is
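The question concerns the AWS SDK for Java, but the underlying idea (hand the whole file to the SDK's managed transfer layer and let it choose part size and concurrency, instead of feeding it one small part at a time) can be sketched in Python with boto3; the file name, bucket, and key below are illustrative placeholders.

# Rough sketch (boto3, not the Java SDK from the question): pass the file
# path to the managed transfer layer, which performs concurrent multipart
# uploads, rather than reading and uploading one small part at a time.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # switch to multipart above 8 MB
    multipart_chunksize=16 * 1024 * 1024,  # 16 MB parts
    max_concurrency=8,                     # parts uploaded in parallel
)

# "huge-file.bin" and "my-example-bucket" are placeholders, not values
# taken from the original post.
s3.upload_file("huge-file.bin", "my-example-bucket",
               "uploads/huge-file.bin", Config=config)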

Transfer file from AWS S3 to SFTP using Boto 3

不打扰是莪最后的温柔 Submitted on 2020-07-06 11:29:08
Question: I am a beginner with Boto3 and I would like to transfer a file from an S3 bucket to an SFTP server directly. My final goal is to write a Python script for AWS Glue. I have found an article that shows how to transfer a file from an SFTP server to an S3 bucket: https://medium.com/better-programming/transfer-file-from-ftp-server-to-a-s3-bucket-using-python-7f9e51f44e35 Unfortunately I can't find anything that does the opposite. Do you have any suggestions/ideas? My first wrong attempt is
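One way to sketch the opposite direction, assuming a paramiko SFTP connection is available: stream the S3 object into a file handle opened on the SFTP server, so nothing has to be staged on local disk. The host, credentials, bucket, key, and remote path below are placeholders, not values from the question.

# Sketch only: read an S3 object and write it to an SFTP server in chunks.
# Hostname, credentials, bucket, key, and remote path are placeholders.
import boto3
import paramiko

s3 = boto3.client("s3")

transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="example-user", password="example-password")
sftp = paramiko.SFTPClient.from_transport(transport)

# Stream the object body into the remote file one chunk at a time.
response = s3.get_object(Bucket="my-example-bucket", Key="exports/report.csv")
with sftp.open("/upload/report.csv", "wb") as remote_file:
    for chunk in response["Body"].iter_chunks(chunk_size=1024 * 1024):
        remote_file.write(chunk)

sftp.close()
transport.close()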

Copying data from S3 to Redshift hangs

久未见 Submitted on 2020-07-05 12:56:28
Question: I've been trying to load data into Redshift for the last couple of days with no success. I have provided the correct IAM role to the cluster, given access to S3, and I am using the COPY command with either the AWS credentials or the IAM role, and so far no success. What can be the reason for this? It has come to the point that I don't have many options left. The code is pretty basic, nothing fancy there. See below:
copy test_schema.test from 's3://company.test/tmp/append.csv.gz' iam
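The COPY statement in the preview is cut off. As a rough, assumption-laden sketch of one diagnostic step, the COPY can be run from Python with psycopg2 and a statement timeout so that a stuck load errors out instead of hanging silently; the cluster endpoint, credentials, and IAM role ARN below are placeholders, while the table and S3 path are taken from the question.

# Sketch only: run the COPY with a statement timeout so a blocked load
# fails fast instead of hanging. Endpoint, credentials, and role ARN are
# illustrative placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-west-2.redshift.amazonaws.com",
    port=5439,
    dbname="example_db",
    user="example_user",
    password="example_password",
)

with conn, conn.cursor() as cur:
    # Abort the statement if it runs longer than 5 minutes (value in ms).
    cur.execute("SET statement_timeout TO 300000")
    cur.execute("""
        COPY test_schema.test
        FROM 's3://company.test/tmp/append.csv.gz'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
        GZIP
        CSV
    """)

conn.close()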

S3 allow public directory listing of parent folder?

前提是你 Submitted on 2020-07-05 10:06:21
Question: I currently have public permissions for one of my S3 buckets like so:
{
    "Version": "2012-10-17",
    "Id": "Policy1493660686651",
    "Statement": [
        {
            "Sid": "Stmt1493660682556",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::polodata/*"
        }
    ]
}
When a user navigates to a specific file like https://s3-us-west-2.amazonaws.com/polodata/ETHBTC.csv, it prompts the user to download -- which is fine. However, when they navigate to: https://s3-us-west-2.amazonaws
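The preview is cut off, but the usual way to make a bucket's key listing publicly readable is to also allow s3:ListBucket on the bucket ARN itself (not the /* object ARN). A hedged sketch follows, reusing the polodata bucket name from the question and applying the policy with boto3; note that this exposes the names of every object in the bucket.

# Sketch: add a public s3:ListBucket statement alongside the existing
# s3:GetObject statement, then apply the policy with boto3.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::polodata/*",
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket",
            # Listing applies to the bucket ARN, without the trailing /*.
            "Resource": "arn:aws:s3:::polodata",
        },
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="polodata", Policy=json.dumps(policy))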

AWS Glue output file name

时间秒杀一切 Submitted on 2020-07-05 07:59:31
Question: I am using AWS Glue to transform some JSON files. I have added the files to Glue from S3. The job I have set up reads the files in OK, the job runs successfully, and a file is added to the correct S3 bucket. The issue I have is that I can't name the file - it is given a random name, and it is also not given the .json extension. How can I name the file and also add the extension to the output?
Answer 1: Due to the nature of how Spark works, it's not possible to name the file. However, it's possible to
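The answer preview cuts off here. One commonly used workaround (not necessarily the one the truncated answer goes on to describe) is to let the job write its part file and then rename it afterwards with boto3, since S3 has no true rename; the bucket, prefix, and target key below are illustrative placeholders.

# Sketch only: assumes the Glue/Spark job was repartitioned to a single
# partition, so exactly one part-xxxx file lands under the output prefix.
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"             # placeholder
output_prefix = "glue-output/"           # placeholder prefix the job writes to
target_key = "glue-output/result.json"   # the name and extension we want

# Find the single part file the job produced under the output prefix.
listing = s3.list_objects_v2(Bucket=bucket, Prefix=output_prefix)
part_key = next(obj["Key"] for obj in listing.get("Contents", [])
                if "part-" in obj["Key"])

# S3 has no real rename, so copy to the desired key and delete the original.
s3.copy_object(Bucket=bucket,
               CopySource={"Bucket": bucket, "Key": part_key},
               Key=target_key)
s3.delete_object(Bucket=bucket, Key=part_key)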