Tracking Upload Progress of File to S3 Using Ruby aws-sdk

Posted by 徘徊边缘 on 2019-12-12 08:39:56

Question


Firstly, I am aware that there are quite a few questions similar to this one on SO. I have read most, if not all, of them over the past week, but I still can't make this work for me.

I am developing a Ruby on Rails app that allows users to upload mp3 files to Amazon S3. The upload itself works perfectly, but a progress bar would greatly improve user experience on the website.

I am using the aws-sdk gem, which is the official one from Amazon. I have looked everywhere in its documentation for callbacks during the upload process, but I couldn't find anything.

The files are uploaded one at a time directly to S3, so they don't need to be loaded into memory. No multiple-file upload is necessary either.

I figured that I may need to use jQuery to make this work, and I am fine with that. I found this, which looked very promising: https://github.com/blueimp/jQuery-File-Upload And I even tried following the example here: https://github.com/ncri/s3_uploader_example

But I just could not make it work for me.

The documentation for aws-sdk also BRIEFLY describes streaming uploads with a block:

  obj.write do |buffer, bytes|
     # writing fewer than the requested number of bytes to the buffer
     # will cause write to stop yielding to the block
  end

But this is barely helpful. How does one "write to the buffer"? I tried a few intuitive options that all resulted in timeouts. And how would I even update the browser based on the buffering?

Is there a better or simpler solution to this?

Thank you in advance. I would appreciate any help on this subject.


Answer 1:


The "buffer" object yielded when passing a block to #write is an instance of StringIO. You can write to the buffer using #write or #<<. Here is an example that uses the block form to upload a file.

file = File.open('/path/to/file', 'rb') # binary mode so the bytes are read verbatim

obj = s3.buckets['my-bucket'].objects['object-key']
obj.write(:content_length => file.size) do |buffer, bytes|
  # the SDK asks for `bytes` bytes at a time; copy them into the StringIO buffer
  buffer.write(file.read(bytes))
  # you could do some interesting things here to track progress
end

file.close
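
To track progress, you can count the bytes you copy into the buffer. Here is a minimal sketch (the puts line is purely illustrative; in a Rails app you would more likely persist the percentage somewhere the browser can poll):

total   = file.size
written = 0

obj.write(:content_length => total) do |buffer, bytes|
  chunk = file.read(bytes) || '' # read returns nil at EOF; writing fewer bytes stops the yielding
  buffer.write(chunk)
  written += chunk.bytesize
  puts format('%.1f%% uploaded', 100.0 * written / total)
end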



Answer 2:


After reading the source code of the AWS gem, I've adapted (or mostly copied) the multipart upload method to yield the current progress based on how many chunks have been uploaded:

s3 = AWS::S3.new.buckets['your_bucket']

file = File.open(filepath, 'r', encoding: 'BINARY')
file_to_upload = "#{s3_dir}/#{filename}"
upload_progress = 0

opts = {
  content_type: mime_type,
  cache_control: 'max-age=31536000',
  estimated_content_length: file.size,
}

part_size = compute_part_size(opts)

parts_number = (file.size.to_f / part_size).ceil.to_i
obj          = s3.objects[file_to_upload]

obj.multipart_upload(opts) do |upload|
  until file.eof?
    break if upload.aborted?

    upload.add_part(file.read(part_size))
    upload_progress += 1.0 / parts_number

    # Yields the Float progress and the upload object for the
    # file that's currently being uploaded
    yield(upload_progress, upload) if block_given?
  end
end

file.close

The compute_part_size method is based on the one defined in the gem's source, and I've modified it to this:

def compute_part_size(options)
  max_parts = 10_000
  min_size  = 5_242_880 # 5 MB, the minimum part size S3 allows
  estimated_size = options[:estimated_content_length]

  [(estimated_size.to_f / max_parts).ceil, min_size].max.to_i
end
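
For a 1 GB file, for example, this returns max(ceil(1073741824 / 10000), 5242880) = 5242880 bytes per part, which works out to about 205 parts, so each yielded progress increment is roughly 0.5%.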

This code was tested on Ruby 2.0.0p0.
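
Assuming you wrap the snippet above in a method, say upload_with_progress(filepath, s3_dir, filename, mime_type) (a hypothetical name and signature), the caller can then drive a progress display with a block:

upload_with_progress(filepath, s3_dir, filename, mime_type) do |progress, upload|
  # illustrative: persist or broadcast the percentage for the browser
  puts format('uploaded %.0f%%', progress * 100)
end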



Source: https://stackoverflow.com/questions/12085751/tracking-upload-progress-of-file-to-s3-using-ruby-aws-sdk
