Uploading big files over HTTP


I'm eight months late, but I just stumbled upon this question and was surprised that WebDAV wasn't mentioned. You could use the HTTP PUT method to upload, and include a Content-Range header to handle resuming and such. A HEAD request would tell you whether the file already exists and how big it is. So perhaps something like this (sketched in code after the steps):

1) HEAD the remote file.

2) If it exists and its size equals the local size, the upload is already done.

3) If its size is less than the local size, add a Content-Range header to the request and seek to the appropriate location in the local file.

4) Make a PUT request to upload the file (or the remaining portion, if resuming).

5) If the connection fails during the PUT request, start over at step 1.
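
Here is a minimal sketch of that flow in Python, using only the standard library. The host and paths are placeholders, and it assumes the server honors Content-Range on PUT (not every DAV server does):

```python
# Minimal sketch of the HEAD-then-PUT resume flow described above.
# Host and paths are placeholders; assumes the server accepts
# Content-Range on PUT.
import http.client
import os

HOST = "example.com"
REMOTE = "/dav/uploads/abc.txt"
LOCAL = "abc.txt"

def upload_with_resume():
    local_size = os.path.getsize(LOCAL)
    conn = http.client.HTTPConnection(HOST)

    # Step 1: HEAD the remote file to see how much is already there.
    conn.request("HEAD", REMOTE)
    resp = conn.getresponse()
    resp.read()  # drain the (empty) body so the connection can be reused
    remote_size = int(resp.getheader("Content-Length", 0)) if resp.status == 200 else 0

    # Step 2: if the sizes match, the upload is already done.
    if remote_size == local_size:
        return

    # Step 3: describe the slice being sent and skip what the server has.
    headers = {"Content-Length": str(local_size - remote_size)}
    if remote_size > 0:
        headers["Content-Range"] = f"bytes {remote_size}-{local_size - 1}/{local_size}"

    # Step 4: PUT the remainder of the file.
    with open(LOCAL, "rb") as f:
        f.seek(remote_size)
        conn.request("PUT", REMOTE, body=f, headers=headers)
        resp = conn.getresponse()
        print(resp.status, resp.reason)
    # Step 5: on a dropped connection, catch the socket error and
    # simply call upload_with_resume() again.
```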

You can also list (PROPFIND) and rename (MOVE) files, and create directories (MKCOL), with DAV.
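
A quick sketch of those verbs, again standard library only, with a placeholder host and paths:

```python
# Sketch of the other DAV verbs mentioned above; host and paths are
# placeholders, and a DAV-enabled server is assumed.
import http.client

conn = http.client.HTTPConnection("example.com")

# MKCOL creates a collection (directory) on the server.
conn.request("MKCOL", "/dav/uploads/")
resp = conn.getresponse()
resp.read()
print(resp.status)  # 201 Created on success

# PROPFIND with Depth: 1 lists the collection's immediate members.
conn.request("PROPFIND", "/dav/uploads/", headers={"Depth": "1"})
resp = conn.getresponse()
print(resp.status)        # 207 Multi-Status
print(resp.read()[:200])  # XML body describing each resource

# MOVE renames (or relocates) a file; Destination must be absolute.
conn.request("MOVE", "/dav/uploads/abc.txt",
             headers={"Destination": "http://example.com/dav/uploads/abc-old.txt"})
print(conn.getresponse().status)  # 201 or 204 on success
```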

I believe both Apache and Lighttpd have DAV extensions.

You need a standard chunk size (say 256 KB). If your file "abc.txt", uploaded by user X, is 78.3 MB, it works out to 313 full chunks and one smaller final chunk. (A client-side sketch of steps 3 and 4 follows the list.)

  1. You send a request to upload, stating the filename and size, as well as the number of initial threads.
  2. Your PHP code creates a temp folder named after the IP address and filename.
  3. Your app can then use MULTIPLE connections to send the data in different threads, so you could be sending chunks 1, 111, 212, and 313 at the same time (each with its own checksum).
  4. Your PHP code saves them to different files and confirms reception after validating the checksum, replying with the number of the next chunk to send, or telling that thread to stop.
  5. After all threads are finished, you ask the PHP side to join all the files; if something is missing, go back to step 3.
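
Here is a rough client-side sketch of steps 3 and 4 in Python; the /upload.php endpoint and its query parameters are hypothetical stand-ins for whatever your PHP code actually expects:

```python
# Rough sketch of the multi-threaded chunk sender (steps 3-4).
# The /upload.php endpoint and its parameters are hypothetical;
# only the standard library is used.
import hashlib
import http.client
import threading

CHUNK_SIZE = 256 * 1024  # the standard chunk size from the scheme above
HOST = "example.com"
LOCAL = "abc.txt"

def send_chunk(index):
    """Read chunk `index` from the local file, checksum it, and POST it."""
    with open(LOCAL, "rb") as f:
        f.seek(index * CHUNK_SIZE)
        data = f.read(CHUNK_SIZE)
    checksum = hashlib.md5(data).hexdigest()
    conn = http.client.HTTPConnection(HOST)
    conn.request(
        "POST",
        f"/upload.php?file={LOCAL}&chunk={index}&md5={checksum}",
        body=data,
        headers={"Content-Type": "application/octet-stream"},
    )
    resp = conn.getresponse()
    # The server would validate the checksum and reply with the number
    # of the next chunk to send (or tell this thread to stop).
    print(index, resp.status, resp.read())
    conn.close()

# Four threads, each starting on a different chunk, as in the example.
threads = [threading.Thread(target=send_chunk, args=(i,)) for i in (1, 111, 212, 313)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```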

You could increase or decrease the number of threads at will, since the app is controlling the sending.

You can easily show a progress indicator, either a simple progress bar or something close to DownThemAll's detailed view of chunks.

libcurl (the C API) could be a viable option:

-C/--continue-at: Continue/Resume a previous file transfer at the given offset. The given offset is the exact number of bytes that will be skipped, counting from the beginning of the source file, before it is transferred to the destination. If used with uploads, the FTP server command SIZE will not be used by curl. Use "-C -" to tell curl to automatically find out where/how to resume the transfer; it then uses the given output/input files to figure that out. If this option is used several times, the last one will be used.
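
For example, letting curl work out the resume offset by itself (the URL is a placeholder):

```
curl -C - -T bigfile.bin http://example.com/uploads/bigfile.bin
```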

Google has created a Resumable HTTP Upload protocol. See https://developers.google.com/gdata/docs/resumable_upload
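
In rough outline, that protocol initiates a session with a POST and then PUTs the file in chunks, each labeled with a Content-Range. A sketch, with placeholder URLs and the standard library only:

```python
# Sketch of the resumable-upload handshake described in the linked
# docs: an initiating POST yields a session URI, then the file goes
# up in Content-Range-labeled chunks. URLs here are placeholders.
import http.client
import os

LOCAL = "abc.txt"
CHUNK = 512 * 1024
size = os.path.getsize(LOCAL)

conn = http.client.HTTPSConnection("example.com")

# 1) Initiate the session; the server answers with a unique upload
#    URI in the Location header.
conn.request("POST", "/resumable/create-session", headers={
    "X-Upload-Content-Type": "application/octet-stream",
    "X-Upload-Content-Length": str(size),
})
resp = conn.getresponse()
resp.read()
upload_uri = resp.getheader("Location")  # may live on another host in practice

# 2) PUT the chunks; the server answers each partial chunk with
#    308 (Resume Incomplete) plus a Range header confirming how many
#    bytes it has stored, which is what makes resuming possible.
with open(LOCAL, "rb") as f:
    offset = 0
    while offset < size:
        data = f.read(CHUNK)
        end = offset + len(data) - 1
        conn.request("PUT", upload_uri, body=data, headers={
            "Content-Range": f"bytes {offset}-{end}/{size}",
        })
        resp = conn.getresponse()
        resp.read()
        offset = end + 1  # after a failure, re-sync with "Content-Range: bytes */<size>"
```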

Is reversing the whole process an option? I mean, instead of pushing the file over to the server, make the server pull the file using a standard HTTP GET with all the bells and whistles (like Accept-Ranges, etc.).

Maybe the easiest method would be to create an upload page that accepts the filename and range as parameters, such as http://yourpage/.../upload.php?file=myfile&from=123456, and handle resumes in the client (maybe you could add a function to inspect which ranges the server has received).
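
A sketch of that client-side logic: the from= parameter matches the URL above, while the status query used to discover the server's offset is a hypothetical addition:

```python
# Sketch of the resume-in-the-client idea: ask the server how many
# bytes of "myfile" it already has, then send the rest from there.
# upload.php matches the URL above; status.php is a hypothetical helper.
import http.client
import os

HOST = "example.com"
LOCAL = "myfile"

size = os.path.getsize(LOCAL)
conn = http.client.HTTPConnection(HOST)

# Ask the (hypothetical) helper how many bytes arrived so far.
conn.request("GET", f"/status.php?file={LOCAL}")
resp = conn.getresponse()
received = int(resp.read() or 0)

# Resume from that offset using the ?file=...&from=... convention.
with open(LOCAL, "rb") as f:
    f.seek(received)
    conn.request("POST", f"/upload.php?file={LOCAL}&from={received}",
                 body=f,
                 headers={"Content-Length": str(size - received),
                          "Content-Type": "application/octet-stream"})
print(conn.getresponse().status)
```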

@Anton Gogolev Lol, I was just thinking about the same thing: reversing the whole thing, making the server a client and the client a server. Thanks to Roel, it's clearer to me now why it wouldn't work.

@Roel I would suggest implementing a Java uploader [JumpLoader is good, with its JavaScript interface and even sample PHP server-side code]. Flash uploaders suffer badly when it comes to BIIIGGG files :) (in the gigabyte range, that is).

F*EX can upload files up to the TB range via HTTP and is able to resume after link failures. It does not exactly meet your needs, because it is written in Perl and needs a UNIX-based server, but the clients can be on any operating system. Maybe it is helpful for you nevertheless: http://fex.rus.uni-stuttgart.de/
