File upload in chunks using jquery file upload plugin in JAVA


Question


I'm trying to use the blueimp jQuery File Upload plugin to upload large files (greater than 1 GB). I found that the maxChunkSize option makes the plugin split the upload into chunks on the client side. On the server, the chunk range and file name can be read from the Content-Range and Content-Disposition headers.

My server is WebLogic and the server-side code lives in a Servlet.

Here are my questions:

  1. Server side: how do I know whether a request contains the last chunk?
  2. Server side: how do I write all the received chunks into a single file?
  3. How can I tell that chunked requests belong to the same file, since each chunk is sent as an individual request?

Answer 1:


Check the plugin's wiki on GitHub - it has a section on chunked file uploads.

From the wiki:

The example PHP upload handler supports chunked uploads out of the box.

To support chunked uploads, the upload handler makes use of the Content-Range header, which is transmitted by the plugin for each chunk.

Check the example PHP code linked above.
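
For illustration (the byte values here are made up), the header carried by a chunk request has this shape:

Content-Range: bytes 0-999999/1073741824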

Server side: how do I know whether a request contains the last chunk?

Each Content-Range request header contains the byte range of the file carried in that request as well as the total size of the file. So you can compare the end of the range with the total size: the request whose range ends at the file's final byte (total - 1) carries the last chunk.

Check the example given in this section on W3C's website.
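
As a minimal sketch (not part of the plugin itself; it assumes the "bytes from-to/total" format shown above), the last-chunk check in a servlet could look like this:

import javax.servlet.http.HttpServletRequest;

// Sketch: returns true if this request carries the last chunk of the file.
// Assumes a header of the form "Content-Range: bytes <from>-<to>/<total>".
private static boolean isLastChunk(HttpServletRequest request) {
    String range = request.getHeader("Content-Range");
    if (range == null || !range.startsWith("bytes "))
        return true;  // not chunked, so this single request is also the "last" one
    String[] rangeAndTotal = range.substring("bytes ".length()).split("/");
    long total = Long.parseLong(rangeAndTotal[1]);
    long to = Long.parseLong(rangeAndTotal[0].split("-")[1]);
    return to == total - 1;  // the last chunk ends at the file's final byte
}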

Server side: how do I write all the received chunks into a single file?

You can collect all the chunks in memory and then write them to a file in one go - but this can be inefficient for larger files. Java's IO API also lets you write to a section of a file at a given offset. Check this question.
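
For example, here is a sketch using java.io.RandomAccessFile, which lets you seek to an offset and write a chunk directly into its place in the target file (the method and parameter names are just for illustration):

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch: write one chunk at its byte offset inside the (possibly still
// incomplete) target file. Mode "rw" creates the file if it does not exist yet.
private static void writeChunk(File target, long offset, byte[] chunk) throws IOException {
    RandomAccessFile raf = new RandomAccessFile(target, "rw");
    try {
        raf.seek(offset);   // move to the chunk's start position
        raf.write(chunk);   // write the chunk bytes in place
    } finally {
        raf.close();
    }
}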

How can I tell that chunked requests belong to the same file, since each chunk is sent as an individual request?

Check for the Content-Range header in each request - if a request has that header then it is one of the many chunk-upload requests. Using the value of the header, you can figure out which part/section of the file is contained in that request.

Also, the Content-Disposition header in the request contains the file name, which you can use to link the various requests that belong to the same file.
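
As a rough sketch (libraries such as Apache Commons FileUpload already expose the file name via FileItem.getName(), and real Content-Disposition values can be more elaborate), a basic extraction could look like this:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract the file name from a header such as
//   Content-Disposition: attachment; filename="video.mp4"
private static String fileNameFrom(String contentDisposition) {
    Matcher m = Pattern.compile("filename=\"?([^\";]+)\"?").matcher(contentDisposition);
    return m.find() ? m.group(1) : null;
}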




Answer 2:


Can I answer with working code? Here are the necessary client and server parts; see the explanation below.

Client:

<input id="fileupload" type="file" name="files[]" data-url="file.upload" multiple>
<script>
var uuid = function() {
            return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, 
                function(c) {
                    var r = Math.random()*16|0, v = c == 'x' ? r : (r&0x3|0x8);
                    return v.toString(16);
                });
        };
$(function () {
    $('#fileupload').fileupload({
        dataType: 'json',
        maxChunkSize: 1000000,  // split the upload into ~1 MB chunks
        done: function (e, data) {
            $.each(data.result.files, function (index, file) {
                $('<p/>').text(file.name).appendTo(document.body);
            });
        }
    }).bind('fileuploadsubmit', function (e, data) {
        // Attach the same unique uploadId to every chunk of this file
        data.formData = {uploadId: uuid()};
    });
});
</script>

WEB-INF/web.xml:

<!-- ...other servlet blocks... -->

<servlet>
    <servlet-name>fileUploadServlet</servlet-name>
    <servlet-class>your.package.FileUploadServlet</servlet-class>
</servlet>   

<!-- ...other servlet-mapping blocks... -->

<servlet-mapping>
    <servlet-name>fileUploadServlet</servlet-name>
    <url-pattern>/file.upload</url-pattern>
</servlet-mapping> 

Servlet "FileUploadServlet":

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Pattern;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.FileItemFactory;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

import com.fasterxml.jackson.databind.ObjectMapper;

...

@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    String range = request.getHeader("Content-Range");
    long fileFullLength = -1;
    long chunkFrom = -1;
    long chunkTo = -1;
    if (range != null) {
        if (!range.startsWith("bytes "))
            throw new ServletException("Unexpected range format: " + range);
        String[] fromToAndLength = range.substring(6).split(Pattern.quote("/"));
        fileFullLength = Long.parseLong(fromToAndLength[1]);
        String[] fromAndTo = fromToAndLength[0].split(Pattern.quote("-"));
        chunkFrom = Long.parseLong(fromAndTo[0]);
        chunkTo = Long.parseLong(fromAndTo[1]);
    }
    File tempDir = new File(System.getProperty("java.io.tmpdir"));  // Configure these directories
    File storageDir = tempDir;                                      // for your server environment.
    String uploadId = null;
    FileItemFactory factory = new DiskFileItemFactory(10000000, tempDir);
    ServletFileUpload upload = new ServletFileUpload(factory);
    try {
        List<?> items = upload.parseRequest(request);
        Iterator<?> it = items.iterator();
        List<Map<String, Object>> ret = new ArrayList<Map<String,Object>>();
        while (it.hasNext()) {
            FileItem item = (FileItem) it.next();
            if (item.isFormField()) {
                if (item.getFieldName().equals("uploadId"))
                        uploadId = item.getString();
            } else {
                Map<String, Object> fileInfo = new LinkedHashMap<String, Object>();
                File assembledFile = null;
                fileInfo.put("name", item.getName());
                fileInfo.put("type", item.getContentType());
                File dir = new File(storageDir, uploadId);
                if (!dir.exists())
                    dir.mkdir();
                if (fileFullLength < 0) {  // File is not chunked
                    fileInfo.put("size", item.getSize());
                    assembledFile = new File(dir, item.getName());
                    item.write(assembledFile);
                } else {  // File is chunked
                    byte[] bytes = item.get();
                    if (chunkFrom + bytes.length != chunkTo + 1)
                        throw new ServletException("Unexpected length of chunk: " + bytes.length + 
                                " != " + (chunkTo + 1) + " - " + chunkFrom);
                    saveChunk(dir, item.getName(), chunkFrom, bytes, fileFullLength);
                    TreeMap<Long, Long> chunkStartsToLengths = getChunkStartsToLengths(dir, item.getName());
                    long lengthSoFar = getCommonLength(chunkStartsToLengths);
                    fileInfo.put("size", lengthSoFar);
                    if (lengthSoFar == fileFullLength) {
                        assembledFile = assembleAndDeleteChunks(dir, item.getName(), 
                                new ArrayList<Long>(chunkStartsToLengths.keySet()));
                    }
                }
                if (assembledFile != null) {
                    fileInfo.put("complete", true);
                    fileInfo.put("serverPath", assembledFile.getAbsolutePath());
                    // Here you can do something with fully assembled file.
                }
                ret.add(fileInfo);
            }
        }
        Map<String, Object> filesInfo = new LinkedHashMap<String, Object>();
        filesInfo.put("files", ret);
        response.setContentType("application/json");
        response.getWriter().write(new ObjectMapper().writeValueAsString(filesInfo));
        response.getWriter().close();
    } catch (ServletException ex) {
        throw ex;
    } catch (Exception ex) {
        ex.printStackTrace();
        throw new ServletException(ex);
    }
}

private static void saveChunk(File dir, String fileName, 
        long from, byte[] bytes, long fileFullLength) throws IOException {
    File target = new File(dir, fileName + "." + from + ".chunk");
    OutputStream os = new FileOutputStream(target);
    try {
        os.write(bytes);
    } finally {
        os.close();
    }
}

private static TreeMap<Long, Long> getChunkStartsToLengths(File dir, 
        String fileName) throws IOException {
    TreeMap<Long, Long> chunkStartsToLengths = new TreeMap<Long, Long>();
    for (File f : dir.listFiles()) {
        String chunkFileName = f.getName();
        if (chunkFileName.startsWith(fileName + ".") && 
                chunkFileName.endsWith(".chunk")) {
            chunkStartsToLengths.put(Long.parseLong(chunkFileName.substring(
                    fileName.length() + 1, chunkFileName.length() - 6)), f.length());
        }
    }
    return chunkStartsToLengths;
}

private static long getCommonLength(TreeMap<Long, Long> chunkStartsToLengths) {
    long ret = 0;
    for (long len : chunkStartsToLengths.values())
        ret += len;
    return ret;
}

private static File assembleAndDeleteChunks(File dir, String fileName, 
        List<Long> chunkStarts) throws IOException {
    File assembledFile = new File(dir, fileName);
    if (assembledFile.exists()) // In case chunks arrive concurrently
        return assembledFile;
    OutputStream assembledOs = new FileOutputStream(assembledFile);
    byte[] buf = new byte[100000];
    try {
        for (long chunkFrom : chunkStarts) {
            File chunkFile = new File(dir, fileName + "." + chunkFrom + ".chunk");
            InputStream is = new FileInputStream(chunkFile);
            try {
                while (true) {
                    int r = is.read(buf);
                    if (r == -1)
                        break;
                    if (r > 0)
                        assembledOs.write(buf, 0, r);
                }
            } finally {
                is.close();
            }
            chunkFile.delete();
        }
    } finally {
        assembledOs.close();
    }
    return assembledFile;
}

The idea is to write each chunk to a separate file and to assemble them once the combined length of all chunk files equals the full file length. On the client side you can attach custom properties when the upload starts (here it is done for uploadId - a unique ID generated per file). This uploadId is the same for all chunks of one file. Any questions? Just let me know.

[UPDATE] There are two dependencies: https://commons.apache.org/proper/commons-fileupload/ and https://github.com/FasterXML/jackson .



Source: https://stackoverflow.com/questions/32390775/file-upload-in-chunks-using-jquery-file-upload-plugin-in-java
