I know about uploading in chunks, do we have to do something on the receiving end?

Submitted by 巧了我就是萌 on 2020-01-03 05:19:18

Question


My Azure Function receives large video files and images and stores them in Azure Blob storage. The client API sends data in chunks to my Azure HTTP-trigger function. Do I have to do something on the receiving end to improve performance, such as receiving the data in chunks?

Bruce, the client code is actually being developed by another team. Right now I am testing with Postman and reading the files from a multipart HTTP request.

foreach (HttpContent ctnt in provider.Contents)
{
    var dataStream = await ctnt.ReadAsStreamAsync();
    if (ctnt.Headers.ContentDisposition.Name.Trim().Replace("\"", "") == "file")
    {
        // Buffers the entire part into server memory before further processing
        byte[] ImageBytes = ReadFully(dataStream);
        var fileName = WebUtility.UrlDecode(ctnt.Headers.ContentDisposition.FileName);
    }
}

ReadFully Function

public static byte[] ReadFully(Stream input)
{
    using (MemoryStream ms = new MemoryStream())
    {
        // Copies the whole input stream into memory and returns it as a byte array
        input.CopyTo(ms);
        return ms.ToArray();
    }
}

Answer 1:


As the documentation for BlobRequestOptions.ParallelOperationThreadCount states:

Gets or sets the number of blocks that may be simultaneously uploaded.

Remarks:

When using the UploadFrom* methods on a blob, the blob will be broken up into blocks. Setting this value limits the number of outstanding I/O "put block" requests that the library will have in-flight at a given time. Default is 1 (no parallelism). Setting this value higher may result in faster blob uploads, depending on the network between the client and the Azure Storage service. If blobs are small (less than 256 MB), keeping this value equal to 1 is advised.

You could explicitly set ParallelOperationThreadCount for a faster blob upload:

var requestOption = new BlobRequestOptions()
{
    // Number of blocks that may be simultaneously uploaded
    ParallelOperationThreadCount = 5
};

// Upload a blob from the local file system
await blockBlob.UploadFromFileAsync("{your-file-path}", null, requestOption, null);

// Upload a blob from a stream
await blockBlob.UploadFromStreamAsync({stream-for-upload}, null, requestOption, null);

foreach (HttpContent ctnt in provider.Contents)

Based on your code, I assume you retrieve the provider instance as follows:

MultipartMemoryStreamProvider provider = await request.Content.ReadAsMultipartAsync();

At this time, you could use the following code for uploading your new blob:

var blobname = ctnt.Headers.ContentDisposition.FileName.Trim('"');
CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobname);
// Set the content type for the new blob
blockBlob.Properties.ContentType = ctnt.Headers.ContentType.MediaType;
// ctnt is already an HttpContent, so read the stream from it directly
await blockBlob.UploadFromStreamAsync(await ctnt.ReadAsStreamAsync(), null, requestOption, null);

I would prefer to use MultipartFormDataStreamProvider, which stores the uploaded files on the file system, rather than MultipartMemoryStreamProvider, which uses server memory to temporarily hold the data sent from the client. For the MultipartFormDataStreamProvider approach, you could follow this similar issue; a sketch is given below. Moreover, I would prefer to use the Azure Storage Client Library with my Azure Function; you could follow Get started with Azure Blob storage using .NET.
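For reference, here is a minimal sketch of the MultipartFormDataStreamProvider approach. It assumes the function has a writable temp directory, that request is the incoming HttpRequestMessage, and that container is a CloudBlobContainer you have already initialized; those names are placeholders, not part of the original code.

// Sketch only: stream multipart parts to temp files instead of server memory.
// Requires System.IO, System.Net, System.Net.Http and the storage client library.
var root = Path.GetTempPath();
var provider = new MultipartFormDataStreamProvider(root);
await request.Content.ReadAsMultipartAsync(provider);

foreach (MultipartFileData file in provider.FileData)
{
    // Each part was written to a temp file rather than buffered in memory
    var blobName = WebUtility.UrlDecode(
        file.Headers.ContentDisposition.FileName.Trim('"'));
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);
    await blockBlob.UploadFromFileAsync(file.LocalFileName);
}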

UPDATE:

Moreover, you could follow this tutorial about breaking a large file into small chunks, uploading them from the client, and then merging them back together on the server side. A rough sketch of the server side of that idea, using block blobs, follows.
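As a rough illustration (not the tutorial's exact code): each chunk can be staged as a block with PutBlockAsync, and the blocks committed in order with PutBlockListAsync once the last chunk arrives. The x-file-name, x-chunk-index, and x-chunk-total headers below are hypothetical names that would have to be agreed with the client team.

// Sketch only: merge client-side chunks by staging them as blob blocks.
// 'request' is the incoming HttpRequestMessage; requires System.Linq,
// System.Text and the storage client library.
string blobName = request.Headers.GetValues("x-file-name").First();
int index = int.Parse(request.Headers.GetValues("x-chunk-index").First());
int total = int.Parse(request.Headers.GetValues("x-chunk-total").First());

CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);

// Block IDs must be Base64-encoded and the same length within one blob
string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(index.ToString("d6")));

using (var chunk = await request.Content.ReadAsStreamAsync())
{
    await blockBlob.PutBlockAsync(blockId, chunk, null);
}

// After the last chunk, commit the block list to assemble the final blob
if (index == total - 1)
{
    var blockIds = Enumerable.Range(0, total)
        .Select(i => Convert.ToBase64String(Encoding.UTF8.GetBytes(i.ToString("d6"))));
    await blockBlob.PutBlockListAsync(blockIds);
}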



Source: https://stackoverflow.com/questions/48556716/i-know-about-uploading-in-chunks-do-we-have-to-do-something-on-receiving-end
