How do I download a large file (via HTTP) in .NET?


Question


I need to download a large file (2 GB) over HTTP in a C# console application. The problem is that after about 1.2 GB, the application runs out of memory.

Here's the code I'm using:

WebClient request = new WebClient();
request.Credentials = new NetworkCredential(username, password);
byte[] fileData = request.DownloadData(baseURL + fName);

As you can see... I'm reading the file directly into memory. I'm pretty sure I could solve this if I were to read the data back from HTTP in chunks and write it to a file on disk.

How could I do this?


Answer 1:


If you use WebClient.DownloadFile, you can save it directly to a file.
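
A minimal sketch of that call, reusing the variables from the question (baseURL, fName, username, password); using fName as the local destination name is just an assumption:

using (var client = new WebClient())
{
    client.Credentials = new NetworkCredential(username, password);
    // Streams the response straight to disk instead of buffering 2 GB in memory
    client.DownloadFile(baseURL + fName, fName);
}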




Answer 2:


The WebClient class is meant for simplified scenarios. Once you get past those (and you have), you'll have to fall back a bit and use WebRequest.

With WebRequest, you'll have access to the response stream, and you'll be able to loop over it, reading a bit and writing a bit, until you're done.


Example:

// Requires: using System.IO; using System.Net;
public void MyDownloadFile(Uri url, string outputFilePath)
{
    const int BUFFER_SIZE = 16 * 1024;
    using (var outputFileStream = File.Create(outputFilePath, BUFFER_SIZE))
    {
        var req = WebRequest.Create(url);
        using (var response = req.GetResponse())
        using (var responseStream = response.GetResponseStream())
        {
            // Read the response in fixed-size chunks so that only
            // BUFFER_SIZE bytes are ever held in memory at once.
            var buffer = new byte[BUFFER_SIZE];
            int bytesRead;
            while ((bytesRead = responseStream.Read(buffer, 0, BUFFER_SIZE)) > 0)
            {
                outputFileStream.Write(buffer, 0, bytesRead);
            }
        }
    }
}

Note that if WebClient.DownloadFile works, then I'd call it the best solution. I wrote the above before the "DownloadFile" answer was posted. I also wrote it way too early in the morning, so a grain of salt (and testing) may be required.




Answer 3:


You need to get the response stream and then read it in blocks, writing each block to a file so the same memory can be reused.

As you have written it, the whole response, all 2 GB, needs to be in memory. Even on a 64-bit system, that will hit the 2 GB limit for a single .NET object.


Update: an easier option is to let WebClient do the work for you: its DownloadFile method will put the data directly into a file.
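
On .NET 4.5 or later, the same idea works asynchronously via WebClient.DownloadFileTaskAsync; a minimal sketch, assuming an async calling context and reusing the question's baseURL, fName, username, and password (the progress handler is optional):

using (var client = new WebClient())
{
    client.Credentials = new NetworkCredential(username, password);

    // Optional: report progress while the file streams to disk
    client.DownloadProgressChanged += (s, e) =>
        Console.Write("\r{0:N0} of {1:N0} bytes", e.BytesReceived, e.TotalBytesToReceive);

    await client.DownloadFileTaskAsync(new Uri(baseURL + fName), fName);
}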




Answer 4:


WebClient.OpenRead returns a Stream; just use Read to loop over the contents, so the data is not buffered in memory and can be written to a file in blocks.
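
A sketch of that approach under the same assumptions as the question; Stream.CopyTo (available since .NET 4.0) performs the read/write loop with a small internal buffer (80 KB by default):

using (var client = new WebClient())
{
    client.Credentials = new NetworkCredential(username, password);
    using (Stream responseStream = client.OpenRead(baseURL + fName))
    using (Stream fileStream = File.Create(fName))
    {
        // Copies in small chunks, so memory use stays flat regardless of file size
        responseStream.CopyTo(fileStream);
    }
}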




Answer 5:


I would use something like this.




Answer 6:


The connection can be interrupted, so it is better to download the file in small chunks.

Akka Streams can help download the file in small chunks from a System.IO.Stream using multithreading: https://getakka.net/articles/intro/what-is-akka.html

The Download method appends the bytes to the file starting at offset fileStart. If the file does not exist yet, fileStart must be 0.

using Akka.Actor;
using Akka.IO;
using Akka.Streams;
using Akka.Streams.Dsl;
using Akka.Streams.IO;
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;

private static Sink<ByteString, Task<IOResult>> FileSink(string filename)
{
    // Append each incoming ByteString chunk to the target file.
    return Flow.Create<ByteString>()
        .ToMaterialized(FileIO.ToFile(new FileInfo(filename), FileMode.Append), Keep.Right);
}

private async Task Download(string path, Uri uri, long fileStart)
{
    using (var system = ActorSystem.Create("system"))
    using (var materializer = system.Materializer())
    {
        var request = (HttpWebRequest)WebRequest.Create(uri);
        // Ask the server for the bytes from fileStart onward (HTTP Range header),
        // so an interrupted download can be resumed.
        request.AddRange(fileStart);

        using (WebResponse response = request.GetResponse())
        using (Stream stream = response.GetResponseStream())
        {
            // Stream the response body to the file sink in 1 KB chunks.
            await StreamConverters.FromInputStream(() => stream, chunkSize: 1024)
                .RunWith(FileSink(path), materializer);
        }
    }
}
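
A hypothetical call site that resumes an interrupted download by passing the current length of the partial file as fileStart (the path and URL here are placeholders):

var path = @"C:\temp\bigfile.bin";
var uri = new Uri("https://example.com/bigfile.bin");

// Resume from where the partial file ends, or start from 0 for a fresh download
long fileStart = File.Exists(path) ? new FileInfo(path).Length : 0;
await Download(path, uri, fileStart);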


Source: https://stackoverflow.com/questions/1078523/how-do-i-download-a-large-file-via-http-in-net
