I'm trying to download a number of PDF files automagically, given a list of URLs.
Here's the code I have:

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
Why not use the WebClient class?

using (WebClient webClient = new WebClient())
{
    webClient.DownloadFile("url", "filePath");
}
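Since you have a list of URLs rather than a single file, a minimal sketch of looping WebClient over the list could look like the following (the urls collection and the downloads folder are just placeholders for illustration):

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;

class PdfDownloader
{
    static void Main()
    {
        // Placeholder list of URLs; replace with your own source.
        var urls = new List<string>
        {
            "http://www.site.com/file1.pdf",
            "http://www.site.com/file2.pdf"
        };

        Directory.CreateDirectory("downloads");

        using (var webClient = new WebClient())
        {
            foreach (var url in urls)
            {
                // Name the local file after the last segment of the URL.
                var fileName = Path.GetFileName(new Uri(url).LocalPath);
                webClient.DownloadFile(url, Path.Combine("downloads", fileName));
            }
        }
    }
}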
Your question asks about WebClient, but your code shows you using raw HTTP requests and responses.

Why don't you actually use System.Net.WebClient?

using (System.Net.WebClient wc = new System.Net.WebClient())
{
    wc.DownloadFile("http://www.site.com/file.pdf", "C:\\Temp\\File.pdf");
}
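One thing to watch when running through a whole list: DownloadFile throws a WebException for a dead link or a 404, so you may want to catch it per URL so one failure doesn't stop the rest of the batch. A rough sketch (the Download helper here is hypothetical):

using System;
using System.Net;

static class Downloader
{
    // Hypothetical helper: download one file and log per-file failures.
    public static void Download(string url, string path)
    {
        using (var wc = new WebClient())
        {
            try
            {
                wc.DownloadFile(url, path);
            }
            catch (WebException ex)
            {
                // Log and move on so the remaining URLs still get downloaded.
                Console.WriteLine("Failed to download {0}: {1}", url, ex.Message);
            }
        }
    }
}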
private void Form1_Load(object sender, EventArgs e)
{
    WebClient webClient = new WebClient();
    webClient.DownloadFileCompleted += new AsyncCompletedEventHandler(Completed);
    webClient.DownloadProgressChanged += new DownloadProgressChangedEventHandler(ProgressChanged);

    // DownloadFileAsync takes a Uri and a destination path; Ticks is a property, so no parentheses.
    webClient.DownloadFileAsync(
        new Uri("https://www.colorado.gov/pacific/sites/default/files/Income1.pdf"),
        @"output/" + DateTime.Now.Ticks + ".pdf");
}

private void ProgressChanged(object sender, DownloadProgressChangedEventArgs e)
{
    progressBar.Value = e.ProgressPercentage;
}

private void Completed(object sender, AsyncCompletedEventArgs e)
{
    MessageBox.Show("Download completed!");
}
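If you want this pattern for every URL in your list without blocking the UI, one option (assuming .NET 4.5+; the urls list and button handler are hypothetical) is to await DownloadFileTaskAsync so the files download one after another while the same progress handler keeps firing:

// Goes in the same Form class as the handlers above.
// Requires: using System.Collections.Generic; using System.IO;
private async void downloadButton_Click(object sender, EventArgs e)
{
    var urls = new List<string>   // placeholder list of URLs
    {
        "https://www.colorado.gov/pacific/sites/default/files/Income1.pdf"
    };

    Directory.CreateDirectory("output");

    using (var webClient = new WebClient())
    {
        webClient.DownloadProgressChanged += ProgressChanged;

        foreach (var url in urls)
        {
            var target = Path.Combine("output", Path.GetFileName(new Uri(url).LocalPath));
            // Await each download so they run sequentially without freezing the UI.
            await webClient.DownloadFileTaskAsync(new Uri(url), target);
        }
    }

    MessageBox.Show("All downloads completed!");
}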
Skip the BinaryReader and BinaryWriter and just copy the input stream to the output FileStream. Briefly:

var fileName = "output/" + date.ToString("yyyy-MM-dd") + ".pdf";
using (var stream = File.Create(fileName))
    resp.GetResponseStream().CopyTo(stream);
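For completeness, here is a sketch of that stream-copy approach wired into the HttpWebRequest from the question (the URL and output folder are placeholders):

using System;
using System.IO;
using System.Net;

class StreamCopyDownload
{
    static void Main()
    {
        // Placeholder URL; substitute one entry from your list.
        var url = "http://www.site.com/file.pdf";

        var request = (HttpWebRequest)WebRequest.Create(url);

        using (var resp = (HttpWebResponse)request.GetResponse())
        using (var respStream = resp.GetResponseStream())
        {
            Directory.CreateDirectory("output");
            var fileName = "output/" + DateTime.Now.ToString("yyyy-MM-dd") + ".pdf";

            // Copy the response stream straight into the file; no BinaryReader/BinaryWriter needed.
            using (var file = File.Create(fileName))
            {
                respStream.CopyTo(file);
            }
        }
    }
}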