New member here. I'm trying to download a large number of files from a website in R (but I'm open to other suggestions as well, such as wget).
From this post, I understand I should use something like the code below. It downloads the files in batches and takes advantage of the speedier simultaneous downloading that download.file() offers when the libcurl method is available on your installation of R:
library(purrr)

# first 27 states and the agencies whose files I want
states <- state.abb[1:27]
agencies <- c("AID", "AMBC", "AMTRAK", "APHIS", "ATF", "BBG", "DOJ", "DOT",
              "BIA", "BLM", "BOP", "CBFO", "CBP", "CCR", "CEQ", "CFTC", "CIA",
              "CIS", "CMS", "CNS", "CO", "CPSC", "CRIM", "CRT", "CSB", "CSOSA",
              "DA", "DEA", "DHS", "DIA", "DNFSB", "DOC", "DOD", "DOE", "DOI")

# for each state, build one URL per agency and hand the whole vector to
# download.file(); method = "libcurl" lets it fetch several URLs at once
walk(states, function(x) {
  map(x, ~sprintf("http://website.gov/%s_%s.zip", ., agencies)) %>%
    flatten_chr() -> urls
  download.file(urls, basename(urls), method = "libcurl")
})
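In case it clarifies what I'm after, here is a rough, untested base-R sketch of the same idea, assuming the states and agencies vectors are defined as above; the batch size of 50 and the website.gov URL pattern are just placeholders from the example:

# rough, untested sketch: assumes the states and agencies vectors from above;
# build every state/agency URL up front, check libcurl support, then download
# in smaller batches so one failed call doesn't stop everything
urls <- as.vector(outer(states, agencies,
                        function(s, a) sprintf("http://website.gov/%s_%s.zip", s, a)))

stopifnot(capabilities("libcurl"))   # vectorised download.file() needs libcurl

batches <- split(urls, ceiling(seq_along(urls) / 50))   # batch size is arbitrary
for (batch in batches) {
  download.file(batch, destfile = basename(batch), method = "libcurl")
}

Would splitting the URLs into smaller batches like this be worthwhile, or is one download.file() call per state (as in the purrr version above) already the recommended approach?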