How to download multiple files using loop in R?


Assuming you want all the data without knowing all of the URLs, your question involves web scraping. The httr package provides useful functions for retrieving the HTML code of a given website, which you can then parse for links.
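For instance, retrieving the HTML of the index page as a single character string takes only two calls (this is a minimal sketch using the same index page as the full example below):

library(httr)

r  <- GET("http://www.censusindia.gov.in/2011census/HLO/HL_PCA/Houselisting-housing-HLPCA.html")
rc <- content(r, "text")   # the page's HTML as one string, ready to be searched for links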

Maybe this bit of code is what you're looking for:

library(httr)

base_url <- "http://www.censusindia.gov.in/2011census/HLO/"   # main website
r <- GET(paste0(base_url, "HL_PCA/Houselisting-housing-HLPCA.html"))
rc <- content(r, "text")
rcl <- unlist(strsplit(rc, "<a href =\\\""))                        # find links
rcl <- rcl[grepl("Houselisting-housing-.+?\\.html", rcl)]           # keep only links to houselisting pages

names <- gsub("^.+?>(.+?)</.+$", "\\1", rcl)                        # get names
names <- gsub("^\\s+|\\s+$", "", names)                             # trim names
links <- gsub("^(Houselisting-housing-.+?\\.html).+$", "\\1", rcl)  # get links

# iterate over regions
for(i in seq_along(links)) {
    url_hh <- paste0(base_url, "HL_PCA/", links[i])
    if(http_error(url_hh)) next   # skip broken links (http_error() replaces the deprecated url_success())

    r <- GET(url_hh)
    rc <- content(r, "text")
    rcl <- unlist(strsplit(rc, "<a href =\\\""))             # find links
    rcl <- rcl[grepl("\\.xlsx", rcl)]                        # keep only links to .xlsx files

    hh_names <- gsub("^.+?>(.+?)</.+$", "\\1", rcl)          # get names
    hh_names <- gsub("^\\s+|\\s+$", "", hh_names)            # trim names
    hh_links <- gsub("^(.+?\\.xlsx).+$", "\\1", rcl)         # get links

    # iterate over subregions
    for(j in seq_along(hh_links)) {
        url_xlsx <- paste0(base_url, "HL_PCA/", hh_links[j])
        if(http_error(url_xlsx)) next   # skip broken links

        filename <- paste0(names[i], "_", hh_names[j], ".xlsx")
        download.file(url_xlsx, filename, mode = "wb")   # "wb" keeps the binary .xlsx intact
    }
}
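Splitting raw HTML with regular expressions works here, but it can break if the markup changes even slightly. If that happens, a proper HTML parser is more robust. Here is a minimal sketch of the same link extraction using the xml2 package; the XPath and the filename pattern are assumptions about the page structure, not something taken from the original answer:

library(httr)
library(xml2)

base_url <- "http://www.censusindia.gov.in/2011census/HLO/"

# parse the index page with an HTML parser instead of string splitting
page  <- read_html(content(GET(paste0(base_url, "HL_PCA/Houselisting-housing-HLPCA.html")), "text"))
nodes <- xml_find_all(page, "//a")                         # all <a> elements
hrefs <- xml_attr(nodes, "href")                           # their href attributes
texts <- trimws(xml_text(nodes))                           # their link texts (region names)

keep  <- grepl("Houselisting-housing-.+\\.html$", hrefs)   # assumed pattern for houselisting pages
links <- hrefs[keep]
names <- texts[keep]

Whichever approach you use, it may also be worth adding a short Sys.sleep() between downloads so the loop does not hammer the server.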