How to scrape all pages (1, 2, 3, …, n) from a website using rvest

Submitted by 泪湿孤枕 on 2019-12-13 06:15:55

Question


# I would like to read the list of .html files to extract data. Appreciate your help.

library(rvest)
library(XML)
library(stringr)
library(data.table)
library(RCurl)

u0 <- "https://www.r-users.com/jobs/"
u1 <- read_html("https://www.r-users.com/jobs/")
download_folder <- ("C:/R/BNB/")
pages <- html_text(html_node(u1, ".results_count"))
Total_Pages <- substr(pages, 4, 7)
TP <- as.numeric(Total_Pages)
# reading first two pages, writing them as separate .html files
for (i in 1:TP) {
  url <- paste(u0, "page=/", i, sep = "")
  download.file(url, paste(download_folder, i, ".html", sep = ""))
  #create html object
  html <- html(paste(download_folder, i, ".html", sep = ""))
}

Answer 1:


Here is a potential solution:

library(rvest)
library(stringr)

u0 <- "https://www.r-users.com/jobs/"
u1 <- read_html("https://www.r-users.com/jobs/")
download_folder <- getwd()  #note change in output directory

TP <- max(as.integer(html_text(html_nodes(u1, "a.page-numbers"))), na.rm = TRUE)

# read each results page and save it as a separate .html file
for (i in 1:TP) {
  url <- paste(u0, "page/", i, "/", sep = "")
  print(url)
  # file.path() inserts the missing "/" between the folder and the file name
  destfile <- file.path(download_folder, paste(i, ".html", sep = ""))
  download.file(url, destfile)
  # create html object
  html <- read_html(destfile)
}

I could not find the class .results_count in the HTML, so instead I looked for the page-numbers class and picked the highest value returned. Also, the html() function is deprecated, so I replaced it with read_html(). Good luck!
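Once the pages are downloaded, the asker's original goal was to read the list of saved .html files and extract data from them. A minimal sketch of that step is below; note that the `.job_listing` CSS selector is an assumption made for illustration and would need to be checked against the actual page markup with your browser's inspector.

```r
library(rvest)

# List the saved pages (1.html ... TP.html) in the download folder
files <- list.files(download_folder, pattern = "\\.html$", full.names = TRUE)

# Parse each file and pull out the text of the job entries.
# ".job_listing" is a hypothetical selector -- replace it with the real one.
jobs <- lapply(files, function(f) {
  page <- read_html(f)
  html_text(html_nodes(page, ".job_listing"))
})

# Flatten the per-page lists into one character vector
all_jobs <- unlist(jobs)
head(all_jobs)
```

Working from the saved files rather than re-fetching the URLs keeps the extraction step reproducible and avoids hammering the server while you refine the selector.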



Source: https://stackoverflow.com/questions/39129125/how-to-scrape-all-pages-1-2-3-n-from-a-website-using-r-vest
