Extracting <tr> values from multiple html files

Submitted by 回眸只為那壹抹淺笑 on 2019-12-14 03:26:06

Question


I am new to web scraping. I have 3000+ html/htm files, and I need to extract the "tr" values from them and transform them into a data frame for further analysis.

The code I have used is:

html <- list.files(pattern="\\.(htm|html)$")

mydata <- lapply(html, read_html) %>%
  html_nodes("tr") %>%
  html_text()

Error in UseMethod("xml_find_all") : no applicable method for 'xml_find_all' applied to an object of class "character"

What am I doing wrong?

To extract the values into a data frame, I have this code:

u <- as.data.frame(matrix(mydata, byrow = TRUE), stringsAsFactors = FALSE)

Thank you in advance.


Answer 1:


lapply() outputs a list of parsed documents, and that list can't be handled by html_nodes() directly. Instead, include all the rvest steps inside lapply():

library(rvest)   # provides read_html(), html_nodes(), html_text() and %>%

html <- list.files(pattern = "\\.(htm|html)$")

mydata <- lapply(html, function(file) {
  read_html(file) %>% html_nodes('tr') %>% html_text()
})
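
As an aside (not part of the original answer): if every file contains a well-formed <table>, rvest's html_table() can parse each table straight into a data frame, which may save the later reshaping step. This is only a sketch under that assumption, with one table assumed per file:

library(rvest)

html <- list.files(pattern = "\\.(htm|html)$")

tables <- lapply(html, function(file) {
  read_html(file) %>%
    html_table() %>%   # one data frame per <table> found in the file
    .[[1]]             # keep the first table (assumption: one table per file)
})

all_rows <- do.call(rbind, tables)  # works if all tables share the same columns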

Example

With two test files in my working directory containing

<html>
  <head></head>
  <body>
    <table>
      <tr><td>Martin</td></tr>
    </table>
  </body>
</html>

and

<html>
  <head></head>
  <body>
    <table>
      <tr><td>Carolin</td></tr>
    </table>
  </body>
</html>

would output

> mydata
[[1]]
[1] "Martin"

[[2]]
[1] "Carolin"

In my case I could then flatten the result into a data frame using

data.frame(Content = unlist(mydata))
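
If it also helps to know which file each row came from, a small extension of the same idea (my own sketch, assuming the html and mydata objects built above):

names(mydata) <- html   # sketch only: label each list element with its file name
data.frame(
  File    = rep(names(mydata), lengths(mydata)),
  Content = unlist(mydata, use.names = FALSE),
  stringsAsFactors = FALSE
)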


Source: https://stackoverflow.com/questions/45460261/extracting-tr-values-from-multiple-html-files
