Question
I am new to web scraping. I have 3000+ html/htm files, and I need to extract the "tr" values from them and transform them into a data frame for further analysis.
The code I have used is:
html <- list.files(pattern = "\\.(htm|html)$")
mydata <- lapply(html, read_html) %>%
  html_nodes("tr") %>%
  html_text()
Error in UseMethod("xml_find_all") : no applicable method for 'xml_find_all' applied to an object of class "character"
What am I doing wrong?
To collect the results into a data frame, I have this code:
u <- as.data.frame(matrix(mydata,byrow = TRUE),stringsAsFactors = FALSE)
Thank you in advance.
Answer 1:
lapply outputs a list of parsed documents, which html_nodes cannot handle; it expects a single document or node set, not a list. Instead, include all the rvest actions inside lapply:
html <- list.files(pattern = "\\.(htm|html)$")
mydata <- lapply(html, function(file) {
  read_html(file) %>% html_nodes("tr") %>% html_text()
})
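With 3000+ files, a single malformed file would abort the whole loop. A minimal defensive variant of the answer's code, as a sketch (the tryCatch wrapper, the warning message, and the NULL fallback are my additions, not part of the original answer):

```r
library(rvest)

html <- list.files(pattern = "\\.(htm|html)$")

# Skip files that fail to parse instead of aborting the whole run
mydata <- lapply(html, function(file) {
  tryCatch(
    read_html(file) %>% html_nodes("tr") %>% html_text(),
    error = function(e) {
      warning("Could not parse ", file, ": ", conditionMessage(e))
      NULL  # NULL entries vanish when the list is later unlist()-ed
    }
  )
})
```

rvest re-exports the magrittr pipe, so no separate library(magrittr) call is needed.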
Example
With two test files in my working directory containing
<html>
<head></head>
<body>
<table>
<tr><td>Martin</td></tr>
</table>
</body>
</html>
and
<html>
<head></head>
<body>
<table>
<tr><td>Carolin</td></tr>
</table>
</body>
</html>
would output
> mydata
[[1]]
[1] "Martin"
[[2]]
[1] "Carolin"
In my case I could then format it using
data.frame(Content = unlist(mydata))
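If you also need to know which file each row of text came from, one possible extension of that one-liner (a sketch assuming the html and mydata objects from above; the column names are my own choice, not from the original answer):

```r
# One row per <tr>, with the source file recorded alongside its text.
# lengths(mydata) gives the number of rows extracted from each file,
# so rep() repeats each file name once per extracted row.
names(mydata) <- html
u <- data.frame(
  file    = rep(names(mydata), lengths(mydata)),
  content = unlist(mydata, use.names = FALSE),
  stringsAsFactors = FALSE
)
```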
Source: https://stackoverflow.com/questions/45460261/extracting-tr-values-from-multiple-html-files