How to generically crawl different websites using Python?

Submitted by 余生颓废 on 2019-12-13 08:27:14

Question


I want to extract comments from Dawn.com as well as from Tribune.com from any article.

The way I'm extracting comments is to target the class `<div class="comment__body cf">` on Dawn.com and `class="content"` on Tribune.com.

How can I do this generically? There is no shared pattern across these websites, so the extraction can't be done with a single class name.

Shall I write separate code for each website?


Answer 1:


It is not easy to write an algorithm that generically grabs the desired content from any website, because, as you've mentioned, there is no common pattern. One site may wrap its comments in an element with a class name like comments or site_comments, while another puts them somewhere else entirely under a different class name, and so on. So you need to figure out, for each site, which class names (or other selectors) identify the content you want to scrape.

Nevertheless, in your case, if you don't want to write completely separate code for each site, you can use BeautifulSoup's support for regular-expression matching.

For example you can do something like this:

from bs4 import BeautifulSoup
import requests
import re

site_urls = [first_site, second_site]
for site in site_urls:
    # this is just an example; in real life situations
    # you should do some error checking
    response = requests.get(site)
    # pass the response body (a string), not the Response object itself
    soup = BeautifulSoup(response.text, 'html5lib')
    # this is the list of html tags whose class matches "comment" or
    # "content" — the current site's comments; do whatever you want with them
    comments = soup.find_all(class_=re.compile("comment|content"))

BeautifulSoup has very good documentation, including the `find_all` and regular-expression matching used above. You should check it out.
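One caveat with the regex approach is over-matching: class names containing "content" are common, so it can pick up unrelated elements. An alternative sketch, shown below under the assumption that the two selectors from the question are correct, keeps one shared extraction loop but maps each site's domain to its own CSS selector (`SITE_SELECTORS` and `extract_comments` are illustrative names, not part of any library):

```python
from bs4 import BeautifulSoup

# Map each site's domain to the CSS selector matching its comment blocks.
# The selectors come from the question; add an entry per new site.
SITE_SELECTORS = {
    "dawn.com": "div.comment__body.cf",
    "tribune.com.pk": ".content",
}

def extract_comments(domain, html):
    """Return the text of every comment element for a known domain."""
    selector = SITE_SELECTORS.get(domain)
    if selector is None:
        raise ValueError("No selector configured for %s" % domain)
    soup = BeautifulSoup(html, "html.parser")
    return [el.get_text(strip=True) for el in soup.select(selector)]

sample = '<div class="comment__body cf">Nice article!</div>'
print(extract_comments("dawn.com", sample))  # ['Nice article!']
```

This way, adding a third site means adding one dictionary entry rather than another regex alternation, and an unknown site fails loudly instead of silently matching nothing.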



Source: https://stackoverflow.com/questions/38301865/how-to-generically-crawl-different-websites-using-python
