Using Scrapy LinkExtractor() to locate specific domain extensions

Submitted by 和自甴很熟 on 2020-06-29 11:54:51

Question


I want to use Scrapy's LinkExtractor() to follow only links in the .th domain.

I see there is a deny_extensions(list) parameter, but no allow_extensions() parameter.

Given that, how do I restrict extracted links to domains in .th?


Answer 1:


deny_extensions is for filtering out URLs that end with file extensions such as .gz or .exe.
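
For instance, a minimal sketch (the extra 'apk' extension is just an illustrative addition):

from scrapy.linkextractors import LinkExtractor, IGNORED_EXTENSIONS

# deny_extensions replaces the default ignored-extension list,
# so extend IGNORED_EXTENSIONS rather than passing a short list
le = LinkExtractor(deny_extensions=IGNORED_EXTENSIONS + ['apk'])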

You are probably looking for allow_domains:

allow_domains (str or list) – a single value or a list of strings containing domains which will be considered for extracting the links

deny_domains (str or list) – a single value or a list of strings containing domains which won’t be considered for extracting the links
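
Note that allow_domains takes concrete domain names rather than wildcard patterns, so it fits best when you already know which .th sites to crawl. A minimal sketch (the spider name, domain, and start URL are placeholders):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ThaiSiteSpider(CrawlSpider):
    name = 'thai_site'  # placeholder name
    start_urls = ['http://www.example.co.th/']  # placeholder start page

    rules = (
        # follow only links on example.co.th (subdomains included)
        Rule(LinkExtractor(allow_domains=['example.co.th']),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        yield {'url': response.url}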


Edit:

Another option mentioned in my comments is to use a custom LinkExtractor. Below is an example of such a link extractor which does the same thing as the standard link extractor, but additionally filters out links where the domain name does not match a Unix filename pattern (it uses the fnmatch module for this):

from six.moves.urllib.parse import urlparse
import fnmatch
import re

from scrapy.linkextractors import LinkExtractor

class DomainPatternLinkExtractor(LinkExtractor):

    def __init__(self, domain_pattern, *args, **kwargs):
        super(DomainPatternLinkExtractor, self).__init__(*args, **kwargs)
        
        # take a Unix file pattern string and translate
        # it to a regular expression to match domains against
        regex = fnmatch.translate(domain_pattern)
        self.reobj = re.compile(regex)

    def extract_links(self, response):
        # keep only links whose host name (netloc) matches the domain pattern
        return list(
            filter(
                lambda link: self.reobj.search(urlparse(link.url).netloc),
                super(DomainPatternLinkExtractor, self).extract_links(response)
            )
        )
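
To see what the pattern translation does, here is a quick check in a plain Python shell (the exact regex string returned by fnmatch.translate varies between Python versions):

>>> import fnmatch, re
>>> reobj = re.compile(fnmatch.translate('*.th'))
>>> bool(reobj.search('www.example.co.th'))  # hypothetical .th host
True
>>> bool(reobj.search('www.example.com'))    # non-.th host
False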

In your case, you could use it like this: DomainPatternLinkExtractor('*.th').
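
As a sketch, assuming the DomainPatternLinkExtractor class above is defined or imported in your spider module, wiring it into a CrawlSpider rule could look like this (spider name and start URL are again placeholders):

from scrapy.spiders import CrawlSpider, Rule

class ThaiTldSpider(CrawlSpider):
    name = 'thai_tld'  # placeholder name
    start_urls = ['http://www.example.co.th/']  # placeholder start page

    rules = (
        # the custom extractor keeps only links on *.th hosts
        Rule(DomainPatternLinkExtractor('*.th'),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        yield {'url': response.url}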

Sample scrapy shell session using this link extractor:

$ scrapy shell http://www.dmoz.org/News/Weather/
2016-11-21 17:14:51 [scrapy] INFO: Scrapy 1.2.1 started (bot: issue2401)
(...)
2016-11-21 17:14:52 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/News/Weather/> (referer: None)

>>> from six.moves.urllib.parse import urlparse
>>> import fnmatch
>>> import re
>>> 
>>> from scrapy.linkextractors import LinkExtractor
>>> 
>>> 
>>> class DomainPatternLinkExtractor(LinkExtractor):
... 
...     def __init__(self, domain_pattern, *args, **kwargs):
...         super(DomainPatternLinkExtractor, self).__init__(*args, **kwargs)
...         regex = fnmatch.translate(domain_pattern)
...         self.reobj = re.compile(regex)
...     def extract_links(self, response):
...         return list(
...             filter(
...                 lambda link: self.reobj.search(urlparse(link.url).netloc),
...                 super(DomainPatternLinkExtractor, self).extract_links(response)
...             )
...         )
... 
>>> from pprint import pprint


>>> pprint([l.url for l in DomainPatternLinkExtractor('*.co.uk').extract_links(response)])
['http://news.bbc.co.uk/weather/',
 'http://freemeteo.co.uk/',
 'http://www.weatheronline.co.uk/']


>>> pprint([l.url for l in DomainPatternLinkExtractor('*.gov*').extract_links(response)])
['http://www.metoffice.gov.uk/', 'http://www.weather.gov/']


>>> pprint([l.url for l in DomainPatternLinkExtractor('*.name').extract_links(response)])
['http://www.accuweather.name/']


Source: https://stackoverflow.com/questions/40701227/using-scrapy-linkextractor-to-locate-specific-domain-extensions
