Question
I followed the instructions from this page: http://docs.scrapy.org/en/latest/intro/tutorial.html
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('span small::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }

        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
The above example works for their pages, which use this pager markup:
<ul class="pager">
    <li class="next">
        <a href="/page/2/">Next <span aria-hidden="true">→</span></a>
    </li>
</ul>
I now want to change the response.follow logic to crawl a site whose pages contain links in this format.
Page 1:

<div class="pages-list">
    <ul class="page">
        <li class="page-current">1</li>
        <li class="page-item"><a title="Page 2" href="/url2">2</a></li>
        <li class="page-item"><a title="Page 3" href="/url3">3</a></li>
Page 2, and so on:

<div class="pages-list">
    <ul class="page">
        <li class="page-item"><a title="Page 1" href="/url1">1</a></li>
        <li class="page-current">2</li>
        <li class="page-item"><a title="Page 3" href="/url3">3</a></li>
I tried different variations to get the next page, starting from the first page. I cannot see anything wrong, but my code only scrapes the first page and then stops:

next_page = response.css('li.page-current a::attr(href)').get()

or

next_page = response.css('li.page-current li a::attr(href)').get()

Neither works; please advise. After page 1, I want to crawl page 2, then page 3, and so on.
Answer 1:
Pretty easy with XPath:
next_page = response.xpath('//li[@class="page-current"]/following-sibling::li[1]/a/@href').get()
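The question's CSS selectors return None because the "page-current" <li> contains only text and no <a> element; this XPath instead steps to the <li> immediately after it and takes that sibling's href. A minimal sketch of how the parse callback could use it (the item-extraction part is a placeholder, since the question does not show the target site's item markup):

def parse(self, response):
    # ... extract this page's items here ...

    # The current page's <li> holds plain text, so select the first
    # following sibling <li> and follow its link, if present.
    next_page = response.xpath(
        '//li[@class="page-current"]/following-sibling::li[1]/a/@href'
    ).get()
    if next_page is not None:
        yield response.follow(next_page, callback=self.parse)

On the last page there is no following sibling, the selector returns None, and the crawl stops cleanly.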
Answer 2:
Try:

relative_url = response.xpath('//li[@class="next"]/a/@href').get()

In scrapy shell, that gives '/page/2/'.
Also, you can use urljoin to join the relative URL with http://quotes.toscrape.com if need be, as follows:
from urllib.parse import urljoin
domain = 'http://quotes.toscrape.com'
url = urljoin(domain, relative_url)
Then use the url variable like so:
yield response.follow(url, callback=self.parse)
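Note that response.follow resolves relative URLs against response.url on its own, so the urljoin step is only needed when you want the absolute URL for something else, such as storing it in an item or building a plain scrapy.Request, which does not accept relative URLs. A quick check of what urljoin produces here:

from urllib.parse import urljoin

# A root-relative path is resolved against the domain:
print(urljoin('http://quotes.toscrape.com', '/page/2/'))
# -> http://quotes.toscrape.com/page/2/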
Source: https://stackoverflow.com/questions/57741674/scrapy-response-follow-query