Question
The code for my Scrapy spider is:
import scrapy

class DummymartSpider(scrapy.Spider):
    name = 'dummymart'
    allowed_domains = ['www.dummymart.com/product']
    start_urls = ['https://www.dummymart.net/product/auto-parts--118']

    def parse(self, response):
        Company = response.xpath('//*[@class="word-wrap item-title"]/text()').extract()
        for item in zip(Company):
            scraped_info = {
                'Company': item[0],
            }
            yield scraped_info

        next_page_url = response.css('li >a::attr(href)').extract_first()
        #next_page_url = response.urljoin(next_page_url)
        if next_page_url:
            yield scrapy.Request(url=next_page_url, callback=self.parse)
The paginated link has following html syntax:
<ul class="pagination">
<li class="active"><a href="#">1 <span class="sr-only">(current)</span></a></li>
<li><a href="https://www.dummy.net/product/auto-parts--118?page=2">2</a></li>
<li><a href="https://www.dummy.net/product/auto-parts--118?page=3">3</a></li>
<li><a href="https://www.dummy.net/product/auto-parts--118?page=4">4</a></li>
<li><a href="https://www.dummy.net/product/auto-parts--118?page=5">5</a></li>
<li><a href="https://www.dummy.net/product/auto-parts--118?page=6">6</a></li>
<li><a href="https://www.dummy.net/product/auto-parts--118?page=7">7</a></li>
<li><a href="https://www.dummy.net/product/auto-parts--118?page=8">8</a></li>
<li><a href="https://www.dummy.net/product/auto-parts--118?page=9">9</a></li>
<li><a href="https://www.dummy.net/product/auto-parts--118?page=10">10</a></li>
<li class="disabled"><span>...</span></li>
<li><a href="https://www.dummy.net/product/auto-parts--118?page=2" aria-label="Next"><span aria-hidden="true">»</span></a></li>
</ul>
The problem is that it only scrapes the first paginated link and not the others. How do I scrape through the remaining paginated links too? Thanks.
The pagination HTML when the 2nd page is active:
<ul class="pagination">
<li><a href="https://www.dummy.net/products/new?page=1" aria-label="Prev"><span aria-hidden="true">«</span></a></li>
<li><a href="https://www.dummy.net/products/new?page=1">1</a></li>
<li class="active"><a href="#">2 <span class="sr-only">(current)</span></a></li>
<li><a href="https://www.dummy.net/products/new?page=3">3</a></li>
<li><a href="https://www.dummy.net/products/new?page=4">4</a></li>
<li><a href="https://www.dummy.net/products/new?page=5">5</a></li>
<li><a href="https://www.dummy.net/products/new?page=6">6</a></li>
<li><a href="https://www.dummy.net/products/new?page=7">7</a></li>
<li><a href="https://www.dummy.net/products/new?page=8">8</a></li>
<li><a href="https://www.dummy.net/products/new?page=9">9</a></li>
<li><a href="https://www.dummy.net/products/new?page=10">10</a></li>
<li class="disabled"><span>...</span></li>
<li><a href="https://www.dummy.net/products/new?page=3" aria-label="Next"><span aria-hidden="true">»</span></a></li>
</ul>
Answer 1:
You can try this method (I'm looking for the link that follows the current page):
next_page_url = response.xpath('//li[ ./a[@class="curr"] ]/following-sibling::li[1]/a/@href').extract_first()
#next_page_url = response.urljoin(next_page_url)
if next_page_url:
    yield scrapy.Request(url=next_page_url, callback=self.parse)
UPDATE: According to your new HTML, you need this code instead:
next_page_url = response.xpath('//li/a[@aria-label="Next"]/@href').extract_first()
if next_page_url:
    yield scrapy.Request(url=next_page_url, callback=self.parse)
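To see why selecting on aria-label="Next" works regardless of which page is active, here is a standalone sketch that applies an equivalent XPath to a trimmed copy of the pagination HTML from the question. It uses Python's standard-library xml.etree (not Scrapy's lxml-based selectors, so this is an illustration of the expression, not the exact Scrapy call), and it assumes the markup is well-formed XML as shown:

```python
import xml.etree.ElementTree as ET

# Trimmed pagination HTML from the question (2nd page active).
html = """
<ul class="pagination">
  <li><a href="https://www.dummy.net/products/new?page=1" aria-label="Prev"><span aria-hidden="true">&#171;</span></a></li>
  <li><a href="https://www.dummy.net/products/new?page=1">1</a></li>
  <li class="active"><a href="#">2 <span class="sr-only">(current)</span></a></li>
  <li><a href="https://www.dummy.net/products/new?page=3">3</a></li>
  <li><a href="https://www.dummy.net/products/new?page=3" aria-label="Next"><span aria-hidden="true">&#187;</span></a></li>
</ul>
"""

root = ET.fromstring(html)
# Same idea as the answer's selector: pick the <a> carrying aria-label="Next",
# which always points one page past the currently active one.
next_link = root.find('.//li/a[@aria-label="Next"]')
next_page_url = next_link.get('href') if next_link is not None else None
print(next_page_url)  # https://www.dummy.net/products/new?page=3
```

Because the "Next" anchor is present on every page except the last, the spider keeps following it until the crawl naturally stops, which is exactly the behavior the question is after.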
Source: https://stackoverflow.com/questions/51825336/how-to-scrap-paginated-links-in-scrapy