Pagination using Scrapy


Question


I'm trying to crawl this website: http://www.aido.com/eshop/cl_2-c_189-p_185/stationery/pens.html

I can get all the products on this page, but how do I issue the request for the "View More" link at the bottom of the page?

My code so far is:

rules = (
    # follow category / navigation links
    Rule(SgmlLinkExtractor(restrict_xpaths='//li[@class="normalLeft"]/div/a', unique=True)),
    Rule(SgmlLinkExtractor(restrict_xpaths='//div[@id="topParentChilds"]/div/div[@class="clm2"]/a', unique=True)),
    Rule(SgmlLinkExtractor(restrict_xpaths='//p[@class="proHead"]/a', unique=True)),
    # product listing pages -> parse_item
    Rule(SgmlLinkExtractor(allow=('http://[^/]+/[^/]+/[^/]+/[^/]+$',),
                           deny=('/about-us/about-us/contact-us', './music.html'),
                           unique=True),
         callback='parse_item'),
)

Any help?


Answer 1:


First of all, take a look at this thread on how to scrape AJAX/dynamically loaded content: Can scrapy be used to scrape dynamic content from websites that are using AJAX?

Clicking the "View More" button fires an XHR request:

http://www.aido.com/eshop/faces/tiles/category.jsp?q=&categoryID=189&catalogueID=2&parentCategoryID=185&viewType=grid&bnm=&atmSize=&format=&gender=&ageRange=&actor=&director=&author=&region=&compProductType=&compOperatingSystem=&compScreenSize=&compCpuSpeed=&compRam=&compGraphicProcessor=&compDedicatedGraphicMemory=&mobProductType=&mobOperatingSystem=&mobCameraMegapixels=&mobScreenSize=&mobProcessor=&mobRam=&mobInternalStorage=&elecProductType=&elecFeature=&elecPlaybackFormat=&elecOutput=&elecPlatform=&elecMegaPixels=&elecOpticalZoom=&elecCapacity=&elecDisplaySize=&narrowage=&color=&prc=&k1=&k2=&k3=&k4=&k5=&k6=&k7=&k8=&k9=&k10=&k11=&k12=&startPrize=&endPrize=&newArrival=&entityType=&entityId=&brandId=&brandCmsFlag=&boutiqueID=&nmt=&disc=&rat=&cts=empty&isBoutiqueSoldOut=undefined&sort=12&isAjax=true&hstart=24&targetDIV=searchResultDisplay

which returns the text/html for the next 24 items. Note the hstart=24 parameter: the first time you click "View More" it is 24, the second time 48, and so on. This parameter is your lifesaver.

Now, simulate these requests in your spider. The recommended way is to instantiate Scrapy's Request object, providing a callback in which you extract the data.
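
For illustration, here is a minimal sketch of that approach (not the asker's spider): it assumes 24 products per AJAX response based on the observed hstart values, trims the XHR URL above down to the parameters that matter here, and uses a placeholder CSS selector for the product boxes that you would have to adapt to the actual markup.

import scrapy

class AidoPensSpider(scrapy.Spider):
    name = 'aido_pens'
    start_urls = ['http://www.aido.com/eshop/cl_2-c_189-p_185/stationery/pens.html']

    # The XHR URL from above, trimmed to the parameters that matter;
    # the empty filter parameters are omitted for readability.
    ajax_url = ('http://www.aido.com/eshop/faces/tiles/category.jsp'
                '?categoryID=189&catalogueID=2&parentCategoryID=185'
                '&viewType=grid&sort=12&isAjax=true'
                '&hstart={hstart}&targetDIV=searchResultDisplay')

    def parse(self, response):
        # scrape the products rendered on the normal page
        for item in self.parse_products(response):
            yield item
        # the first "View More" click corresponds to hstart=24
        yield scrapy.Request(self.ajax_url.format(hstart=24),
                             callback=self.parse_more,
                             meta={'hstart': 24})

    def parse_more(self, response):
        products = list(self.parse_products(response))
        if not products:
            return  # an empty chunk means there is nothing more to load
        for item in products:
            yield item
        # each request advances the offset by 24 items (assumed from 24, 48, ...)
        hstart = response.meta['hstart'] + 24
        yield scrapy.Request(self.ajax_url.format(hstart=hstart),
                             callback=self.parse_more,
                             meta={'hstart': hstart})

    def parse_products(self, response):
        # placeholder selector -- replace with the site's real product markup
        for product in response.css('div.productBox'):
            yield {'name': product.css('a::text').extract_first()}

With a CrawlSpider like the one in the question, the same idea applies: from parse_item, yield an additional Request to the AJAX URL with an incremented hstart.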

Hope that helps.



Source: https://stackoverflow.com/questions/16129071/pagination-using-scrapy
