Scrapy - how to manage cookies/sessions


Three years later, I think this is exactly what you were looking for: http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#std:reqmeta-cookiejar

Just use something like this in your spider's start_requests method:

for i, url in enumerate(urls):
    yield scrapy.Request(url, meta={'cookiejar': i},
        callback=self.parse_page)

Keep in mind that the cookiejar meta key is not "sticky": you need to explicitly reattach it for each subsequent request:

def parse_page(self, response):
    # do some processing
    return scrapy.Request("http://www.example.com/otherpage",
        meta={'cookiejar': response.meta['cookiejar']},
        callback=self.parse_other_page)

Another option is to disable Scrapy's automatic cookie merging with dont_merge_cookies and manage a CookieJar yourself (this example uses the old BaseSpider / HtmlXPathSelector API):

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.http.cookies import CookieJar
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
...

class Spider(BaseSpider):
    def parse(self, response):
        '''Parse category page, extract subcategories links.'''

        hxs = HtmlXPathSelector(response)
        subcategories = hxs.select(".../@href").extract()
        for subcategorySearchLink in subcategories:
            subcategorySearchLink = urlparse.urljoin(response.url, subcategorySearchLink)
            self.log('Found subcategory link: ' + subcategorySearchLink, log.DEBUG)
            # Use dont_merge_cookies to force the site to generate a new PHPSESSID cookie.
            # This is needed because the site uses sessions to remember the search parameters.
            yield Request(subcategorySearchLink, callback=self.extractItemLinks,
                          meta={'dont_merge_cookies': True})

    def extractItemLinks(self, response):
        '''Extract item links from subcategory page and go to next page.'''
        hxs = HtmlXPathSelector(response)
        for itemLink in hxs.select(".../a/@href").extract():
            itemLink = urlparse.urljoin(response.url, itemLink)
            print 'Requesting item page %s' % itemLink
            yield Request(...)

        # self.getFirst() is a helper (not shown) that returns the first match of an XPath expression
        nextPageLink = self.getFirst(".../@href", hxs)
        if nextPageLink:
            nextPageLink = urlparse.urljoin(response.url, nextPageLink)
            self.log('\nGoing to next search page: ' + nextPageLink + '\n', log.DEBUG)
            cookieJar = response.meta.setdefault('cookie_jar', CookieJar())
            cookieJar.extract_cookies(response, response.request)
            request = Request(nextPageLink, callback=self.extractItemLinks,
                              meta={'dont_merge_cookies': True, 'cookie_jar': cookieJar})
            cookieJar.add_cookie_header(request)  # manually set the Cookie header, since the middleware is bypassed
            yield request
        else:
            self.log('Whole subcategory scraped.', log.DEBUG)

I think the simplest approach would be to run multiple instances of the same spider, passing the search query as a spider argument (received in the constructor), so that each instance reuses Scrapy's built-in cookie management for its own session. You'll have multiple spider instances, each one crawling one specific search query and its results (see the sketch after the commands below), but you need to run the spiders yourself with:

scrapy crawl myspider -a search_query=something

Or you can use Scrapyd for running all the spiders through the JSON API.
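
A minimal sketch of that approach (the SearchSpider class name, the example domain, and the search URL pattern are assumptions for illustration, not taken from the question):

import scrapy


class SearchSpider(scrapy.Spider):
    name = 'myspider'

    def __init__(self, search_query=None, *args, **kwargs):
        super(SearchSpider, self).__init__(*args, **kwargs)
        # the value passed with -a search_query=something ends up here
        self.search_query = search_query

    def start_requests(self):
        # hypothetical search URL; adjust to the real site
        url = 'http://www.example.com/search?q=%s' % self.search_query
        yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        # each crawl process keeps its own cookie middleware, so the session
        # (e.g. PHPSESSID) created for this search query stays isolated
        pass

With Scrapyd you would pass the same spider argument when scheduling a run (assuming a project deployed as myproject):

curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider -d search_query=something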

You can also set cookies explicitly on individual requests via the cookies argument of Request:

def parse(self, response):
    # do something
    yield scrapy.Request(
        url="http://new-page-to-parse.com/page/4/",
        cookies={
            'h0': 'blah',
            'taeyeon': 'pretty'
        },
        callback=self.parse
    )