How to deal with _escaped_fragment_ using Scrapy

Submitted by 不想你离开 on 2021-02-08 07:35:23

Question


Recently I used Scrapy to scrape ZoomInfo, and I tested the URL below:

http://subscriber.zoominfo.com/zoominfo/#!search/profile/person?personId=521850874&targetid=profile

But somehow, in the terminal it is changed to this:

[scrapy] DEBUG: Crawled (200) <GET http://subscriber.zoominfo.com/zoominfo/?_escaped_fragment_=search%2Fprofile%2Fperson%3FpersonId%3D521850874%26targetid%3Dprofile>

I have added AJAXCRAWL_ENABLED = True in settings.py, but the URL still has _escaped_fragment_ in it. I suspect that I haven't reached the page I actually want.
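For completeness, the relevant line in my settings.py is just:

# settings.py
AJAXCRAWL_ENABLED = True  # enables Scrapy's AjaxCrawlMiddleware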

The spider.py code is below:

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import scrapy
from scrapy.selector import Selector
from scrapy.http import Request, FormRequest
from tutorial.items import TutorialItem
from scrapy.spiders.init import InitSpider


class LoginSpider(InitSpider):
    name = 'zoominfo'
    login_page = 'https://www.zoominfo.com/login'
    start_urls = [
    'http://subscriber.zoominfo.com/zoominfo/#!search/profile/person?personId=521850874&targetid=profile',
    ]
    headers = {
        "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Encoding":"gzip, deflate, br",
        "Accept-Language":"en-US,en;q=0.5",
        "Connectionc":"keep-alive",
        "User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:50.0) Gecko/20100101 Firefox/50.0",
    }   
    def init_request(self):
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        print "Preparing Login"
        return FormRequest.from_response(
            response,
            headers=self.headers,
            formdata={
            'task':'save',
            'redirect':'http://subscriber.zoominfo.com/zoominfo/#!search/profile/person?personId=521850874&targetid=profile',
            # username and password are not defined in this snippet (credentials omitted)
            'username': username,
            'password': password
        },
            callback=self.after_login,
            dont_filter = True,
        )

    def after_login(self, response):
        if "authentication failed" in response.body:
            self.log("Login unsuccessful")
        else:
            self.log(":Login Successfully")
            self.initialized()
            return Request(url='http://subscriber.zoominfo.com/zoominfo/', callback=self.parse)

    def parse(self, response):
        base_url = 'http://subscriber.zoominfo.com/zoominfo/#!search/profile/person?personId=521850874&targetid=profile'
        sel = Selector(response)
        item = TutorialItem()
        divs = sel.xpath("//div[3]//p").extract()
        item['title'] = sel.xpath("//div[3]").extract()
        print divs
        yield item
        # re-request the profile page (note: without dont_filter=True this request
        # will be dropped by the duplicate filter, since the page was already crawled)
        yield Request(base_url, callback=self.parse)

Thanks to anyone who could give me a hint.


Answer 1:


#! == _escaped_fragment_

_escaped_fragment_ is called the "ugly" URL and is mostly what gets presented to web crawlers, while real users get the pretty #! version. Either way, they both mean the same thing, and there shouldn't be any functional difference.

See Google's AJAX crawling specification for details on this subject.
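If you want to see the mapping concretely, here is a minimal sketch. It assumes w3lib is available (it is wherever Scrapy is installed), and, as far as I can tell, escape_ajax() is the same helper Scrapy runs every request URL through, which is why your DEBUG line shows the escaped form:

# The "pretty" #! URL and the "ugly" _escaped_fragment_ URL name the same resource.
from w3lib.url import escape_ajax  # applies the #! -> _escaped_fragment_ rule
try:
    from urllib.parse import urlsplit, parse_qs  # Python 3
except ImportError:
    from urlparse import urlsplit, parse_qs      # Python 2

pretty = ('http://subscriber.zoominfo.com/zoominfo/'
          '#!search/profile/person?personId=521850874&targetid=profile')

# pretty -> ugly: everything after "#!" is moved into a percent-encoded query parameter
ugly = escape_ajax(pretty)
print(ugly)
# http://subscriber.zoominfo.com/zoominfo/?_escaped_fragment_=search%2Fprofile%2Fperson%3FpersonId%3D521850874%26targetid%3Dprofile

# ugly -> pretty: decoding _escaped_fragment_ recovers the original fragment
fragment = parse_qs(urlsplit(ugly).query)['_escaped_fragment_'][0]
print(fragment)
# search/profile/person?personId=521850874&targetid=profile

So the request in your DEBUG output is asking the server for exactly the profile page you meant; the server just sees it spelled the ugly way.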



Source: https://stackoverflow.com/questions/41651100/how-to-deal-with-escaped-fragment-using-scrapy
