How to get the pipeline object in Scrapy spider

Submitted by 情到浓时终转凉″ on 2019-12-19 08:14:20

Question


I am using MongoDB to store the data from my crawl.

Now I want to query the last date of the stored data, so that I can continue the crawl where it left off instead of restarting from the beginning of the URL list (the URLs are determined by date, e.g. /2014-03-22.html).

I want a single connection object to handle all database operations, and that object lives in the pipeline.

So I want to know how I can get that existing pipeline object (not a new one) from inside the spider.

Or is there any better solution for incremental updates?

Thanks in advance.

Sorry for my poor English... Here is a sample:

# This is my Pipeline
import pymongo

from scrapy.conf import settings  # pre-1.0 Scrapy settings singleton, as used here

class MongoDBPipeline(object):
    def __init__(self, mongodb_db=None, mongodb_collection=None):
        self.connection = pymongo.Connection(settings['MONGODB_SERVER'], settings['MONGODB_PORT'])  # old pymongo (<3.0) API
        ....
    def process_item(self, item, spider):
        ....
    def get_date(self):
        ....

And the spider:

from scrapy import Spider

class TestSpider(Spider):
    name = "test"
    ....

    def parse(self, response):
        # Here I want to get hold of the pipeline object.
        mongo = MongoDBPipeline()  # Done this way, a brand-new pipeline object is created,
        mongo.get_date()           # but Scrapy already has a pipeline object for this spider;
                                   # I want that one, the one created when Scrapy started.

OK, I just don't want to create a new object... I admit I'm a bit obsessive about this.


回答1:


A Scrapy pipeline has an open_spider method that is executed after the spider is initialized. You can pass a reference to the database connection, the get_date() method, or the pipeline itself to your spider. An example of the latter, using your code:

# This is my Pipeline
import pymongo

from scrapy.conf import settings  # pre-1.0 Scrapy settings singleton, as in the question

class MongoDBPipeline(object):
    def __init__(self, mongodb_db=None, mongodb_collection=None):
        self.connection = pymongo.Connection(settings['MONGODB_SERVER'], settings['MONGODB_PORT'])  # old pymongo (<3.0) API
        ....

    def process_item(self, item, spider):
        ....
    def get_date(self):
        ....

    def open_spider(self, spider):
        # Called once the spider is opened: give the spider a reference
        # to this very pipeline instance.
        spider.myPipeline = self

Then, in the spider:

from scrapy import Spider

class TestSpider(Spider):
    name = "test"

    def __init__(self, *args, **kwargs):
        super(TestSpider, self).__init__(*args, **kwargs)
        self.myPipeline = None  # set for real by the pipeline's open_spider

    def parse(self, response):
        self.myPipeline.get_date()

I don't think the __init__() method is strictly necessary here; I included it only to show that open_spider sets the attribute after the spider is initialized.
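
For reference, here is a self-contained sketch of the same pattern on a more recent stack: pymongo's MongoClient instead of the long-removed Connection, and from_crawler instead of the global settings import. The database/collection names and the 'date' field are placeholders, not part of the original question:

import pymongo


class MongoDBPipeline(object):
    def __init__(self, mongodb_server, mongodb_port):
        # One client per crawl; the spider gets a reference via open_spider.
        self.client = pymongo.MongoClient(mongodb_server, mongodb_port)
        self.collection = self.client['mydb']['items']  # placeholder names

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this to build the pipeline from the project settings.
        return cls(
            crawler.settings.get('MONGODB_SERVER', 'localhost'),
            crawler.settings.getint('MONGODB_PORT', 27017),
        )

    def open_spider(self, spider):
        # Hand the spider a reference to this very pipeline instance.
        spider.myPipeline = self

    def get_date(self):
        # Return the newest stored date, or None if the collection is empty.
        doc = self.collection.find_one(sort=[('date', pymongo.DESCENDING)])
        return doc['date'] if doc else None

    def process_item(self, item, spider):
        self.collection.insert_one(dict(item))
        return item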




回答2:


According to the Scrapy Architecture Overview:

The Item Pipeline is responsible for processing the items once they have been extracted (or scraped) by the spiders.

Basically, that means that the spiders run first, and the extracted items are then sent to the pipelines - there is no way to go backwards.

One possible solution would be to check, in the pipeline itself, whether the item you've scraped is already in the database.
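
A minimal sketch of that check, assuming pymongo's MongoClient, placeholder database/collection names, and a 'url' field on the item:

import pymongo

from scrapy.exceptions import DropItem


class DedupPipeline(object):
    def __init__(self):
        # Connection details hard-coded for brevity; 'mydb'/'items' are placeholders.
        self.collection = pymongo.MongoClient('localhost', 27017)['mydb']['items']

    def process_item(self, item, spider):
        # Skip items whose URL is already stored; 'url' is an assumed unique key.
        if self.collection.find_one({'url': item['url']}):
            raise DropItem('Already stored: %s' % item['url'])
        self.collection.insert_one(dict(item))
        return item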

Another workaround would be to keep the list of URLs you've crawled in the database and, in the spider, check whether you've already fetched a given URL before requesting it; see the sketch below.
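
A sketch of that second idea, resuming from the day after the last stored date; the spider name, base URL, and the 'date' field (an ISO 'YYYY-MM-DD' string) are assumptions, while the /YYYY-MM-DD.html pattern comes from the question:

import datetime

import pymongo
from scrapy import Request, Spider


class IncrementalSpider(Spider):
    name = 'incremental'             # hypothetical spider
    base_url = 'http://example.com'  # placeholder site

    def start_requests(self):
        # Ask MongoDB for the newest stored date; ISO strings sort chronologically.
        client = pymongo.MongoClient('localhost', 27017)
        doc = client['mydb']['items'].find_one(sort=[('date', pymongo.DESCENDING)])
        if doc:
            day = datetime.datetime.strptime(doc['date'], '%Y-%m-%d').date()
            day += datetime.timedelta(days=1)  # continue after the last crawled day
        else:
            day = datetime.date(2014, 1, 1)    # first run: arbitrary start date
        while day <= datetime.date.today():
            yield Request('%s/%s.html' % (self.base_url, day.isoformat()))
            day += datetime.timedelta(days=1)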

Since I'm not sure what you mean by "start from the beginning", I can't suggest anything specific.

I hope this information helps, at least.



Source: https://stackoverflow.com/questions/23105590/how-to-get-the-pipeline-object-in-scrapy-spider
