Pass Scrapy Spider a list of URLs to crawl via .txt file


Run your spider with the -a option, like:

scrapy crawl myspider -a filename=text.txt

Then read the file in the __init__ method of the spider and define start_urls:

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def __init__(self, filename=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if filename:
            with open(filename) as f:
                # strip the trailing newline from each line
                self.start_urls = [url.strip() for url in f]
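For reference, the file passed via -a is just plain text with one URL per line; something like this (hypothetical contents):

    http://example.com/page1
    http://example.com/page2
    http://example.com/page3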

Hope that helps.

You could simply read in the .txt file:

with open('your_file.txt') as f:
    start_urls = f.readlines()

If you end up with trailing newline characters, try:

with open('your_file.txt') as f:
    start_urls = [url.strip() for url in f.readlines()]
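If the file might also contain blank lines (easy to end up with at the end of a file), a slightly more defensive variant skips them as well:

    with open('your_file.txt') as f:
        # strip newlines and drop any lines that are empty after stripping
        start_urls = [url.strip() for url in f if url.strip()]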

Hope this helps.


If your URLs are line-separated,

def get_urls(filename):
    # read the whole file and split on whitespace, yielding one URL per line
    with open(filename) as f:
        return f.read().split()

then this function will give you the URLs.
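For example (assuming a file named file.txt, a hypothetical name), you could call it like this:

    urls = get_urls('file.txt')
    # e.g. ['http://example.com/page1', 'http://example.com/page2']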

import scrapy

class MySpider(scrapy.Spider):
    name = 'nameofspider'

    def __init__(self, filename=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if filename:
            # open the filename passed via -a, not a hardcoded path
            with open(filename) as f:
                self.start_urls = [url.strip() for url in f.readlines()]

This will be your spider. It will pick up the URLs from the .txt file as long as they are separated by lines, i.e. one URL per line: url1, url2, and so on.

After this, run the command:

scrapy crawl nameofspider -a filename=filename.txt

Let's say your filename is 'file.txt'; then run:

scrapy crawl nameofspider -a filename=file.txt
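As a side note, an alternative sketch (not from the answers above, just standard Scrapy) is to override start_requests and yield requests straight from the file, which avoids touching __init__; Scrapy sets self.filename automatically from the -a argument:

    import scrapy

    class FileUrlSpider(scrapy.Spider):
        name = 'fileurlspider'  # hypothetical spider name

        def start_requests(self):
            # self.filename comes from: scrapy crawl fileurlspider -a filename=file.txt
            with open(self.filename) as f:
                for line in f:
                    url = line.strip()
                    if url:  # skip blank lines
                        yield scrapy.Request(url, callback=self.parse)

        def parse(self, response):
            pass  # your parsing logic here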