I am trying to scrape a very simple web page with the help of Scrapy and its XPath selectors, but for some reason the selectors I have do not work in Scrapy, even though they do work when I test them in the browser.
Scrapy only performs a GET request for the URL; it is not a web browser and therefore cannot run JavaScript. Because of this, Scrapy alone is not enough to scrape dynamic web pages.
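One quick way to confirm this is to open the page in scrapy shell and run your selector against the raw response; a node that is built client-side by JavaScript simply will not be in it (the URL and XPath here are placeholders):

    scrapy shell 'http://www.random.com'
    >>> response.xpath('some_xpath')   # empty if the node only exists after JavaScript runs
    []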
In addition you will need something like Selenium, which essentially gives you an interface to several web browsers and their functionality, including the ability to run JavaScript and retrieve the client-side generated HTML.

Here is a snippet showing how one can go about doing this:
from Project.items import SomeItem
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.selector import Selector
from selenium import webdriver
import time

class RandomSpider(CrawlSpider):
    name = 'RandomSpider'
    allowed_domains = ['random.com']
    start_urls = [
        'http://www.random.com'
    ]

    rules = (
        Rule(LinkExtractor(allow=('some_regex_here',)), callback='parse_item', follow=True),
    )

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # use any browser you wish
        self.browser = webdriver.Firefox()

    def closed(self, reason):
        # called by Scrapy when the spider finishes; shut the browser down cleanly
        self.browser.quit()

    def parse_item(self, response):
        item = SomeItem()
        # load the page in a real browser so its JavaScript actually runs
        self.browser.get(response.url)
        # give the JavaScript time to finish loading
        time.sleep(3)
        # parse the dynamically generated HTML out of the rendered page
        hxs = Selector(text=self.browser.page_source)
        item['some_field'] = hxs.xpath('some_xpath').extract()
        return item
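A fixed time.sleep(3) works but is fragile: it waits too long on fast pages and not long enough on slow ones. If you know an element that the JavaScript eventually creates, Selenium's WebDriverWait lets you block until it appears instead. A minimal sketch of the same callback (the By.ID locator and the 'content' id are placeholder assumptions, not something taken from the page above):

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    def parse_item(self, response):
        item = SomeItem()
        self.browser.get(response.url)
        # wait up to 10 seconds for the JavaScript-generated element to appear,
        # instead of sleeping for a fixed interval
        WebDriverWait(self.browser, 10).until(
            EC.presence_of_element_located((By.ID, 'content'))  # placeholder locator
        )
        hxs = Selector(text=self.browser.page_source)
        item['some_field'] = hxs.xpath('some_xpath').extract()
        return item

WebDriverWait polls the condition and raises a TimeoutException if it never becomes true, which is usually a clearer failure mode than silently scraping a half-loaded page.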