
Click a Button in Scrapy


Scrapy cannot interpret JavaScript.

If you absolutely must interact with the JavaScript on the page, you should use Selenium.

If using Scrapy, the solution to the problem depends on what the button is doing.

If it's just showing content that was previously hidden, you can scrape the data without a problem. It doesn't matter that the content wouldn't appear in the browser until the click; the HTML is still there.
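For example, if the "hidden" block is already in the downloaded page source (say, behind a display: none style), a plain Scrapy spider can read it directly. This is only a sketch; the URL and the div.hidden-details selector are made up for illustration:

import scrapy

class HiddenContentSpider(scrapy.Spider):
    # Hypothetical spider: the URL and CSS class are placeholders.
    name = 'hidden_content'
    start_urls = ['https://www.example.org/page-with-hidden-div']

    def parse(self, response):
        # The div is hidden in the browser, but it is still part of the
        # downloaded HTML, so a normal selector finds it.
        for text in response.css('div.hidden-details::text').getall():
            yield {'detail': text.strip()}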

If it's fetching the content dynamically via AJAX when the button is pressed, the best thing to do is to inspect the HTTP request that goes out when you press the button, using a tool like Firebug (or your browser's developer tools). You can then request the data directly from that URL.
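For instance, suppose the network panel shows that the button fires a request to a JSON endpoint. You can then hit that endpoint straight from Scrapy. The endpoint and field names below are invented for the sake of the sketch:

import json
import scrapy

class AjaxItemsSpider(scrapy.Spider):
    # Hypothetical endpoint discovered by watching the button's AJAX request.
    name = 'ajax_items'
    start_urls = ['https://www.example.org/api/items?page=1']

    def parse(self, response):
        # The endpoint returns JSON, so there is no HTML to parse at all.
        data = json.loads(response.text)
        for item in data.get('items', []):
            yield {'title': item.get('title')}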

Do I have to use an external library like mechanize or lxml?

If you want to interpret JavaScript then yes, you need to use a different library, although neither of those two fits the bill: neither of them knows anything about JavaScript. Selenium is the way to go.
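As a rough sketch of the Selenium route (the URL and button id are placeholders, not from the original question), clicking a button and grabbing the resulting HTML looks roughly like this:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get('https://www.example.org/abc')       # placeholder URL
driver.find_element(By.ID, 'BTN_NEXT').click()  # placeholder button id
html = driver.page_source                       # HTML as rendered after the click
driver.quit()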

If you can give the URL of the page you're working on scraping I can take a look.


A Selenium-driven browser provides a very nice solution. Here is an example (pip install -U selenium):

from scrapy import Spider, Request
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

class northshoreSpider(Spider):
    name = 'xxx'
    allowed_domains = ['www.example.org']
    start_urls = ['https://www.example.org']

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Drive a real browser alongside the Scrapy spider.
        self.driver = webdriver.Firefox()

    def parse(self, response):
        self.driver.get('https://www.example.org/abc')
        while True:
            try:
                # Keep clicking "next" until the button disappears.
                next_button = self.driver.find_element(By.XPATH, '//*[@id="BTN_NEXT"]')
                url = 'http://www.example.org/abcd'
                yield Request(url, callback=self.parse2)
                next_button.click()
            except NoSuchElementException:
                break
        self.driver.close()

    def parse2(self, response):
        print('you are here!')
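A variant worth mentioning (not part of the answer above) is to hand the browser-rendered HTML back to Scrapy's selectors instead of yielding a fresh Request to a hard-coded URL. A minimal sketch, assuming you keep the Selenium driver around as in the spider above; the helper name and selector are hypothetical:

from scrapy.http import HtmlResponse

def selector_response_from_driver(driver):
    # Wrap the browser-rendered HTML in a Scrapy response object so the
    # usual .css()/.xpath() selectors work on exactly what Selenium sees.
    return HtmlResponse(
        url=driver.current_url,
        body=driver.page_source,
        encoding='utf-8',
    )

Inside parse() you could then call selector_response_from_driver(self.driver).css(...) on whatever the browser is currently showing, rather than downloading the page a second time.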


To properly and fully execute JavaScript you need a full browser engine, and that is only possible with tools like Watir, WatiN, or Selenium.