Make scrapyd overwrite files


You can create your own feed storage. Extend Scrapy's FileFeedStorage and override its open method to return a file opened in write mode (Scrapy's FileFeedStorage returns a file in append mode).

import os

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.extensions.feedexport import FileFeedStorage


class QuotesSpider(CrawlSpider):
    name = 'toscrape.com'
    start_urls = ['http://quotes.toscrape.com/']
    rules = (
        Rule(LinkExtractor(('quotes.toscrape.com/page/',)), callback='parsePage', follow=True),
    )
    custom_settings = {
        # Register the custom storage backend for file:// URIs.
        'FEED_STORAGES': {
            'file': 'myspider.MyFileFeedStorage'
        },
        'FEED_URI': 'file:///my/valid/file/path/out.json'
    }

    def parsePage(self, response):
        return ({
            'quote': quote.xpath('.//span[@class="text"]/text()').extract_first(),
            'author': quote.xpath('.//small[@class="author"]/text()').extract_first(),
        } for quote in response.xpath('//div[@class="quote"]'))


class MyFileFeedStorage(FileFeedStorage):

    def open(self, spider):
        dirname = os.path.dirname(self.path)
        if dirname and not os.path.exists(dirname):
            os.makedirs(dirname)
        # 'wb' instead of the default 'ab': truncate the file on every run.
        return open(self.path, 'wb')

If you run scrapy runspider myspider.py multiple times (assuming your script is named myspider.py), you will see that the output file is recreated on each run.
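If you schedule the spider through Scrapyd instead of runspider, the same idea applies, but the storage class has to be importable from the deployed project. A minimal sketch, assuming you move MyFileFeedStorage into a (hypothetical) myproject/storages.py module and reference it from the project settings:

# myproject/settings.py -- 'myproject' and the storages module are placeholder names
FEED_STORAGES = {
    'file': 'myproject.storages.MyFileFeedStorage',
}
FEED_URI = 'file:///my/valid/file/path/out.json'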


How are you scheduling the spider with Scrapyd? Via cron, or something else?

I have two ideas:

1) Empty the file manually before sending the command to Scrapyd:

echo "" > /path/to/json/my.json && curl http://localhost:6800/schedule.json

This clears the contents of my.json first and then schedules the spider.
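Note that schedule.json also needs the project and spider names as POST parameters; a fuller sketch, with myproject and myspider standing in as placeholder names for your deployed project and spider:

# myproject and myspider are placeholders -- use your own names
echo "" > /path/to/json/my.json && curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider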

2) Inside your spider, just do:

open("/path/to/json/my.json", 'w').close()
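Opening the file in 'w' mode truncates it. Here is a minimal sketch of one place this could go, the spider's __init__, so the file is emptied before the feed exporter opens it in append mode (the spider name, URL, and path are placeholders):

from scrapy.spiders import Spider

class MySpider(Spider):
    name = 'myspider'  # placeholder name
    start_urls = ['http://quotes.toscrape.com/']  # placeholder URL

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        # Truncate the output file before the crawl starts, so the
        # append-mode feed exporter effectively starts from scratch.
        open("/path/to/json/my.json", 'w').close()

    def parse(self, response):
        pass  # extract items here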