Is there a way to reduce Scrapy's memory consumption?

python-3.x

Is there a way to reduce Scrapy's memory consumption?


Don't store any intermediate data in memory, and check whether the code is going through any infinite loops.

For storing URLs, use a queuing broker such as RabbitMQ or Redis.
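A minimal sketch of that idea with Redis (assuming a local Redis server and the `redis` package; the queue name and URLs are illustrative):

```python
import redis

# Connect to a local Redis server (assumes Redis is running on localhost:6379)
r = redis.Redis(host="localhost", port=6379, db=0)

# Producer: push URLs onto a Redis list so they live in Redis, not in Python memory
r.rpush("url_queue", "https://example.com/page/1", "https://example.com/page/2")

# Consumer: pop one URL at a time; lpop returns None once the queue is empty
url = r.lpop("url_queue")
while url is not None:
    print("crawling", url.decode())
    url = r.lpop("url_queue")
```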

For the final data, store it in a database using a Python DB connection library (SQLAlchemy, mysql-connector, pyodbc, etc., depending on the DB selected).

This lets your code run distributed and efficiently (remember to use NullPool or SingletonThreadPool so you don't open too many DB connections).
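A hedged sketch of the DB side with SQLAlchemy and NullPool (the connection URL and the `items` table are placeholders; adapt them to your own database):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool

# NullPool closes every connection as soon as it is returned, so many
# distributed workers never hold idle connections open against the DB
engine = create_engine(
    "mysql+mysqlconnector://user:password@localhost/scrapydb",  # placeholder URL
    poolclass=NullPool,
)

def store_item(url, data):
    # One short-lived connection per insert; closed when the block exits
    with engine.begin() as conn:
        conn.execute(
            text("INSERT INTO items (url, data) VALUES (:url, :data)"),
            {"url": url, "data": data},
        )
```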

An easy and efficient approach is to use a SQLite DB: insert the 1 million URLs into a table with a status column set to "done" or "notyet". After crawling a URL and storing its data in another table, update that URL's status from "notyet" to "done". This keeps track of the URLs scraped so far, so if anything goes wrong you can restart the script and scrape only the URLs that are not yet done.
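A minimal sketch of that bookkeeping with Python's built-in sqlite3 module (table, column, and file names are illustrative):

```python
import sqlite3

conn = sqlite3.connect("crawl_state.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS urls (url TEXT PRIMARY KEY, status TEXT DEFAULT 'notyet')"
)

# Seed the table once with all URLs to crawl; re-running is harmless
urls_to_crawl = ["https://example.com/page/%d" % i for i in range(1, 4)]
conn.executemany(
    "INSERT OR IGNORE INTO urls (url) VALUES (?)", [(u,) for u in urls_to_crawl]
)
conn.commit()

# On (re)start, fetch only the URLs that are not yet done
pending = [row[0] for row in conn.execute("SELECT url FROM urls WHERE status = 'notyet'")]

for url in pending:
    # ... crawl the URL and store its data in another table ...
    conn.execute("UPDATE urls SET status = 'done' WHERE url = ?", (url,))
    conn.commit()
```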