
Fastest way to download 3 million objects from an S3 bucket


Okay, I figured out a solution based on @Matt Billenstien's hint. It uses the eventlet library. The first step is the most important one here: monkey patching the standard I/O libraries.

Run this script in the background with nohup and you're all set.

from eventlet import patcher, GreenPool
patcher.monkey_patch()

import logging

from boto.s3.connection import S3Connection
from boto.s3.bucket import Bucket

logging.basicConfig(filename="s3_download.log", level=logging.INFO)

def download_file(key_name):
    # It's important to download the key over a new connection in each greenlet
    conn = S3Connection("KEY", "SECRET")
    bucket = Bucket(connection=conn, name="BUCKET")
    key = bucket.get_key(key_name)
    try:
        key.get_contents_to_filename(key.name)
    except Exception:
        logging.info(key.name + ":" + "FAILED")

if __name__ == "__main__":
    conn = S3Connection("KEY", "SECRET")
    bucket = Bucket(connection=conn, name="BUCKET")

    logging.info("Fetching bucket list")
    bucket_list = bucket.list(prefix="PREFIX")

    logging.info("Creating a pool")
    pool = GreenPool(size=20)

    logging.info("Saving files in bucket...")
    for key in bucket_list:
        pool.spawn_n(download_file, key.key)
    pool.waitall()
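For example, if the script above is saved as s3_download.py (the filename is only illustrative), it can be started with nohup python s3_download.py & so it keeps running after the shell session closes; failed keys are recorded in s3_download.log.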


Use eventlet to give you I/O parallelism, write a simple function to download one object using urllib, then use a GreenPile to map that function over a list of input URLs -- a pile with 50 to 100 greenlets should do...
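
For reference, a minimal sketch of that GreenPile approach, using Python 3's urllib.request and assuming the object URLs have already been collected into a list (the urls list, the saved filenames, and the pile size below are illustrative placeholders, not something from the answer):

import eventlet
eventlet.monkey_patch()  # green the socket layer so urllib calls don't block the hub

import os
from urllib.request import urlopen

def download(url):
    # Download one object and save it under its basename.
    filename = os.path.basename(url)
    with urlopen(url) as response, open(filename, "wb") as out:
        out.write(response.read())
    return filename

# Placeholder list -- in practice this would come from a bucket listing.
urls = [
    "https://BUCKET.s3.amazonaws.com/PREFIX/key-000001",
    "https://BUCKET.s3.amazonaws.com/PREFIX/key-000002",
]

pile = eventlet.GreenPile(100)  # 50 to 100 greenlets, as suggested above
for url in urls:
    pile.spawn(download, url)

# Iterating the pile yields each download's return value, in spawn order.
for filename in pile:
    print("saved", filename)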