
Scalable Image Storage


We have been using CouchDB for that, saving images as attachments. But after a year, the multi-dozen-GB CouchDB database files turned out to be a headache: for example, CouchDB replication still has issues with very large documents.

So we rewrote our software to use CouchDB for the image metadata and Amazon S3 for the actual image storage. The code is available at http://github.com/hudora/huImages
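
For illustration, the split looks roughly like this. A minimal Python sketch, assuming boto3 and a plain HTTP CouchDB endpoint; the bucket name, database URL, and document fields are made up, and this is not the actual huImages code:

    import uuid
    import boto3
    import requests

    s3 = boto3.client("s3")
    COUCH = "http://localhost:5984/images"   # assumed CouchDB database URL
    BUCKET = "my-image-bucket"               # hypothetical S3 bucket

    def store_image(data: bytes, content_type: str = "image/jpeg") -> str:
        """Put the raw bytes in S3 and only the metadata in CouchDB."""
        image_id = uuid.uuid4().hex
        s3.put_object(Bucket=BUCKET, Key=image_id, Body=data,
                      ContentType=content_type)
        doc = {"_id": image_id,
               "s3_key": image_id,
               "content_type": content_type,
               "size": len(data)}
        requests.put(f"{COUCH}/{image_id}", json=doc).raise_for_status()
        return image_id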

You might want to set up an Amazon S3-compatible storage service on-site for your project. This keeps you flexible and keeps the Amazon option open, without requiring external services for now. Walrus seems to be becoming the most popular and scalable S3 clone.
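
With boto3, for example, switching between Amazon and an on-site clone is just a matter of the endpoint; the URL and credentials below are placeholders:

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://storage.internal:8773",  # your on-site S3 clone
        aws_access_key_id="local-access-key",
        aws_secret_access_key="local-secret-key",
    )
    # The rest of the code stays unchanged -- drop endpoint_url (or point it
    # at the real AWS endpoint) if you move to Amazon later.
    s3.put_object(Bucket="my-image-bucket", Key="example.jpg", Body=b"...")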

I also urge you to look into the design of LiveJournal with their excellent open-source MogileFS and Perlbal offerings. This combination is probably the most famous image-serving setup.

The Flickr architecture can also be an inspiration, although Flickr does not offer open-source software to the public the way LiveJournal does.


"Additional question: CouchDB does save blobs via Base64."

CouchDB does not save blobs as Base64; they are stored as straight binary. When retrieving a JSON document with ?attachments=true, we convert the on-disk binary to Base64 in order to add it safely to JSON, but that's purely a presentation-level thing.

See Standalone Attachments.

CouchDB serves attachments with the content type they are stored with; it's possible, in fact common, to serve HTML, CSS, and GIF/PNG/JPEG attachments directly to browsers.
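
A short Python sketch of both behaviours, assuming a local CouchDB and hypothetical document/attachment names:

    import base64
    import requests

    DB = "http://localhost:5984/images"   # assumed database URL

    # Create a document, then attach a JPEG with its real content type.
    rev = requests.put(f"{DB}/photo-1", json={}).json()["rev"]
    with open("photo.jpg", "rb") as f:
        requests.put(f"{DB}/photo-1/original.jpg",
                     params={"rev": rev},
                     headers={"Content-Type": "image/jpeg"},
                     data=f.read())

    # Standalone fetch: raw binary, served as image/jpeg -- a browser can
    # point an <img> tag straight at this URL.
    raw = requests.get(f"{DB}/photo-1/original.jpg")
    assert raw.headers["Content-Type"] == "image/jpeg"

    # Inline fetch: the same bytes come back Base64-encoded inside the JSON,
    # purely so they can be embedded safely in the document representation.
    doc = requests.get(f"{DB}/photo-1", params={"attachments": "true"}).json()
    inline = base64.b64decode(doc["_attachments"]["original.jpg"]["data"])
    assert inline == raw.content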

Attachments can be streamed and, as of CouchDB 1.1, even support the Range header (for media streaming and/or resuming an interrupted download).
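
A ranged read against the same attachment might look like this (byte offsets are illustrative):

    import requests

    resp = requests.get("http://localhost:5984/images/photo-1/original.jpg",
                        headers={"Range": "bytes=0-1023"})
    print(resp.status_code)    # 206 Partial Content
    print(len(resp.content))   # the first 1024 bytes of the attachment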


Use Seaweed-FS (formerly called Weed-FS), an implementation of Facebook's Haystack paper.

Seaweed-FS is very flexible and pared down to the basics. It was created to store billions of images and serve them fast.
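
A minimal sketch of its HTTP API, assuming a master server on localhost:9333 with a local volume server:

    import requests

    # 1. Ask the master for a file id and a volume server to write to.
    assign = requests.get("http://localhost:9333/dir/assign").json()
    fid, volume = assign["fid"], assign["url"]

    # 2. Upload the image bytes to that volume server under the assigned fid.
    with open("photo.jpg", "rb") as f:
        requests.post(f"http://{volume}/{fid}", files={"file": f})

    # 3. The image is now readable (and cacheable) straight over HTTP.
    url = f"http://{volume}/{fid}"
    print(url, requests.get(url).status_code)   # 200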