
Data Compression Algorithms


There are a ton of compression algorithms out there. What you need here is a lossless compression algorithm: one that compresses data so it can be decompressed back to exactly what was given before compression. The opposite would be a lossy compression algorithm, which discards some of the data in exchange for a smaller file. PNG images use lossless compression, while JPEG images can, and often do, use lossy compression.
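A quick way to see the lossless guarantee is Python's zlib module, which implements DEFLATE (the same LZ77 + Huffman combination ZIP uses): the round trip returns the exact original bytes.

import zlib

original = b"aaaaaaaabbbbbcccdd" * 100   # repetitive data compresses well

compressed = zlib.compress(original)     # DEFLATE: LZ77 + Huffman coding
restored = zlib.decompress(compressed)

assert restored == original              # lossless: exact round trip
print(len(original), "->", len(compressed), "bytes")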

Some of the most widely known compression algorithms include:

ZIP archives use DEFLATE, a combination of LZ77 and Huffman coding, to give fast compression and decompression times and reasonably good compression ratios.

LZ77 is pretty much a generalized form of RLE (run-length encoding), and it will often yield much better results.
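To see why LZ77 subsumes RLE, note that a run of repeated bytes is just a match against the byte immediately before it. Here is a toy sketch, assuming a simple (offset, length, next byte) triple output; real implementations use sliding windows and bit-packing, and the function names here are mine:

def lz77_compress(data: bytes, window: int = 255):
    """Greedy LZ77: emit (offset, length, next_byte) triples."""
    i, out = 0, []
    while i < len(data):
        best_off = best_len = 0
        for j in range(max(0, i - window), i):
            length = 0
            # Matches may overlap the current position; that is
            # exactly how LZ77 encodes runs (the RLE case).
            while i + length < len(data) - 1 and data[j + length] == data[i + length]:
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    out = bytearray()
    for off, length, nxt in triples:
        for _ in range(length):
            out.append(out[-off])   # copy earlier output byte by byte
        out.append(nxt)
    return bytes(out)

data = b"aaaaaaaabbbbbcccdd"
assert lz77_decompress(lz77_compress(data)) == data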

Huffman coding assigns the fewest bits to the most frequently occurring bytes. Imagine a text file that looked like this:

aaaaaaaabbbbbcccdd

A typical implementation of Huffman would result in the following map:

Bits Character
   0         a
  10         b
 110         c
 111         d

So the file would be compressed to this:

00000000 10101010 10110110 11011111 10000000
                                     ^^^^^^^
                                     Padding bits required

18 bytes (144 bits) go down to 5 (33 bits of codes plus 7 padding bits). Of course, the table must be included in the file, which is why this algorithm works better with more data :P
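If you want to build such a table yourself, here is a minimal sketch using Python's heapq; the function name is mine, and a real encoder would also serialize the table and pack the bits. Tie-breaking may swap which of c/d gets which 3-bit code, but the code lengths match the table above.

import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict[int, str]:
    """Build a byte -> bit-string map from byte frequencies (sketch only)."""
    freq = Counter(data)
    # Heap entries are (weight, tiebreaker, node); leaves are ints,
    # internal nodes are (left, right) pairs. The unique tiebreaker
    # keeps tuple comparison away from the nodes themselves.
    heap = [(w, n, sym) for n, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:                      # degenerate case: one distinct byte
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (left, right)))
        count += 1
    codes: dict[int, str] = {}
    def walk(node, prefix: str) -> None:
        if isinstance(node, int):
            codes[node] = prefix
        else:
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

print(huffman_codes(b"aaaaaaaabbbbbcccdd"))
# {97: '0', 98: '10', 100: '110', 99: '111'} or similar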

Alex Allain has a nice article on the Huffman Compression Algorithm in case the Wiki doesn't suffice.

Feel free to ask for more information. This topic is pretty darn wide.


My paper A Survey Of Architectural Approaches for Data Compression in Cache and Main Memory Systems (permalink here) reviews many compression algorithms and also techniques for using them in modern processors. It reviews both research-grade and commercial-grade compression algorithms/techniques, so you may find one that has not yet been implemented in an ASIC.


Here are some lossless algorithms (the original data can be perfectly recovered using these):

  • Huffman coding
  • LZ78 (and its LZW variant)
  • LZ77
  • Arithmetic coding
  • Sequitur
  • Prediction by partial matching (PPM)

Many of the well-known formats like PNG or GIF use variants or combinations of these.
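GIF, for instance, is built on LZW. A bare-bones LZW round trip in Python might look like this (a sketch with my own names; GIF additionally packs codes at variable bit widths and periodically resets the table):

def lzw_compress(data: bytes) -> list[int]:
    table = {bytes([i]): i for i in range(256)}  # start with all single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                     # keep extending the current phrase
        else:
            out.append(table[w])       # emit code for the longest known phrase
            table[wc] = len(table)     # learn the new phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    if not codes:
        return b""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:                          # code the encoder just created
            entry = w + w[:1]
        out += entry
        table[len(table)] = w + entry[:1]  # mirror the encoder's table
        w = entry
    return bytes(out)

data = b"TOBEORNOTTOBEORTOBEORNOT"
assert lzw_decompress(lzw_compress(data)) == data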

On the other hand, there are lossy algorithms too (they compromise accuracy to compress the data further, which often works pretty well). State-of-the-art lossy techniques combine ideas from differential coding, quantization, and the discrete cosine transform (DCT), among others.
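As a tiny illustration of the quantization idea (a toy, not any particular codec): snapping samples to a coarse grid leaves small integers that are cheap to store, but the round trip is only approximate.

def quantize(samples, step=0.25):
    # Uniform quantization: keep only the nearest multiple of `step`.
    return [round(x / step) for x in samples]

def dequantize(levels, step=0.25):
    return [level * step for level in levels]

samples = [0.11, 0.52, 0.49, 0.87]
restored = dequantize(quantize(samples))
print(restored)  # [0.0, 0.5, 0.5, 0.75] -- close to, but not exactly, the input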

To learn more about data compression, I recommend Sayood's Introduction to Data Compression: https://www.elsevier.com/books/introduction-to-data-compression/sayood/978-0-12-809474-7. It is a very accessible introductory text. The 3rd edition is available as a PDF online.