
Hadoop gzip compressed files


A file compressed with the GZIP codec cannot be split because of the way the codec works. A single split in Hadoop can be processed by only a single mapper, so a single GZIP file can be processed by only a single mapper.

There are at least three ways of working around that limitation:

  1. As a preprocessing step: uncompress the file and recompress it using a splittable codec (such as LZO)
  2. As a preprocessing step: uncompress the file, split it into smaller sets, and recompress them (see this, and the sketch after this list)
  3. Use this patch for Hadoop (which I wrote) that provides a way around this: Splittable Gzip
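For option 2, here is a minimal sketch in plain Java (file names and the 64 MB chunk size are just placeholders) that streams a large gzip file and rewrites it as several smaller gzip parts. Note that it cuts on raw byte counts; for line- or record-oriented data such as XML you would want to cut at record boundaries instead.

```java
import java.io.*;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class SplitAndRecompress {
    // Roughly 64 MB of uncompressed data per output chunk; tune to your HDFS block size.
    private static final long CHUNK_BYTES = 64L * 1024 * 1024;

    public static void main(String[] args) throws IOException {
        String input = args[0];               // e.g. big-file.gz (placeholder)
        byte[] buf = new byte[8192];
        int part = 0;
        long written = CHUNK_BYTES;           // forces the first chunk to be opened
        GZIPOutputStream out = null;

        try (GZIPInputStream in = new GZIPInputStream(new FileInputStream(input))) {
            int n;
            while ((n = in.read(buf)) > 0) {
                if (written >= CHUNK_BYTES) {
                    if (out != null) out.close();
                    // Start a new gzip-compressed part file.
                    out = new GZIPOutputStream(
                            new FileOutputStream(String.format("part-%05d.gz", part++)));
                    written = 0;
                }
                out.write(buf, 0, n);
                written += n;
            }
        } finally {
            if (out != null) out.close();
        }
    }
}
```

Each resulting part is small enough to be handled by its own mapper, which restores the parallelism you lose with one giant gzip file.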

HTH


This is one of the biggest misunderstandings about HDFS.

Yes, files compressed as plain gzip files are not splittable by MapReduce, but that does not mean that GZip as a codec has no value in HDFS and cannot be made splittable.

GZip as a codec can be used with RCFiles, Sequence Files, Avro Files, and many other file formats. When the Gzip codec is used within these splittable container formats you get the great compression and pretty good speed of Gzip, plus the splittable component.
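As an illustration, here is a minimal sketch of writing a block-compressed SequenceFile with the Gzip codec (the path and record contents are placeholders). Because compression is applied per block of records rather than to the file as a whole, the file stays splittable:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.CompressionType;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class GzipSequenceFileDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Instantiate the codec via ReflectionUtils so it picks up the configuration.
        CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

        // BLOCK compression gzips batches of records, so the SequenceFile
        // remains splittable even though Gzip itself is not.
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(new Path("demo.seq")),      // placeholder path
                SequenceFile.Writer.keyClass(LongWritable.class),
                SequenceFile.Writer.valueClass(Text.class),
                SequenceFile.Writer.compression(CompressionType.BLOCK, codec))) {
            for (long i = 0; i < 1000; i++) {
                writer.append(new LongWritable(i), new Text("record " + i));
            }
        }
    }
}
```

In a MapReduce job the equivalent is to call SequenceFileOutputFormat.setOutputCompressionType(job, CompressionType.BLOCK) together with FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class) when configuring the output.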


GZIP files cannot be split in any way, due to a limitation of the codec. 6.7 GB really isn't that big, so just decompress it on a single machine (it will take less than an hour) and copy the XML up to HDFS. Then you can process the Wikipedia XML with Hadoop.

Cloud9 contains a WikipediaPageInputFormat class that you can use to read the XML in Hadoop.
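A rough sketch of a job that plugs that input format in follows. It assumes Cloud9 is on the classpath; the package path, key type, and accessors such as getDocid() and getTitle() are assumptions that may differ between Cloud9 versions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import edu.umd.cloud9.collection.wikipedia.WikipediaPage;
import edu.umd.cloud9.collection.wikipedia.WikipediaPageInputFormat;

public class WikipediaTitles {
    // Map-only pass: emit each article's docid and title.
    public static class TitleMapper
            extends Mapper<LongWritable, WikipediaPage, Text, Text> {
        @Override
        protected void map(LongWritable key, WikipediaPage page, Context context)
                throws java.io.IOException, InterruptedException {
            // getDocid()/getTitle() are assumed accessor names on WikipediaPage.
            context.write(new Text(page.getDocid()), new Text(page.getTitle()));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "wikipedia titles");
        job.setJarByClass(WikipediaTitles.class);
        job.setInputFormatClass(WikipediaPageInputFormat.class);
        job.setMapperClass(TitleMapper.class);
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // uncompressed XML dump on HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```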