
How is a compressed file read during decompression?


No, the 5 GB file does not need to be read into memory. You can read as little as a byte at a time if you like and decompress it that way. gzip, bzip2, and every compression format I am aware of are streaming formats: you can read small chunks and decompress them serially, never having to seek backwards in the file. (The .ZIP format places header information at the end, so unzippers usually seek backwards from there to the entries. However, that is not required, and a .ZIP file can be both compressed and decompressed as a stream.)
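As a rough illustration, here is a minimal Java sketch that decompresses a gzip file of any size through a small fixed buffer, so memory use stays constant regardless of the file's size (the file path argument and buffer size are just placeholders):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class StreamingGunzip {
    public static void main(String[] args) throws IOException {
        // Stream-decompress a (possibly multi-GB) gzip file through a small buffer.
        try (GZIPInputStream in = new GZIPInputStream(new FileInputStream(args[0]))) {
            byte[] buf = new byte[8192];       // 8 KB working buffer
            long total = 0;
            int n;
            while ((n = in.read(buf)) != -1) { // read one small chunk at a time
                total += n;                    // process the chunk here instead of counting
            }
            System.out.println("Decompressed " + total + " bytes");
        }
    }
}
```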


gzipped files are not splittable, which means there will always be only one mapper reading the file in MapReduce, so the best practice is to unzip the file before putting it on HDFS. bzip2 files are splittable, so they are a better fit for Hadoop than gzipped files.
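You can see this distinction in Hadoop's own codec classes: BZip2Codec implements the SplittableCompressionCodec interface, while GzipCodec does not, which is how input formats decide whether a compressed file can be split. Here is a small sketch of that check, assuming hadoop-common is on the classpath (the file names are hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.SplittableCompressionCodec;

public class SplittabilityCheck {
    public static void main(String[] args) {
        // Resolve the codec from the file extension, as Hadoop's input formats do.
        CompressionCodecFactory factory = new CompressionCodecFactory(new Configuration());
        for (String name : new String[] {"data.gz", "data.bz2"}) {
            CompressionCodec codec = factory.getCodec(new Path(name));
            // Only codecs implementing SplittableCompressionCodec allow multiple splits.
            boolean splittable = codec instanceof SplittableCompressionCodec;
            System.out.println(name + " -> " + codec.getClass().getSimpleName()
                    + ", splittable: " + splittable);
        }
    }
}
```

Running this prints splittable: false for the .gz file and splittable: true for the .bz2 file, which is why a gzip input always ends up as a single map task.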