Native snappy library not available: this version of libhadoop was built without snappy support

Native snappy library not available: this version of libhadoop was built without snappy support


  1. One approach was to switch the job to a different Hadoop codec (BZip2 instead of Snappy), as below; a fuller sketch follows after this list. Note that CompressionType here is org.apache.hadoop.io.SequenceFile.CompressionType:

     sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress", "true")
     sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.type", CompressionType.BLOCK.toString)
     sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")
     sc.hadoopConfiguration.set("mapreduce.map.output.compress", "true")
     sc.hadoopConfiguration.set("mapreduce.map.output.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")

  2. The second approach was to pass --driver-library-path /usr/hdp/<whatever is your current version>/hadoop/lib/native/ as a parameter to the spark-submit command, so the driver process can find the native Hadoop libraries (including libsnappy); see the example invocation after this list.
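
Here is a minimal, self-contained sketch of the first approach. Only the hadoopConfiguration settings come from the answer above; the object name, the input/output paths, and the saveAsTextFile call are illustrative assumptions, not part of the original job:

    import org.apache.hadoop.io.SequenceFile.CompressionType
    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical driver; names and paths are placeholders.
    object BZip2OutputExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("bzip2-output-example"))

        // Ask Hadoop's output formats to compress with BZip2 instead of Snappy,
        // so the missing native snappy library is never needed.
        sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress", "true")
        sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.type", CompressionType.BLOCK.toString)
        sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")
        sc.hadoopConfiguration.set("mapreduce.map.output.compress", "true")
        sc.hadoopConfiguration.set("mapreduce.map.output.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")

        // Any Hadoop-backed write now uses the BZip2 codec (placeholder paths).
        sc.textFile("/tmp/input").saveAsTextFile("/tmp/output-bzip2")

        sc.stop()
      }
    }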
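And a sketch of the second approach as a spark-submit invocation; the class name and application jar are placeholders, and the native-library path keeps the version placeholder from above:

    spark-submit \
      --class com.example.MyJob \
      --driver-library-path /usr/hdp/<whatever is your current version>/hadoop/lib/native/ \
      my-job.jar

If the executors also need the native libraries (not just the driver), setting the analogous executor-side option, spark.executor.extraLibraryPath, to the same directory may be required as well.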