Cached Spark RDD (read from Sequence File) has invalid entries, how do I fix this?

hadoop


Please refer to the comments on `sequenceFile`:

/** Get an RDD for a Hadoop SequenceFile with given key and value types.
 *
 * '''Note:''' Because Hadoop's RecordReader class re-uses the same Writable object for each
 * record, directly caching the returned RDD or directly passing it to an aggregation or shuffle
 * operation will create many references to the same object.
 * If you plan to directly cache, sort, or aggregate Hadoop writable objects, you should first
 * copy them using a `map` function.
 */
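To illustrate the caveat, here is a minimal sketch; the path and the Text key/value types are placeholders, not taken from the question:

import org.apache.hadoop.io.Text

// sc is the SparkContext (e.g. from spark-shell)
val rdd = sc.sequenceFile("/path/to/data.seq", classOf[Text], classOf[Text])

// Unsafe: the RecordReader reuses the same Text instances, so caching this
// RDD directly stores many references to one repeatedly overwritten object.
// rdd.cache()

// Safe: copy each Writable in a map before caching, as the Scaladoc advises.
val copied = rdd.map { case (k, v) => (new Text(k), new Text(v)) }.cache()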


The piece of code below worked for me. Instead of getBytes, I used copyBytes:

import org.apache.hadoop.io.{BytesWritable, Text}

// copyBytes returns a fresh array for each record, so the decoded values
// are not affected by the reused Writable buffer.
val response = sc.sequenceFile(inputPathConcat, classOf[Text], classOf[BytesWritable])
  .map(x => Text.decode(x._2.copyBytes()))
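A note on the design choice: getBytes() hands back the reused backing array (possibly padded beyond the valid length), so entries built from it end up pointing at stale or garbage data, while copyBytes() allocates a fresh array trimmed to getLength(). Because the mapped RDD now holds plain immutable Strings, caching it is safe if you intend to reuse it, for example:

response.cache()  // safe: Strings are copies, no shared Writable references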