SequenceFile is not created in Hadoop


You don't need to worry about creating your own sequence files. MapReduce has an output format that does it automatically.

So, in your driver class you would use:

job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(IntWritable.class);
job.setOutputFormatClass(SequenceFileOutputFormat.class);

and then in the reducer you'd write:

context.write(key, values.iterator().next());

and delete the setup method entirely.

As an aside, it doesn't look like you need a reducer at all. If you're not doing any calculations in the reducer and you're not doing anything with grouping (which I presume you're not), why not just delete it? With job.setOutputFormatClass(SequenceFileOutputFormat.class), your mapper output will be written straight to sequence files.
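For reference, a map-only driver along those lines might look like the sketch below. The class names (SeqFileDriver, MyMapper) and the argument-based paths are placeholders, not taken from your code:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class SeqFileDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "write sequence files");
        job.setJarByClass(SeqFileDriver.class);
        job.setMapperClass(MyMapper.class); // placeholder for your existing mapper

        // No reducer: mapper output is written directly to the sequence files
        job.setNumReduceTasks(0);

        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(IntWritable.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Note that with zero reduce tasks you get one output file per mapper, which is where the setNumReduceTasks(1) option below comes in if a single file matters to you.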

If you do only want one output file, set

job.setNumReduceTasks(1);

And provided your final data isn't larger than one block, you'll get the output you want.

It's also worth noting that you're currently outputting only one value per key. Make sure that's actually what you want; if it isn't, add a loop in the reducer that iterates over all the values.
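If you do want every value, the reducer would look something like this sketch, assuming IntWritable keys and values to match the driver settings (the class name is a placeholder):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class AllValuesReducer
        extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {

    @Override
    protected void reduce(IntWritable key, Iterable<IntWritable> values,
                          Context context)
            throws IOException, InterruptedException {
        // Emit every value for this key, not just the first one
        for (IntWritable value : values) {
            context.write(key, value);
        }
    }
}
```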