
Splitting Reducer output in Hadoop


I'm curious as to why you cannot just use more reducers, but I will take you at your word.

One option is to use MultipleOutputs and write multiple files from each reducer. For example, say that the output file for each reducer is 1GB and you want 256MB files instead. This means you need to write 4 files per reducer rather than one.

In your job driver, do this:

JobConf conf = ...;

// You should probably pass this in as a parameter rather than hardcoding 4.
conf.setInt("outputs.per.reducer", 4);

// This sets up the infrastructure to write multiple files per reducer.
MultipleOutputs.addMultiNamedOutput(conf, "multi", YourOutputFormat.class,
    YourKey.class, YourValue.class);

In your reducer, do this:

@Override
public void configure(JobConf conf) {
  numFiles = conf.getInt("outputs.per.reducer", 1);
  multipleOutputs = new MultipleOutputs(conf);
  // other init stuff
  ...
}

@Override
public void reduce(YourKey key,
                   Iterator<YourValue> valuesIter,
                   OutputCollector<OutKey, OutVal> ignoreThis,
                   Reporter reporter) throws IOException {
    // Do your business logic just as you're doing currently.
    OutKey outputKey = ...;
    OutVal outputVal = ...;

    // Now this is where it gets interesting. Hash the value to find
    // which output file the data should be written to. Don't use the
    // key since all the data will be written to one file if the number
    // of reducers is a multiple of numFiles.
    int fileIndex = (outputVal.hashCode() & Integer.MAX_VALUE) % numFiles;

    // Now use multiple outputs to actually write the data.
    // This will create output files named: multi_0-r-00000, multi_1-r-00000,
    // multi_2-r-00000, multi_3-r-00000 for reducer 0. For reducer 1, the files
    // will be multi_0-r-00001, multi_1-r-00001, multi_2-r-00001, multi_3-r-00001.
    multipleOutputs.getCollector("multi", Integer.toString(fileIndex), reporter)
      .collect(outputKey, outputVal);
}

@Override
public void close() throws IOException {
   // You must do this!!!!
   multipleOutputs.close();
}

This pseudo code was written with the old mapred API in mind. Equivalent APIs exist in the new mapreduce API, though, so either way you should be all set.
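For reference, here is a minimal sketch of the same idea against the new org.apache.hadoop.mapreduce API. This is my own illustration, not code from the job above; the "outputs.per.reducer" property, the "multi" named output, and the Text key/value types are placeholders for whatever your job actually uses, and the reducer just passes records through so the splitting logic is easy to see.

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class SplitOutputReducer extends Reducer<Text, Text, Text, Text> {

  // In the driver (mirrors the JobConf setup above):
  //   job.getConfiguration().setInt("outputs.per.reducer", 4);
  //   MultipleOutputs.addNamedOutput(job, "multi", TextOutputFormat.class,
  //       Text.class, Text.class);

  private MultipleOutputs<Text, Text> multipleOutputs;
  private int numFiles;

  @Override
  protected void setup(Context context) {
    numFiles = context.getConfiguration().getInt("outputs.per.reducer", 1);
    multipleOutputs = new MultipleOutputs<Text, Text>(context);
  }

  @Override
  protected void reduce(Text key, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    for (Text value : values) {
      // Same idea as above: hash the value, not the key, to pick a file.
      int fileIndex = (value.hashCode() & Integer.MAX_VALUE) % numFiles;
      // Writes to multi0-r-00000, multi1-r-00000, ... for reducer 0.
      multipleOutputs.write("multi", key, value, "multi" + fileIndex);
    }
  }

  @Override
  protected void cleanup(Context context) throws IOException, InterruptedException {
    // Just like the old API, you must close MultipleOutputs here.
    multipleOutputs.close();
  }
}

If you only ever write through MultipleOutputs, you may also want LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class) in the driver so the job doesn't create empty part-r-* files for the unused default output.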


There's no property to do this. You'll need to write your own OutputFormat and RecordWriter.
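In case it helps, below is a rough sketch (my own untested illustration, not anything shipped with Hadoop) of what such a custom output format could look like: a FileOutputFormat whose RecordWriter rolls over to a new part file once a byte threshold is crossed. The class names, the "max.output.bytes" property, and the Text/Text types are all made up for the example.

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RollingTextOutputFormat extends FileOutputFormat<Text, Text> {

  @Override
  public RecordWriter<Text, Text> getRecordWriter(TaskAttemptContext context)
      throws IOException {
    Configuration conf = context.getConfiguration();
    // Hypothetical property: maximum bytes per output file (default 256MB).
    long maxBytes = conf.getLong("max.output.bytes", 256L * 1024 * 1024);
    // Base work file for this task, e.g. .../part-r-00000; a split index is appended.
    Path basePath = getDefaultWorkFile(context, "");
    FileSystem fs = basePath.getFileSystem(conf);
    return new RollingRecordWriter(fs, basePath, maxBytes);
  }

  private static class RollingRecordWriter extends RecordWriter<Text, Text> {
    private final FileSystem fs;
    private final Path basePath;
    private final long maxBytes;
    private FSDataOutputStream out;
    private long bytesWritten = 0;
    private int splitIndex = 0;

    RollingRecordWriter(FileSystem fs, Path basePath, long maxBytes) throws IOException {
      this.fs = fs;
      this.basePath = basePath;
      this.maxBytes = maxBytes;
      openNextFile();
    }

    private void openNextFile() throws IOException {
      if (out != null) {
        out.close();
      }
      // Produces part-r-00000-0, part-r-00000-1, ... under the task's work dir.
      out = fs.create(new Path(basePath + "-" + splitIndex++), false);
      bytesWritten = 0;
    }

    @Override
    public void write(Text key, Text value) throws IOException {
      byte[] line = (key + "\t" + value + "\n").getBytes(StandardCharsets.UTF_8);
      // Roll to the next file before this record would push us past the limit.
      if (bytesWritten > 0 && bytesWritten + line.length > maxBytes) {
        openNextFile();
      }
      out.write(line);
      bytesWritten += line.length;
    }

    @Override
    public void close(TaskAttemptContext context) throws IOException {
      out.close();
    }
  }
}

You'd wire it in with job.setOutputFormatClass(RollingTextOutputFormat.class) and set "max.output.bytes" in the job configuration; the usual FileOutputCommitter then promotes all of the rolled files when the task commits.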