
Permutations with MapReduce


Since the file has n input lines, pairing every line with every line (including itself) gives n^2 output pairs, so it makes sense to have n tasks each perform n of those operations. I believe you could do this (assuming only one file):
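To illustrate the n^2 pairing before bringing Hadoop into it, here is a plain-Java sketch of the cross product (the class and method names are my own, not from any Hadoop API):

```java
import java.util.ArrayList;
import java.util.List;

public class PairCount {
    // Pair every line with every line (ordered pairs, self-pairs included),
    // which is exactly what the map step below emits per input line.
    public static List<String[]> crossPairs(List<String> lines) {
        List<String[]> pairs = new ArrayList<>();
        for (String a : lines)
            for (String b : lines)
                pairs.add(new String[]{a, b});
        return pairs;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("title1", "title2", "title3");
        // n = 3 input lines yield n^2 = 9 ordered pairs
        System.out.println(crossPairs(lines).size());
    }
}
```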

Put your input file into the DistributedCache so it is accessible read-only to your mappers/reducers. Create an input split on each line of the file (as in WordCount), so each mapper receives one line (e.g. title1 in your example). Then read the lines of the same file out of the DistributedCache and emit your key/value pairs: the key is the mapper's input line, and the values are each line of the file read from the DistributedCache.

In this model, you only need a Map step; you can make the job map-only by setting the number of reduce tasks to zero.
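A minimal driver sketch for wiring this up, assuming the old-style DistributedCache API and a PermuteMapper class like the one below; all paths and the file name are placeholders you would substitute:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PermuteDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Make file.txt available read-only on every task node.
    DistributedCache.addCacheFile(new Path("/user/you/file.txt").toUri(), conf);

    Job job = Job.getInstance(conf, "permutations");
    job.setJarByClass(PermuteDriver.class);
    job.setMapperClass(PermuteMapper.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    // Map-only job: no reduce step is needed.
    job.setNumReduceTasks(0);

    // The same file is both the job input and the cached file.
    FileInputFormat.addInputPath(job, new Path("/user/you/file.txt"));
    FileOutputFormat.setOutputPath(job, new Path("/user/you/perms-out"));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```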

Something like:

      public static class PermuteMapper
          extends Mapper<Object, Text, Text, Text> {

        private static final String IN_FILENAME = "file.txt";

        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          String inputLine = value.toString();
          // set the property mapred.cache.files in your
          // configuration for the file to be available
          Configuration conf = context.getConfiguration();
          Path[] cachedPaths = DistributedCache.getLocalCacheFiles(conf);
          if (cachedPaths[0].getName().equals(IN_FILENAME)) {
            // function defined elsewhere
            String[] cachedLines = getLinesFromPath(cachedPaths[0]);
            for (String line : cachedLines)
              context.write(new Text(inputLine), new Text(line));
          }
        }
      }
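The getLinesFromPath function is left undefined above. One hypothetical implementation is shown here taking the local path as a String (the mapper could pass cachedPaths[0].toString()); it uses plain java.io, since the DistributedCache path resolves to an ordinary file on the task node's local filesystem:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class CacheLines {
    // Read every line of the locally cached file into an array.
    public static String[] getLinesFromPath(String localPath) throws IOException {
        List<String> lines = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(localPath))) {
            String line;
            while ((line = reader.readLine()) != null)
                lines.add(line);
        }
        return lines.toArray(new String[0]);
    }
}
```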