hadoop stream, how to set partition?


Thanks to ryanbwork I've been able to solve this problem. Yay!

The right idea was indeed to create a key that consists of a concatenation of the values. To go a little further, it is also possible to create a key that looks like

<'1.0.foo.bar', {'0','foo','bar'}>
<'1.1.888.999', {'1','888','999'}>

Options can then be passed to hadoop so that it partitions on the first "part" of the key. If I'm not mistaken in my interpretation, it looks like

-partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
-D stream.map.output.field.separator=.  # I added some "." in the key
-D stream.num.map.output.key.fields=4   # 4 "sub-fields" are used to sort
-D num.key.fields.for.partition=1       # only one field is used to partition

This solution, based on what ryanbwork said, makes it possible to use more reducers while ensuring the data is properly split and sorted.
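
For illustration, here is a minimal Python mapper sketch of this approach. The tab-separated input layout, the field names, and the jar path and directories in the comment are assumptions on my part, not something from the original question. As far as I can tell from the streaming docs, map.output.key.field.separator should also be set to "." so that the partitioner splits the key on the same separator.

#!/usr/bin/env python
# Hypothetical streaming mapper: builds a dot-separated composite key
# key.linetype.value1.value2 so the shuffle sorts on all four sub-fields
# while KeyFieldBasedPartitioner partitions on the first one only.
#
# Example invocation (jar path, directories and script names are placeholders):
#   hadoop jar hadoop-streaming.jar \
#       -D stream.map.output.field.separator=. \
#       -D stream.num.map.output.key.fields=4 \
#       -D map.output.key.field.separator=. \
#       -D num.key.fields.for.partition=1 \
#       -partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner \
#       -input in/ -output out/ \
#       -mapper mapper.py -reducer reducer.py \
#       -file mapper.py -file reducer.py
import sys

for line in sys.stdin:
    # assumed input format: key \t linetype \t value1 \t value2
    parts = line.rstrip('\n').split('\t')
    if len(parts) != 4:
        continue  # skip malformed records in this sketch
    key, linetype, value1, value2 = parts
    composite = '.'.join([key, linetype, value1, value2])
    # the "." after the composite key separates key from value for streaming;
    # this sketch assumes none of the fields contain a "." themselves
    print('%s.%s,%s,%s' % (composite, linetype, value1, value2))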


After reading this post I'd propose modifying your mapper such that it returns pairs whose 'keys' include your key value, your linetype value, and the value1/value2 values all concatenated together. You'd keep the 'value' part of the pair the same. So for example, you'd return the following pairs to represent your first two examples:

<'10foobar',{'0','foo','bar'}>
<'11888999',{'1','888','999'}>

Now if you were to utilize a single reducer, all of your records would get sent to the same reduce task and sorted in alphabetical order based on their 'key'. This would fulfill your requirement that pairs get sorted by key, then by linetype, then by value1 and finally value2, and you could access these values individually in the 'value' portion of the pair. I'm not very familiar with the different built-in partitioner/sort classes, but I'd assume you could just use the defaults and get this to work.
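
For what it's worth, a minimal Python sketch of the mapper described here could look like the following (again, the tab-separated input layout and field names are assumptions on my part). Running the job with -D mapred.reduce.tasks=1 (or mapreduce.job.reduces on newer versions) forces the single reducer this answer relies on.

#!/usr/bin/env python
# Hypothetical mapper for the single-reducer variant: the four fields are
# simply concatenated into one key, e.g. '1' + '0' + 'foo' + 'bar' -> '10foobar',
# and the original values are kept on the value side, so the default
# partitioner and sort are enough.
import sys

for line in sys.stdin:
    parts = line.rstrip('\n').split('\t')
    if len(parts) != 4:
        continue  # skip malformed records in this sketch
    key, linetype, value1, value2 = parts
    composite = key + linetype + value1 + value2
    print('%s\t%s,%s,%s' % (composite, linetype, value1, value2))

One caveat: plain concatenation can become ambiguous when field lengths vary, which is one reason the dot-separated key in the accepted answer above is easier to partition and sort on reliably.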