
mapreduce conditional probability


The short answer to your question is that you don't communicate directly between mappers... that goes against the map-reduce pattern of computation. Instead, you need to structure your algorithm so that the key/value pairs output by your map phase can be consumed and aggregated by your reducer phase in an intelligent way.

From your background information in the question it is clear you understand that calculating the conditional probabilities you are interested in is really just an exercise in counting. The usual pattern here is to do all the counting in one map-reduce pass and then divide the appropriate quantities afterwards (trying to work the division into the map-reduce pass itself adds needless complexity).
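To make that final division concrete with made-up numbers: the conditional probability of seeing item y given item x is just the pair count divided by the single-item count, so if the pair x-y were counted 30 times and x alone 120 times, then P(y | x) = 30 / 120 = 0.25.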

You really only need a data structure to keep track of the things you are trying to count. You could do this with a set of arrays with implicit indexing if speed is a must, but the exposition is simplest in terms of a single hashmap. Because we aren't interested in raw performance here, that is what the code below uses.
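If you prefer, collections.defaultdict gives the same hashmap-of-counts behaviour with less boilerplate, since missing keys start at zero. A minimal sketch of the counting idea on a couple of made-up baskets:

from collections import defaultdict

counts = defaultdict(int)                          # missing keys start at 0

baskets = [['a', 'b'], ['a', 'c'], ['a', 'b']]     # toy stand-in for the streamed input
for basket in baskets:
    basket.sort()                                  # keep pair keys in a canonical order
    for item in basket:
        counts[item] += 1                          # single-item counts
    for i in range(len(basket)):
        for j in range(i + 1, len(basket)):
            counts[basket[i] + '-' + basket[j]] += 1   # pair counts

print(dict(counts))   # {'a': 3, 'b': 2, 'a-b': 2, 'c': 1, 'a-c': 1}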

Mapper code in Python for Hadoop streaming

import sys

output = {}

for line in sys.stdin:
    temp = line.strip().split('\t')
    # sort the input so that all occurrences of items x and y appear with x before y
    temp.sort()
    # count the occurrences of all the single items
    for item in temp:
        if item in output:
            output[item] += 1
        else:
            output[item] = 1
    # count the occurrences of each pair of items
    for i in range(len(temp)):
        for j in range(i + 1, len(temp)):
            key = temp[i] + '-' + temp[j]
            if key in output:
                output[key] += 1
            else:
                output[key] = 1
    # you can triple-nest the loop if you want to count the occurrences of each
    # 3-item combination, but be warned that the number of combinations gets large
    # quickly (for 100 items as in your example there would be ~160K combinations)

# this point of the program is reached after all of the data has been streamed in through stdin
# output the keys and values of our output dictionary with a tab separating them
for key, count in output.items():
    print(key + '\t' + str(count))
# end mapper code

The code for the reducer is now the same as in the word count examples that are so prolific. An example of Python code for map-reduce streaming can be found here. The output of the map-reduce program will be rows consisting of a key describing what was counted and the number of occurrences: one row per single item and one row per pair. From there you can write a program to calculate the conditional probabilities that you are interested in.
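For completeness, here is a sketch of what such a streaming reducer typically looks like. It just sums the counts for each key, relying on the fact that Hadoop streaming sorts the mapper output by key before handing it to the reducer; there is nothing specific to this problem in it:

import sys

current_key = None
current_count = 0

# all values for a given key arrive consecutively because the shuffle sorts by key
for line in sys.stdin:
    key, value = line.strip().split('\t')
    count = int(value)
    if key == current_key:
        current_count += count
    else:
        if current_key is not None:
            print(current_key + '\t' + str(current_count))
        current_key = key
        current_count = count

# emit the last key
if current_key is not None:
    print(current_key + '\t' + str(current_count))

And a sketch of the post-processing step, assuming the job's output has been pulled down to a local file; the file name, item names, and helper function below are all placeholders. P(y | x) is simply the pair count divided by the single-item count:

counts = {}
with open('part-00000') as f:                     # placeholder for the job's output file
    for line in f:
        key, value = line.strip().split('\t')
        counts[key] = int(value)

def conditional_probability(x, y):
    """P(y | x): fraction of input lines containing x that also contain y."""
    pair = '-'.join(sorted([x, y]))               # the mapper emitted pair keys in sorted order
    return counts.get(pair, 0) / float(counts[x])

print(conditional_probability('item_a', 'item_b'))   # placeholder item names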