In practice, how many machines do you need in order for Hadoop / MapReduce / Mahout to speed up very parallelizable computations?


A "plain" Java program and a Hadoop-based, MapReduce-based implementation are very different beasts and are hard to compare. It's not like Hadoop parallelizes a little bit of your program; it is written in an entirely different form from top to bottom.

Hadoop has overheads: the cost of starting a job and of spinning up workers such as mappers and reducers, plus a lot more time spent serializing/deserializing data, writing it locally, and transferring it to HDFS.

A Hadoop-based implementation will always consume more total resources, so it's something to avoid unless you have to. If the non-distributed computation fits on one machine, the simplest practical advice is: don't distribute. Save yourself the trouble.

In the case of Mahout recommenders, I can tell you that, very crudely, a Hadoop job incurs 2-4x more computation than a non-distributed implementation on the same data. Obviously that depends immensely on the algorithm and tuning choices. But to give you a number: I wouldn't bother with a Hadoop cluster of fewer than 4 machines.

Obviously, if your computation can't fit on one of your machines, you have no choice but to distribute. Then the tradeoff is what kind of wall-clock time you can allow versus how much computing power you can devote. The reference to Amdahl's law is right, though it doesn't consider the significant overhead of Hadoop. For example, to parallelize N ways, you need at least N mappers/reducers, and incur N times the per-mapper/reducer overhead. There's some fixed startup/shutdown time too.
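To make that concrete, here is a rough back-of-envelope model (my own sketch, not anything measured on a real cluster): wall-clock time is a fixed startup/shutdown cost, plus the serial work, plus the parallel work divided by N, plus a per-worker overhead that grows with N. All the numbers are hypothetical placeholders.

```java
// Hypothetical back-of-envelope model of Hadoop wall-clock time vs. number of workers.
public class HadoopSpeedupEstimate {
    public static void main(String[] args) {
        double jobStartupSec = 30;       // fixed job startup/shutdown cost (made up)
        double serialSec = 60;           // non-parallelizable work (made up)
        double parallelSec = 3600;       // perfectly parallelizable work (made up)
        double perWorkerOverheadSec = 5; // serialization/shuffle cost per mapper/reducer (made up)

        for (int n = 1; n <= 32; n *= 2) {
            double wallClock = jobStartupSec + serialSec
                    + parallelSec / n + perWorkerOverheadSec * n;
            System.out.printf("N=%2d workers -> ~%.0f seconds%n", n, wallClock);
        }
    }
}
```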


See Amdahl's Law

Amdahl's law is a model for the relationship between the expected speedup of parallelized implementations of an algorithm relative to the serial algorithm, under the assumption that the problem size remains the same when parallelized. For example, if for a given problem size a parallelized implementation of an algorithm can run 12% of the algorithm's operations arbitrarily quickly (while the remaining 88% of the operations are not parallelizable), Amdahl's law states that the maximum speedup of the parallelized version is 1/(1 – 0.12) = 1.136 times as fast as the non-parallelized implementation.

speedup(N) = 1 / ((1 − P) + P/N), which approaches 1 / (1 − P) as N grows, where P is the parallelizable fraction of the work and N is the number of workers.
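As a sanity check on the 12% example, a few lines of Java (mine, not from the answer above) evaluate Amdahl's law for a finite number of workers and for the limiting case:

```java
// Evaluates Amdahl's law for the 12% example from the text.
public class AmdahlExample {
    static double speedup(double parallelFraction, int workers) {
        return 1.0 / ((1.0 - parallelFraction) + parallelFraction / workers);
    }

    public static void main(String[] args) {
        double p = 0.12; // fraction of the work that can be parallelized
        System.out.printf("N=4:         %.3fx%n", speedup(p, 4));
        System.out.printf("N=1000:      %.3fx%n", speedup(p, 1000));
        System.out.printf("upper bound: %.3fx%n", 1.0 / (1.0 - p)); // about 1.136
    }
}
```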

Without specifics it's difficult to give a more detailed answer.


I know this has already been answered, but I'll throw my hat into the ring. I can't give you a general rule of thumb. The performance increase really depends on many factors:

  1. How parallel/mutually exclusive the components of the algorithm are.
  2. The size of the dataset
  3. The pre- and post-processing of the dataset [including the splitting/mapping and reducing/concatenating]
  4. Network traffic

If you have a highly connected algorithm like a Bayes net, a neural net, a Markov model, PCA, or EM, then a lot of the Hadoop program's time will go into getting instances processed, split, and recombined [assuming you have a large number of nodes per instance, i.e. more than one machine can handle]. In a situation like this, network traffic becomes more of an issue.

If you have an algorithm such as path finding or simulated annealing, it is easy to separate instances into their own map/reduce jobs. These types of algorithms can be very quick.
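For illustration only, here is a minimal sketch of what the mapper for such an embarrassingly parallel job might look like, assuming each input line is one independent instance; the class name and the solveInstance helper are made up, and the reducer (not shown) would only concatenate results:

```java
import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Each input line is treated as one independent instance, so mappers never need
// to talk to each other; all cross-machine traffic is the final result collection.
public class IndependentInstanceMapper
        extends Mapper<LongWritable, Text, Text, DoubleWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String instance = value.toString();
        double score = solveInstance(instance); // hypothetical per-instance computation
        context.write(new Text(instance), new DoubleWritable(score));
    }

    // Placeholder for e.g. one simulated-annealing run on a single instance.
    private double solveInstance(String instance) {
        return instance.length(); // stand-in for the real objective value
    }
}
```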