
Hadoop Map/Reduce vs built-in Map/Reduce


My answer is based on my knowledge and experience of Hadoop MR and on what I have learned of MongoDB MR. Let's look at the major differences and then try to define criteria for selection. The differences are:

  1. Hadoop's MR can be written in Java, while MongoDB's is in JavaScript.
  2. Hadoop's MR is capable of utilizing all cores, while MongoDB's is single threaded.
  3. Hadoop MR will not be collocated with the data, while Mongo DB's will be collocated.
  4. Hadoop MR has millions of engine hours behind it and can cope with many corner cases: massive output sizes, data skew, etc.
  5. There are higher level frameworks like Pig, Hive, Cascading built on top of the Hadoop MR engine.
  6. Hadoop MR is mainstream and a lot of community support is available.

From the above I can suggest the following criteria for selection:
Select MongoDB MR if you need simple group-by and filtering and do not expect heavy shuffling between map and reduce. In other words, something simple.
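
For illustration, a minimal sketch of such a simple group-and-sum with MongoDB's mapReduce, driven here through the MongoDB Java driver (the "shop" database, "orders" collection, and field names are hypothetical; the map and reduce functions themselves are JavaScript, per item 1 above):

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    public class OrdersByCustomer {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                // Hypothetical collection of order documents.
                MongoCollection<Document> orders =
                        client.getDatabase("shop").getCollection("orders");

                // map and reduce are JavaScript strings evaluated inside mongod.
                String map = "function() { emit(this.customerId, this.total); }";
                String reduce = "function(key, values) { return Array.sum(values); }";

                // Each result document carries the grouping key (_id) and the reduced value.
                for (Document result : orders.mapReduce(map, reduce)) {
                    System.out.println(result.toJson());
                }
            }
        }
    }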

Select Hadoop MR if you're going to run complicated, computationally intensive MR jobs (for example, some regression calculations). Having a large or unpredictable volume of data between map and reduce also suggests Hadoop MR.
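
By contrast, here is a minimal sketch of what a Hadoop MR job looks like in Java (the classic word count; class names are illustrative). A heavier analytical job would keep the same structure and replace the bodies of map() and reduce() with the real computation:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Mapper: emits (word, 1) for every token in the input line.
    class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sums the counts emitted for each word.
    class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }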

Java is a stronger language with more libraries, especially statistical ones. That should be taken into account.


As of MongoDB 2.4, MapReduce jobs are no longer single-threaded.

Also, see the Aggregation Framework for a higher-performance, declarative way to perform aggregations and other analytical workloads in MongoDB.
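
As a sketch, the group-and-sum from the earlier mapReduce example expressed as an aggregation pipeline via the Java driver (same hypothetical "shop"/"orders" collection and fields):

    import static com.mongodb.client.model.Accumulators.sum;
    import static com.mongodb.client.model.Aggregates.group;
    import static com.mongodb.client.model.Aggregates.match;
    import static com.mongodb.client.model.Filters.gte;

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import java.util.Arrays;
    import org.bson.Document;

    public class OrdersAggregation {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> orders =
                        client.getDatabase("shop").getCollection("orders");

                orders.aggregate(Arrays.asList(
                        match(gte("total", 10)),                     // $match: filter first
                        group("$customerId", sum("total", "$total")) // $group: sum per customer
                )).forEach(doc -> System.out.println(doc.toJson()));
            }
        }
    }

The pipeline runs inside the server as native operators rather than interpreted JavaScript, which is where the performance advantage comes from.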


Item 3 is certainly incorrect when it comes to Hadoop. Colocating processing with the data is part of the foundation of Hadoop.