Is it possible to read MongoDB data, process it with Hadoop, and output it into an RDBMS (MySQL)? mongodb

Is it possible to read MongoDB data, process it with Hadoop, and output it into an RDBMS (MySQL)?


TL;DR: Set an output format that writes to an RDBMS in your Hadoop job:

 job.setOutputFormatClass( DBOutputFormat.class );

Several things to note:

  1. Exporting data from MongoDB to Hadoop using Sqoop is not possible. This is because Sqoop uses JDBC, which provides a call-level API for SQL-based databases, but MongoDB is not an SQL-based database. You can look at the «MongoDB Connector for Hadoop» to do this job. The connector is available on GitHub. (Edit: as you point out in your update.)

  2. Sqoop exports are not performed in a single transaction by default. According to the Sqoop docs:

    Since Sqoop breaks down export process into multiple transactions, it is possible that a failed export job may result in partial data being committed to the database. This can further lead to subsequent jobs failing due to insert collisions in some cases, or lead to duplicated data in others. You can overcome this problem by specifying a staging table via the --staging-table option which acts as an auxiliary table that is used to stage exported data. The staged data is finally moved to the destination table in a single transaction.

  3. The «MongoDB Connector for Hadoop» does not seem to force the workflow you describe. According to the docs:

    This connectivity takes the form of allowing both reading MongoDB data into Hadoop (for use in MapReduce jobs as well as other components of the Hadoop ecosystem), as well as writing the results of Hadoop jobs out to MongoDB.

  4. Indeed, as far as I understand from the «MongoDB Connector for Hadoop» examples, it would be possible to specify an org.apache.hadoop.mapred.lib.db.DBOutputFormat in your Hadoop MapReduce job to write the output to a MySQL database. Following the example from the connector repository (the JDBC wiring this requires is sketched after the snippet):

    job.setMapperClass( TokenizerMapper.class );

    job.setCombinerClass( IntSumReducer.class );
    job.setReducerClass( IntSumReducer.class );

    job.setOutputKeyClass( Text.class );
    job.setOutputValueClass( IntWritable.class );

    job.setInputFormatClass( MongoInputFormat.class );
    /* Instead of:
     *   job.setOutputFormatClass( MongoOutputFormat.class );
     * we use an OutputFormatClass that writes the job results
     * to a MySQL database. Beware that the following OutputFormat
     * will only write the *key* to the database, but the principle
     * remains the same for all output formatters */
    job.setOutputFormatClass( DBOutputFormat.class );
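
To actually write to MySQL, DBOutputFormat also needs to know how to reach the database. Below is a minimal, hypothetical sketch of that wiring using the new-API org.apache.hadoop.mapreduce.lib.db classes; the driver class, JDBC URL, credentials, table and column names are placeholder assumptions, not something mandated by the connector:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
    import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;

    public class MongoToMySqlJob {
        public static void main( String[] args ) throws IOException {
            Job job = Job.getInstance( new Configuration(), "mongo to mysql" );

            // Hypothetical MySQL connection details; the JDBC driver jar
            // must be on the job's classpath.
            DBConfiguration.configureDB( job.getConfiguration(),
                    "com.mysql.jdbc.Driver",
                    "jdbc:mysql://localhost:3306/mydb",
                    "dbuser", "dbpassword" );

            // Hypothetical target table and columns; the job's output *key*
            // must implement DBWritable and bind one value per listed column.
            DBOutputFormat.setOutput( job, "word_counts", "word", "count" );

            job.setOutputFormatClass( DBOutputFormat.class );
            // ... set mapper, reducer, key/value classes and
            //     MongoInputFormat exactly as in the snippet above ...
        }
    }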


I would recommend you take a look at Apache Pig (which runs on top of Hadoop's MapReduce). It can write output to MySQL directly (no need to use Sqoop). I used it to do what you are describing. It is possible to do an "upsert" with Pig and MySQL: use Pig's STORE command with PiggyBank's DBStorage and MySQL's INSERT ... ON DUPLICATE KEY UPDATE (http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html).
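
For concreteness, a rough sketch of such a Pig script is below. The jar paths, connection details, schema and INSERT statement are assumptions for illustration; DBStorage ships with PiggyBank, and MongoLoader with the mongo-hadoop connector (whose jars also need to be registered):

    -- Hypothetical jar locations; adjust to your environment.
    REGISTER /path/to/piggybank.jar;
    REGISTER /path/to/mysql-connector-java.jar;
    REGISTER /path/to/mongo-hadoop-pig.jar;

    -- Read from MongoDB via the mongo-hadoop Pig loader (hypothetical URI and schema).
    data = LOAD 'mongodb://localhost:27017/mydb.mycollection'
           USING com.mongodb.hadoop.pig.MongoLoader('id:chararray, name:chararray');

    -- ... transform 'data' with the usual Pig operators ...

    -- Upsert into MySQL: ON DUPLICATE KEY UPDATE turns key collisions into updates.
    STORE data INTO 'ignored' USING org.apache.pig.piggybank.storage.DBStorage(
        'com.mysql.jdbc.Driver',
        'jdbc:mysql://localhost/mydb',
        'dbuser', 'dbpassword',
        'INSERT INTO my_table (id, name) VALUES (?, ?) ON DUPLICATE KEY UPDATE name = VALUES(name)');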


Use the MongoDB Connector for Hadoop (mongo-hadoop) to read data from MongoDB and process it with Hadoop.

Link: https://github.com/mongodb/mongo-hadoop/blob/master/hive/README.md

Using this connector, you can use Pig and Hive to read data from MongoDB and process it with Hadoop.

Example of a MongoDB-backed Hive table:

  CREATE EXTERNAL TABLE TestMongoHiveTable
  (
    id STRING,
    Name STRING
  )
  STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
  WITH SERDEPROPERTIES('mongo.columns.mapping'='{"id":"_id","Name":"Name"}')
  LOCATION '/tmp/test/TestMongoHiveTable/'
  TBLPROPERTIES('mongo.uri'='mongodb://{MONGO_DB_IP}/userDetails.json');

Once the data is in a Hive table, you can use Sqoop or Pig to export it to MySQL.

Here is the flow:

MongoDB -> process the data using the MongoDB Hadoop connector (Pig) -> store it in a Hive table/HDFS -> export the data to MySQL using Sqoop.
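
For the last step, assuming the processed results have been written to a plain Hive table whose data lives in an HDFS directory (rather than behind the MongoStorageHandler), the Sqoop export could look roughly like the command below. The connection details, table names and paths are placeholders; --staging-table is the option mentioned in the Sqoop docs quoted above, and '\001' is Hive's default field delimiter:

  sqoop export \
    --connect jdbc:mysql://mysql-host:3306/userdb \
    --username dbuser --password dbpassword \
    --table user_details \
    --staging-table user_details_staging \
    --export-dir /user/hive/warehouse/processed_users \
    --input-fields-terminated-by '\001'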