Exception while connecting to mongodb in spark



I think I've found the issue: mongodb-hadoop has a "static" modifier on its BSON encoder/decoder instances in core/src/main/java/com/mongodb/hadoop/input/MongoInputSplit.java. When Spark runs in multithreaded mode, all the threads try to deserialise using the same encoder/decoder instances, which predictably has bad results.
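To illustrate the general fix pattern (not the actual mongo-hadoop code — StatefulDecoder here is a hypothetical stand-in for a stateful BSON decoder), the problem is one shared instance of a non-thread-safe object; giving each thread its own instance, e.g. via ThreadLocal, avoids the corruption:

```java
// Hypothetical sketch of the hazard: a stateful (hence non-thread-safe)
// decoder shared via a static field, versus per-thread instances.
public class ThreadLocalDecoderDemo {

    // Stands in for a BSON decoder: it keeps mutable state between calls.
    static class StatefulDecoder {
        private final StringBuilder buffer = new StringBuilder();

        String decode(String input) {
            buffer.setLength(0);   // shared mutable state: unsafe if shared
            buffer.append(input);
            return buffer.toString();
        }
    }

    // Unsafe: every thread shares one instance (the bug pattern described above).
    static final StatefulDecoder SHARED = new StatefulDecoder();

    // Safe: each thread lazily gets its own private instance.
    static final ThreadLocal<StatefulDecoder> PER_THREAD =
        ThreadLocal.withInitial(StatefulDecoder::new);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                String in = Thread.currentThread().getName() + "-" + i;
                String out = PER_THREAD.get().decode(in);   // safe variant
                if (!out.equals(in)) {
                    throw new IllegalStateException("corrupted: " + out);
                }
            }
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("per-thread decoders: no corruption");
    }
}
```

Swapping the shared SHARED instance into the loop would intermittently interleave two threads' buffers, which is the same failure mode as the shared static encoder/decoder in MongoInputSplit.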

A patch is on my GitHub here (I have submitted a pull request upstream).

I'm now able to run an 8 core multithreaded Spark->mongo collection count() from Python!


I ran into the same problem. As a workaround I abandoned the newAPIHadoopRDD approach and implemented a parallel load mechanism based on defining intervals on the document id and then loading each partition in parallel. The idea is to implement the following mongo shell code using the MongoDB Java driver:

// Compute min and max id of the collection
db.coll.find({},{_id:1}).sort({_id: 1}).limit(1)
   .forEach(function(doc) {min_id = doc._id})
db.coll.find({},{_id:1}).sort({_id: -1}).limit(1)
   .forEach(function(doc) {max_id = doc._id})

// Compute id ranges
curr_id = min_id
ranges = []
page_size = 1000
// to avoid the use of Comparable in the Java translation
while(! curr_id.equals(max_id)) {
    prev_id = curr_id
    db.coll.find({_id : {$gte : curr_id}}, {_id : 1})
       .sort({_id: 1})
       .limit(page_size + 1)
       .forEach(function(doc) {
                   curr_id = doc._id
               })
    ranges.push([prev_id, curr_id])
}

Now we can use the ranges to perform fast queries for collection fragments. Note that the last fragment needs to be treated differently, with only a min constraint, to avoid losing the last document of the collection.

db.coll.find({_id : {$gte : ranges[1][0], $lt : ranges[1][1]}})
db.coll.find({_id : {$gte : ranges[2][0]}})

I implemented this as a Java method 'LinkedList<Range> computeIdRanges(DBCollection coll, int rangeSize)' for a simple Range POJO, and then I parallelize the collection and transform it with flatMapToPair to generate an RDD similar to the one returned by newAPIHadoopRDD.

List<Range> ranges = computeIdRanges(coll, DEFAULT_RANGE_SIZE);
JavaRDD<Range> parallelRanges = sparkContext.parallelize(ranges, ranges.size());
JavaPairRDD<Object, BSONObject> mongoRDD =
    parallelRanges.flatMapToPair(
     new PairFlatMapFunction<MongoDBLoader.Range, Object, BSONObject>() {
       ...
       BasicDBObject query = range.max.isPresent() ?
           new BasicDBObject("_id", new BasicDBObject("$gte", range.min)
                            .append("$lt", range.max.get()))
         : new BasicDBObject("_id", new BasicDBObject("$gte", range.min));
       ...
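The computeIdRanges helper itself isn't shown above. A driver-free sketch of the same pagination logic follows — it walks an already-sorted list of ids instead of issuing the repeated paged queries from the shell script, and Range is a hypothetical POJO matching the one described in the answer (min inclusive, empty max for the open-ended last fragment):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

public class IdRanges {

    // Hypothetical Range POJO: min is inclusive, max is exclusive;
    // an empty max marks the open-ended last fragment.
    static class Range {
        final Object min;
        final Optional<Object> max;
        Range(Object min, Optional<Object> max) { this.min = min; this.max = max; }
    }

    // Same idea as the shell script: step through the sorted ids
    // rangeSize at a time, recording [prev, curr) pairs, and leave the
    // final range open-ended so the last document is never lost.
    static List<Range> computeIdRanges(List<Object> sortedIds, int rangeSize) {
        List<Range> ranges = new ArrayList<>();
        if (sortedIds.isEmpty()) return ranges;
        int i = 0;
        while (i + rangeSize < sortedIds.size()) {
            ranges.add(new Range(sortedIds.get(i),
                                 Optional.of(sortedIds.get(i + rangeSize))));
            i += rangeSize;
        }
        // last fragment: min constraint only
        ranges.add(new Range(sortedIds.get(i), Optional.empty()));
        return ranges;
    }
}
```

In the DBCollection-backed version, the list walk is replaced by sorted, limited queries on _id exactly as in the shell script; the range boundaries that come out are the same.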

You can play with the size of the ranges and the number of slices used to parallelize, to control the granularity of parallelism.

I hope that helps,

Greetings!

Juan Rodríguez Hortalá


I had the same combination of exceptions after importing a BSON file using mongorestore. Calling db.collection.reIndex() solved the problem for me.