How to maximize efficiency in this complex data transfer scenario


The first potential problem is easy to overcome. Calculate a hash of each trip and store it in Mongo, with a unique index on that field; then every incoming document can be checked against the existing hashes. This makes duplicate detection easy and very fast. Keep in mind that the fields you hash should not include anything volatile, such as the time of sending.
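A minimal sketch of this approach with pymongo, assuming hypothetical collection and field names (`trips`, `trip_hash`) and a volatile `sent_at` field that must be excluded from the hash:

```python
import hashlib
import json
from pymongo import MongoClient, ASCENDING
from pymongo.errors import DuplicateKeyError

# Hypothetical connection string and database/collection names.
client = MongoClient("mongodb://localhost:27017")
trips = client["tracking"]["trips"]

# A unique index makes MongoDB reject duplicate hashes for us.
trips.create_index([("trip_hash", ASCENDING)], unique=True)

def trip_hash(trip: dict) -> str:
    """Hash only the stable fields; exclude volatile ones like 'sent_at'."""
    stable = {k: v for k, v in trip.items() if k != "sent_at"}
    canonical = json.dumps(stable, sort_keys=True, default=str)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def insert_if_new(trip: dict) -> bool:
    """Insert the trip; return False if the same trip was already stored."""
    trip["trip_hash"] = trip_hash(trip)
    try:
        trips.insert_one(trip)
        return True
    except DuplicateKeyError:
        return False
```

Letting the unique index enforce deduplication avoids a separate find-then-insert round trip and is race-free under concurrent writers.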

Second problem: 17,500,000 documents/day is roughly 200/second (17,500,000 / 86,400 ≈ 203). That may sound scary, but in reality it is not much for a decent server, and it is certainly not a problem for MongoDB.

It is hard to say how to make it more efficient, and I doubt you should think about that now. Give it a try, do something, see what is not performing well, and come back with specific questions.

P.S. Writing this here rather than answering in the comments. You have to understand that the question is extremely vague. No one knows what you mean by a trip document or how big it is. It could be 1 KB, 10 MB, or 100 MB (which is bigger than MongoDB's 16 MB document limit). No one knows. When I said that ~200 documents/sec is not a problem, I did not say that this exact figure is a hard cap, so even two or three times more still sounds feasible.

You have to try it yourself. Take an average Amazon instance and see how many of YOUR documents (create documents close to your real size and structure) it can save per second, as in the sketch below. If it cannot handle the load, measure how much it can, or check whether a bigger Amazon instance can.
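A rough throughput benchmark could look like the sketch below, assuming a local MongoDB and a made-up document shape (`make_doc` is a placeholder you would replace with documents matching your real size and structure):

```python
import time
from pymongo import MongoClient

# Hypothetical database/collection names; adjust the document template
# to match the size and structure of your real trip documents.
client = MongoClient("mongodb://localhost:27017")
coll = client["benchmark"]["trips"]

def make_doc(i: int) -> dict:
    # Placeholder trip document; make this resemble your real data.
    return {
        "trip_id": i,
        "driver": f"driver-{i % 1000}",
        "route": [{"lat": 52.0 + i * 1e-6, "lon": 13.0 + i * 1e-6}] * 50,
    }

N = 100_000
docs = [make_doc(i) for i in range(N)]

start = time.perf_counter()
# Batched, unordered inserts are much faster than one insert_one per document.
coll.insert_many(docs, ordered=False)
elapsed = time.perf_counter() - start
print(f"{N / elapsed:,.0f} docs/second")
```

If the sustained rate comfortably exceeds ~200 docs/second (plus headroom for spikes), a single instance is likely enough.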

I gave you a rough estimate that this is possible, and I had no idea that you also want to "include admins using MongoDB, to update, select". Did you mention that in your question?