
Cosmos DB Mongo API How to manage "Request Rate is Large" condition


Every request against Cosmos DB consumes Request Units (RUs). Your insert requests exceeded the provisioned RU throughput, which is why error code 16500 occurred.

Applications that exceed the provisioned request units for a collection are throttled until the rate drops below the reserved level. When throttling occurs, the backend preemptively ends the request with error code 16500 (Too Many Requests). By default, the API for MongoDB automatically retries up to 10 times before returning the Too Many Requests error to the client.

You can find more details in the official documentation.

You can try the following approaches to resolve the issue:

  1. Import your data in smaller batches so that the request rate stays below the provisioned throughput.

  2. Add your own retry logic to your application (see the sketch after this list).

  3. Increase the reserved throughput for the collection. Of course, this also increases your cost.
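
For approaches 1 and 2, here is a minimal sketch using pymongo. The connection string, database/collection names, batch size, and retry limits are all placeholder assumptions; the idea is simply to insert in small batches and back off whenever a write is rejected with error code 16500.

```python
import time

from pymongo import MongoClient
from pymongo.errors import BulkWriteError, OperationFailure

# Placeholder connection string, database and collection names.
client = MongoClient("<your-cosmosdb-connection-string>")
collection = client["mydatabase"]["mycollection"]


def insert_with_retry(docs, max_attempts=10, base_delay=1.0):
    """Insert a batch, retrying only the documents rejected with error 16500."""
    remaining = list(docs)
    delay = base_delay
    for _ in range(max_attempts):
        if not remaining:
            return
        try:
            collection.insert_many(remaining, ordered=False)
            return
        except BulkWriteError as exc:
            errors = exc.details.get("writeErrors", [])
            throttled = [e["index"] for e in errors if e.get("code") == 16500]
            if len(throttled) != len(errors):
                raise  # some failures were not throttling; surface them
            # Keep only the rejected documents and try again after a pause.
            remaining = [remaining[i] for i in throttled]
        except OperationFailure as exc:
            if exc.code != 16500:
                raise
        time.sleep(delay)
        delay = min(delay * 2, 30)  # exponential backoff, capped at 30 seconds
    raise RuntimeError("Could not insert all documents within the retry budget")


# Import in small batches (approach 1) while retrying on throttling (approach 2).
# The batch size of 100 is an assumption; tune it to your provisioned RU/s.
data = [{"value": i} for i in range(10_000)]
for start in range(0, len(data), 100):
    insert_with_retry(data[start:start + 100])
```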

You can refer to this article.

Hope this helps.


Update:

It looks like your documents are not uniquely identifiable, so the "_id" attribute that Cosmos DB generates automatically cannot be used to determine which documents have already been inserted and which have not.

I suggest increasing the throughput setting, emptying the database, and then bulk importing the data.
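
If you prefer to change the throughput programmatically rather than through the portal, the API for MongoDB exposes extension commands such as UpdateCollection. A minimal sketch with pymongo, assuming a collection named "mycollection" and a target of 2000 RU/s; verify the action name and fields against the current extension-command documentation before relying on them.

```python
from pymongo import MongoClient

# Placeholder connection string and database name.
client = MongoClient("<your-cosmosdb-connection-string>")
db = client["mydatabase"]

# Raise the provisioned throughput before the bulk import.
db.command({
    "customAction": "UpdateCollection",
    "collection": "mycollection",
    "offerThroughput": 2000,  # target RU/s; pick a value that matches your import rate
})
```

Remember to scale the throughput back down once the import finishes, since the reserved RU/s drives the cost.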

Considering the cost, please refer to this document for setting an appropriate RU level.

Alternatively, you could test the bulk import operation locally with the Cosmos DB Emulator.