
Elastic search could not write all entries: May be es was overloaded


This occurs because bulk requests are arriving faster than the Elasticsearch cluster can process them, so the bulk request queue fills up.

The default bulk queue size is 200.
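To confirm this is what is happening, you can watch the bulk thread pool while the job runs (this assumes a node reachable on localhost:9200; a climbing rejected count is the telltale sign):

    curl 'localhost:9200/_cat/thread_pool/bulk?v&h=node_name,active,queue,rejected'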

Ideally, you should handle this on the client side:

1) Reduce the number of spark-submit commands running concurrently.

2) Retry in case of rejections by tuning es.batch.write.retry.count and es.batch.write.retry.wait.

Example:

es.batch.write.retry.wait = "60s"
es.batch.write.retry.count = 6
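For instance, here is a minimal sketch of passing those settings when writing a DataFrame through the elasticsearch-hadoop connector (the index name and input path are made up; adjust them for your job):

    import org.apache.spark.sql.SparkSession
    import org.elasticsearch.spark.sql._  // adds saveToEs to DataFrames

    val spark = SparkSession.builder()
      .appName("es-bulk-write-with-retries")
      .getOrCreate()

    // Hypothetical input; replace with your own DataFrame.
    val df = spark.read.json("hdfs:///data/events.json")

    // Retry rejected bulk batches instead of failing the whole job:
    // wait 60s between attempts, give up after 6 retries.
    df.saveToEs("events/doc", Map(
      "es.batch.write.retry.count" -> "6",
      "es.batch.write.retry.wait"  -> "60s"
    ))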

On the Elasticsearch cluster side:

1) Check whether there are too many shards per index and try reducing the count.
This blog has a good discussion of criteria for tuning the number of shards.

2) As a last resort, increase thread_pool.bulk.queue_size (see the snippet below).

Check this blog for an extensive discussion of bulk rejections.
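If you do go that route, note that the queue size is a static setting, so it has to go in elasticsearch.yml on each node, followed by a restart (500 below is just an illustrative value):

    # elasticsearch.yml
    thread_pool.bulk.queue_size: 500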


The bulk queue in your ES cluster is hitting its capacity (200). Try increasing it. See this page for how to change the bulk queue capacity:

https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html

Also check this other SO answer, where the OP had a very similar issue that was fixed by increasing the bulk pool size:

Rejected Execution of org.elasticsearch.transport.TransportService Error