Java heap space problems with Elasticsearch
allocation
A good rule of thumb is to keep the number of shards per node below 20-25 per GB of heap the node has configured. Example: a node with a 30GB heap should therefore have at most 600-750 shards. Shards should be no larger than 50GB; 25GB is a good target for large shards. Keep shard size below 40% of the data node's disk size.
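The rule of thumb above can be sketched as a quick shell calculation; the 30GB heap value is just an example, so substitute the real heap size of your node (visible in the `heap.max` column of `_cat/nodes`):

```shell
# Example heap size in GB -- replace with your node's actual configured heap.
heap_gb=30

# Rule of thumb: at most 20-25 shards per GB of configured heap.
min_limit=$((heap_gb * 20))
max_limit=$((heap_gb * 25))

echo "shard limit for ${heap_gb}GB heap: ${min_limit}-${max_limit}"
```

For a 30GB heap this prints a limit of 600-750 shards, matching the example above.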
allocation per node
curl 'localhost:9200/_cat/allocation?v'
https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster
Lock the process address space into RAM to avoid swapping
Add this line to config/elasticsearch.yml
bootstrap.memory_lock: true
https://www.elastic.co/guide/en/elasticsearch/reference/current/_memory_lock_check.html
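After restarting the node, you can check whether the memory lock actually took effect via the nodes info API (assuming Elasticsearch is reachable on localhost:9200; adjust the host if yours differs):

```shell
# Query each node's process info, filtered down to the mlockall flag.
curl -s 'localhost:9200/_nodes?filter_path=**.mlockall'
# A node with the lock active reports "mlockall": true; false usually means
# the OS denied the lock (e.g. missing memlock ulimit for the ES user).
```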