
elasticsearch / kibana errors "Data too large, data for [@timestamp] would be larger than limit"


Clearing the cache alleviates the symptoms for now.

http://www.elastic.co/guide/en/elasticsearch/reference/current/indices-clearcache.html

Clear a single index

curl -XPOST 'http://localhost:9200/twitter/_cache/clear'

Clear multiple indices

curl -XPOST 'http://localhost:9200/kimchy,elasticsearch/_cache/clear'

Clear all indices

curl -XPOST 'http://localhost:9200/_cache/clear'

Or, as suggested by a user on IRC (this one seems to work best):

curl -XPOST 'http://localhost:9200/_cache/clear' -d '{ "fielddata": "true" }'
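Before and after clearing, it helps to see how much memory fielddata is actually holding, and how close it is to the circuit-breaker limit that triggers the "Data too large" error. A minimal sketch, assuming the cluster is reachable at localhost:9200 (the `ES_HOST` variable is my own convention, not an Elasticsearch setting); it builds the two URLs to query:

```shell
# Sketch: endpoints for checking fielddata memory before/after a clear.
# Assumes the cluster runs on localhost:9200.
ES_HOST="${ES_HOST:-http://localhost:9200}"

# Per-node, per-field fielddata memory usage:
#   curl -s "$FIELDDATA_URL"
FIELDDATA_URL="${ES_HOST}/_cat/fielddata?v"

# Circuit-breaker stats -- compare "estimated_size" against "limit_size",
# the limit the "Data too large" error is tripping:
#   curl -s "$BREAKER_URL"
BREAKER_URL="${ES_HOST}/_nodes/stats/breaker"

echo "$FIELDDATA_URL"
echo "$BREAKER_URL"
```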

Update: these errors went away as soon as the cluster was moved to a faster hypervisor.


The problem is the amount of heap memory given to Elasticsearch via ES_JAVA_OPTS.

Try giving it more memory, e.g.: ES_JAVA_OPTS="-Xmx2g -Xms2g".
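For context, a sketch of how that heap setting is typically applied (the 2g values are examples only; the usual guidance is to set Xms equal to Xmx, and to keep the heap well under the node's available RAM):

```shell
# Example only -- pick heap values appropriate for your node.
# Option 1: environment variable, picked up by the startup scripts:
export ES_JAVA_OPTS="-Xms2g -Xmx2g"

# Option 2: set it permanently in config/jvm.options instead:
#   -Xms2g
#   -Xmx2g
```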


The _cache API also supports clearing individual caches (fielddata, query, request, etc.) through ordinary URI query parameters.

Prototype (note the placeholder <cache type>):

$ curl -XPOST \
    -H "Content-Type: application/json" \
    'http://localhost:9200/_cache/clear?<cache type>=true'

Examples

$ curl -XPOST \
    -H "Content-Type: application/json" \
    'http://localhost:9200/_cache/clear?fielddata=true'

$ curl -XPOST \
    -H "Content-Type: application/json" \
    'http://localhost:9200/_cache/clear?request=true'

NOTE: You can also target specific indices; substitute your index name for <index> below:

$ curl -XPOST \
    -H "Content-Type: application/json" \
    'http://localhost:9200/<index>/_cache/clear?request=true'
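To confirm that a targeted clear actually freed memory, the index stats API exposes the cache sizes. A sketch, assuming a hypothetical index named logs-2015.01.01 (substitute your own); it builds the stats URL to query:

```shell
ES_HOST="${ES_HOST:-http://localhost:9200}"
INDEX="logs-2015.01.01"   # hypothetical index name -- substitute your own

# Fielddata and request-cache sizes for that index; the reported
# "memory_size_in_bytes" should drop to (near) zero after a successful clear:
#   curl -s "$STATS_URL"
STATS_URL="${ES_HOST}/${INDEX}/_stats/fielddata,request_cache"
echo "$STATS_URL"
```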
