Validation Failed: 1: no requests added in bulk indexing ElasticSearch


The bulk API of Elasticsearch uses a special syntax, which is actually made of JSON documents written on single lines. Take a look at the documentation.

The syntax is pretty simple. For indexing, creating, and updating you need two single-line JSON documents: the first line tells the action, the second gives the document to index/create/update. To delete a document, only the action line is needed. For example (from the documentation):

{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }
{ "create" : { "_index" : "test", "_type" : "type1", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_type" : "type1", "_index" : "index1"} }
{ "doc" : {"field2" : "value2"} }
{ "delete" : { "_index" : "test", "_type" : "type1", "_id" : "2" } }

Don't forget to end your file with a new line. Then, to call the bulk API, use the command:

curl -s -XPOST localhost:9200/_bulk --data-binary "@requests"

From the documentation:

If you’re providing text file input to curl, you must use the --data-binary flag instead of plain -d
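As a quick sketch of the two points above (the file name `requests` matches the curl command; the body is the documentation's first index action), a heredoc is a convenient way to guarantee that every line, including the last, ends with a newline:

```shell
# Write a minimal bulk body; a heredoc terminates every line,
# including the final one, with \n, which _bulk requires.
cat > requests <<'EOF'
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }
EOF
# Send it without newline stripping (plain -d would strip them):
# curl -s -XPOST localhost:9200/_bulk --data-binary "@requests"
```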


I had a similar issue in that I wanted to delete specific documents of a specific type, and via the above answer I finally managed to get my simple bash script working!

I have a file with one document_id per line (document_id.txt), and using the bash script below I can delete documents of a certain type with the mentioned document_ids.

This is what the file looks like:

c476ce18803d7ed3708f6340fdfa34525b20ee90
5131a30a6316f221fe420d2d3c0017a76643bccd
08ebca52025ad1c81581a018febbe57b1e3ca3cd
496ff829c736aa311e2e749cec0df49b5a37f796
87c4101cb10d3404028f83af1ce470a58744b75c
37f0daf7be27cf081e491dd445558719e4dedba1

The bash script looks like this:

#!/bin/bash
es_cluster="http://localhost:9200"
index="some-index"
doc_type="some-document-type"

for doc_id in `cat document_id.txt`
do
    request_string="{\"delete\" : { \"_type\" : \"${doc_type}\", \"_id\" : \"${doc_id}\" } }"
    echo -e "${request_string}\r\n\r\n" | curl -s -XPOST "${es_cluster}/${index}/${doc_type}/_bulk" --data-binary @-
    echo
done

The trick, after lots of frustration, was to use the -e option of echo to append \r\n\r\n to the output before piping it into curl.

And then in curl I have the --data-binary option set to stop it stripping out the newlines needed by the _bulk endpoint, followed by the @- option to get it to read from stdin!
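A variant of the same loop, shown here only as a sketch with made-up ids: building the whole payload with printf sidesteps echo portability issues, since printf interprets escape sequences the same way in every shell:

```shell
#!/bin/bash
# Stand-in ids; in the answer above these come from document_id.txt.
printf '%s\n' id-one id-two > document_id.txt

# Accumulate one newline-terminated delete action per id.
payload=""
while IFS= read -r doc_id; do
    payload="${payload}{\"delete\" : { \"_id\" : \"${doc_id}\" } }"$'\n'
done < document_id.txt

printf '%s' "$payload"
# Then pipe it to curl exactly as in the script above:
# printf '%s' "$payload" | curl -s -XPOST \
#   "http://localhost:9200/some-index/some-document-type/_bulk" --data-binary @-
```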


It was a weird mistake in my case. I was creating the bulk request object and then clearing it before sending it to Elasticsearch.

The line that was causing the issue:

bulkRequest.requests().clear();

Clearing the request list before the bulk request was executed left it empty, which is exactly what triggers the "no requests added" validation error.