
Docker container http requests limit


So, it wasn't a problem with either Docker or Elasticsearch. Just to recap: the same script throwing PUT requests at an Elasticsearch instance running locally worked fine, but when pointed at Elasticsearch running in a container it failed after a few thousand documents (around 20k). Note that the overall number of documents was roughly 800k.

So, what happened? When you set up something running on localhost and make a request to it (in this case a PUT request), that request goes through the loopback interface. In practice this means the traffic never touches a real network interface, which makes it a lot faster.
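The difference is easy to measure. Here is a minimal sketch (in Python, which is an assumption on my part; the post never names the script's language) that times raw TCP connection setup against any host/port pair, so you can compare a natively bound service with a container-published port:

```python
import socket
import time

def avg_connect_time(host: str, port: int, n: int = 200) -> float:
    """Average time (seconds) to complete a TCP handshake with host:port."""
    total = 0.0
    for _ in range(n):
        start = time.perf_counter()
        conn = socket.create_connection((host, port))
        total += time.perf_counter() - start
        conn.close()
    return total / n

# Hypothetical usage: run once against the local setup and once
# against the container-published port, both on ES's default 9200.
print(avg_connect_time("localhost", 9200))
```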

When the Docker container was set up, its ports were bound to the host. Although the script still makes requests to localhost on the desired port, a TCP connection now gets created between the host and the Docker container through the docker0 bridge interface. This comes at the expense of two things:

  • the time it takes to set up a TCP connection
  • connections lingering in the TIME_WAIT state after they are closed (see the sketch below)
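You can actually watch the second cost pile up while the script runs. A minimal sketch, assuming the third-party psutil package (the post doesn't mention any particular tooling; this is just one way to observe it):

```python
import time

import psutil  # third-party: pip install psutil

# Print the number of sockets sitting in TIME_WAIT, once per second.
# Run this next to the indexing script to watch local ports being used up.
while True:
    conns = psutil.net_connections(kind="tcp")
    in_time_wait = sum(1 for c in conns if c.status == psutil.CONN_TIME_WAIT)
    print(f"TIME_WAIT sockets: {in_time_wait}")
    time.sleep(1)
```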

This is actually a more realistic scenario. We set up Elasticsearch on another machine, ran the exact same test, and got, as expected, the same result.

The problem was that we were sending the requests and creating a new TCP connection for each of them. Due to the way TCP works, connections cannot be closed immediately: each closed connection lingers in the TIME_WAIT state, keeping its local port occupied for a while. This meant we kept consuming ephemeral ports until none were left to use, because the rate of creation was higher than the rate at which they were actually released.
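The original script isn't shown here, but in Python with the requests library (both assumptions on my part) the failing pattern looks roughly like this, using ES's default port and a hypothetical index name:

```python
import requests

docs = [{"value": i} for i in range(800_000)]  # stand-in for the real documents

# Each bare requests.put() opens a fresh TCP connection, sends one document,
# and closes the connection again, leaving it behind in TIME_WAIT.
for i, doc in enumerate(docs):
    requests.put(f"http://localhost:9200/myindex/_doc/{i}", json=doc)
```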

Three suggestions to fix this:

  1. Pause requests every once in a while. Put a sleep after every X requests, giving TIME_WAIT sockets time to expire and connections to actually close.
  2. Send the Connection: close header: an option for the sender to signal that the connection will be closed after completion of the response. (Options 1 and 2 are sketched after this list.)
  3. Reuse connection(s).
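For completeness, options 1) and 2) would look something like this (same hypothetical endpoint as above; the batch size and sleep interval are made-up numbers that would need tuning):

```python
import time

import requests

docs = [{"value": i} for i in range(800_000)]  # stand-in for the real documents

for i, doc in enumerate(docs):
    # Option 2: the Connection: close header asks the server to close the
    # connection after responding, which typically leaves the TIME_WAIT
    # socket on the server side instead of eating the client's local ports.
    requests.put(
        f"http://localhost:9200/myindex/_doc/{i}",
        json=doc,
        headers={"Connection": "close"},
    )

    # Option 1: alternatively, pause every X requests so TIME_WAIT
    # sockets get a chance to expire (it lasts 60 seconds on Linux).
    if i > 0 and i % 10_000 == 0:
        time.sleep(60)
```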

I ended up going with option 3) and rewrote my colleague's script to reuse the same TCP connection, along the lines of the sketch below.
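The rewrite itself isn't reproduced here, but with Python's requests library (still an assumption) connection reuse comes down to routing every request through a single Session, which keeps one pooled keep-alive connection open:

```python
import requests

docs = [{"value": i} for i in range(800_000)]  # stand-in for the real documents

# A Session keeps the underlying TCP connection alive across requests,
# so the handshake happens once and no TIME_WAIT sockets pile up.
session = requests.Session()

for i, doc in enumerate(docs):
    resp = session.put(f"http://localhost:9200/myindex/_doc/{i}", json=doc)
    resp.raise_for_status()
```

This is also the reason dedicated Elasticsearch client libraries maintain a connection pool by default instead of dialing a new connection per request.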