Nginx memcached with fallback to remote service


My guesses on what's going on with your configuration

1. 499 codes

HTTP 499 is nginx's custom status code meaning the client closed the connection before receiving the response (http://lxr.nginx.org/source/src/http/ngx_http_request.h#0120)

We can easily reproduce it. Just run

nc -k -l 172.17.0.6 11211

and curl your resource. curl will hang for a while; press Ctrl+C and you'll see this code in your access logs

2. upstream server temporarily disabled while connecting to upstream

It means nginx didn't manage to reach your memcached server and removed it from the pool of upstreams. It suffices to shut down both memcached servers and you'll constantly see this in your error logs (I see it every time with error_log ... info). Since you do see these messages, your assumption that nginx can freely communicate with the memcached servers doesn't seem to hold.

Consider explicitly setting memcached_bind (http://nginx.org/en/docs/http/ngx_http_memcached_module.html#memcached_bind) and use the -b option of telnet to make sure you're testing the memcached servers' availability from the same source address nginx uses
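For instance (a sketch only; the bind address here is an assumption you'd replace with the address nginx actually uses):

```nginx
location / {
    set $memcached_key "$uri?$args";
    # force outgoing connections to memcached to originate
    # from this local address
    memcached_bind 172.17.0.1;
    memcached_pass http_memcached;
}
```

and then test availability from the same source address, e.g. telnet -b 172.17.0.1 172.17.0.6 11211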

3. nginx can reach memcached successfully but can't write or read from it

Nginx can only read from memcached via its built-in module (http://nginx.org/en/docs/http/ngx_http_memcached_module.html):

The ngx_http_memcached_module module is used to obtain responses from a memcached server. The key is set in the $memcached_key variable. A response should be put in memcached in advance by means external to nginx.
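So the data has to be written by an external client. A minimal sketch of such a writer, assuming a config with set $memcached_key "$uri?$args" (so the key must be built the same way; both helper names are mine):

```python
def nginx_memcached_key(uri, args=""):
    # must match nginx's: set $memcached_key "$uri?$args";
    return "{}?{}".format(uri, args)

def memcached_set_command(key, value, flags=0, exptime=900):
    # build the memcached ASCII-protocol 'set' command; sending these
    # bytes to a memcached server over plain TCP stores the value
    data = value.encode()
    header = "set {} {} {} {}\r\n".format(key, flags, exptime, len(data))
    return header.encode() + data + b"\r\n"
```

For example, memcached_set_command(nginx_memcached_key("/"), "cache") produces the same set /? 0 900 5 command that's typed into telnet later in this answer.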

4. overall architecture

It's not fully clear from your question how the overall schema is supposed to work.

  • nginx's upstream uses weighted round-robin by default. That means only one of your memcached servers will be queried for a given request. You can change this by setting memcached_next_upstream not_found so a missing key is treated as an error and the other servers will be polled too. That's probably OK for a farm of 2 servers, but it's unlikely to be what you want for 20 servers

  • the same is ordinarily the case for memcached client libraries: they pick a server out of the pool according to some hashing scheme, so your key would end up on only 1 server out of the pool
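That client-side hashing can be sketched as follows (a toy modulo hash over example addresses; real clients typically use consistent hashing such as ketama, but the effect is the same: one key maps to one server):

```python
import hashlib

SERVERS = ["127.0.0.1:11211", "127.0.0.1:11212"]  # example pool

def pick_server(key, servers=SERVERS):
    # hash the key and map it onto exactly one server of the pool
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]
```

Every call with the same key returns the same single server, so a value stored through such a client never lands on the other one.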

5. what to do

I've managed to set up a similar configuration in 10 minutes on my local box, and it works as expected. To simplify debugging I'd get rid of the Docker containers to avoid networking complications, run 2 memcached servers on different ports in single-threaded mode with the -vv option to see when requests reach them (memcached -p 11211 -U 0 -vv), and then play with tail -f and curl to see what's really happening in your case.

6. working solution

nginx config:

https and http/1.1 are not used here but it doesn't matter

upstream http_memcached {
    server 127.0.0.1:11211;
    server 127.0.0.1:11212;
}

upstream remote {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name server.lan;

    access_log /var/log/nginx/server.access.log;
    error_log /var/log/nginx/server.error.log info;

    location / {
        set $memcached_key "$uri?$args";
        memcached_next_upstream not_found;
        memcached_pass http_memcached;
        error_page     404 = @remote;
    }

    location @remote {
        internal;
        access_log /var/log/nginx/server.fallback.access.log;
        proxy_pass http://remote;
        proxy_set_header Connection "";
    }
}

server.py:

this is my dummy server (python):

from random import randint

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello: {}\n'.format(randint(1, 100000))

This is how to run it (you just need to install Flask first)

FLASK_APP=server.py flask run -p 8080

filling in my first memcached server:

$ telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set /? 0 900 5
cache
STORED
quit
Connection closed by foreign host.

checking:

note that we get a result every time although we stored data only in the first server

$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache

this one is not in the cache so we'll get a response from server.py

$ curl http://server.lan/?q=1 && echo
Hello: 32337

whole picture:

the 2 windows on the right are

memcached -p 11211 -U 0 -vv

and

memcached -p 11212 -U 0 -vv
