
Bad gateway 502 after small load test on fastcgi-mono-server through nginx and ServiceStack


EDIT: I do see in the original question that there were no problems running under Linux; however, I was facing difficulties on Linux as well under "high load" scenarios (i.e. 50+ concurrent requests), so this might apply to OS X as well...

I dug a little deeper into this problem and found a solution for my setup - I'm no longer receiving 502 Bad Gateway errors when load testing my simple hello world application. I tested everything on Ubuntu 13.10 with a fresh compile of Mono 3.2.3 installed in /opt/mono.
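
For completeness, a build from a Mono source tarball into /opt/mono typically looks something like this (the exact steps and tarball name are illustrative and may differ from what I actually ran):

# Rough sketch of compiling Mono into /opt/mono
# (tarball name/version here is illustrative - use your own source)
tar xjf mono-3.2.3.tar.bz2
cd mono-3.2.3
./configure --prefix=/opt/mono
make
sudo make install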

When you start fastcgi-mono-server4 with "/verbose=True /printlog=True", you will notice the following output:

Root directory: /some/path/you/defined
Parsed unix:/tmp/nginx-1.sockets as URI unix:/tmp/nginx-1.sockets
Listening on file /tmp/nginx-1.sockets with default permissions
Max connections: 1024
Max requests: 1024

The important lines are "Max connections" and "Max requests". These basically tell how many active TCP connections and requests the mono-fastcgi server will be able to handle - in this case 1024.
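
For reference, a minimal command to start the server in this verbose mode and check those limits for your own setup could look like this (the application root and socket path are placeholders taken from the examples in this answer):

# Start fastcgi-mono-server4 verbosely to inspect its connection/request limits
# (application root and socket are placeholders - use your own paths)
/opt/mono/bin/fastcgi-mono-server4 /verbose=True /printlog=True \
    /applications=/:/some/path/you/defined \
    /socket=unix:/tmp/nginx-1.sockets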

My NGINX configuration read:

worker_processes 4;

events {
    worker_connections  1024;
}

So I have 4 workers, each of which can hold 1024 connections. Thus NGINX happily accepts 4096 concurrent connections, which are then sent to mono-fastcgi (which only wishes to handle 1024 connections). Therefore, mono-fastcgi is "protecting itself" and stops serving requests. There are two solutions to this:

  1. Lower the amount of requests that NGINX can accept
  2. Increase your fastcgi upstream pool

Option 1 is trivially solved by changing the NGINX configuration to read something like:

worker_processes 4;            # <-- or 1 here

events {
    worker_connections  256;   # <-- if 1 above, then 1024 here
}

However, this very likely means that you won't be able to max out the resources on your machine.

The solution to option 2 is a bit trickier. First, mono-fastcgi must be started multiple times. For this I created the following script (placed inside the website directory that should be served, since it relies on `pwd`):

function startFastcgi {
    /opt/mono/bin/fastcgi-mono-server4 /loglevels=debug /printlog=true /multiplex=false /applications=/:`pwd` /socket=$1 &
}

startFastcgi 'unix:/tmp/nginx-0.sockets'
startFastcgi 'unix:/tmp/nginx-1.sockets'
startFastcgi 'unix:/tmp/nginx-2.sockets'
startFastcgi 'unix:/tmp/nginx-3.sockets'

chmod 777 /tmp/nginx-*

This starts 4 mono-fastcgi workers that can each accept 1024 connections. NGINX should then be configured something like this:

upstream servercom {
    server unix:/tmp/nginx-0.sockets;
    server unix:/tmp/nginx-1.sockets;
    server unix:/tmp/nginx-2.sockets;
    server unix:/tmp/nginx-3.sockets;
}

server {
    listen 80;

    location / {
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_pass servercom;
        include fastcgi_params;
    }
}

This configures NGINX with a pool of 4 "upstream workers" which it will use in a round-robin fashion. Now, when I hammer my server with Boom at a concurrency of 200 for 1 minute, it's all good (i.e. no 502s at all).
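
If you want to reproduce the test but don't have Boom installed, a roughly equivalent run with ApacheBench (ab) - substituted here purely as an example - would be:

# ~200 concurrent clients for 60 seconds against the hello world app
# (URL is a placeholder - point it at your own server)
ab -c 200 -t 60 http://localhost/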

I hope you can somehow apply this to your code and make stuff work :)

P.S:

You can download my Hello World ServiceStack code that I used to test here.

And you can download my full NGINX.config here.

There are some paths that need to be adjusted, but it should serve as a good base.