
Scaling Nginx, PHP-FPM and MongoDB


It seems all it took was a little bit of calculation. Since I have 8 cores available, I can run more nginx worker processes:

nginx.conf

worker_processes 4;

events {
    worker_connections 1024;
}
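
As a point of reference, one worker per core is the usual nginx guideline, so an 8-core box could go further than the config above. A possible variant (the extra directives beyond the two above are illustrative, not what I actually ran):

worker_processes 8;          # one worker per core; newer nginx versions also accept "auto"
worker_rlimit_nofile 8192;   # raise the per-worker file descriptor limit for high concurrency

events {
    worker_connections 1024; # per worker; total capacity is roughly workers * connections
    multi_accept on;         # accept all pending connections at once under load
}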

And 16 GB of RAM gives some legroom for a static number of PHP-FPM workers.

php-fpm.conf

pm = static
pm.max_children = 4096
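
A quick sanity check on the sizing: 16 GB spread over 4096 static children leaves roughly 4 MB per worker, so per-request memory has to stay small. A sketch with a couple of related knobs (the values beyond the two above are illustrative, not what I actually ran):

pm = static
pm.max_children = 4096   ; 16 GB / 4096 children is roughly 4 MB each before swapping
pm.max_requests = 500    ; recycle workers periodically to contain slow memory leaks
listen.backlog = 65535   ; let the kernel queue connection bursts instead of refusing them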

The Nginx fastcgi settings stayed the same. I probably have a bit more tweaking to do, since as the settings changed the acceptable concurrency stayed the same while the server load went down, but this seems to do the trick and is at least a starting point.
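
I haven't reproduced my fastcgi block here, but a typical nginx/PHP-FPM pairing looks roughly like this (the address, buffer sizes, and timeout are illustrative assumptions, not my actual values):

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;   # or unix:/var/run/php-fpm.sock to skip TCP overhead
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_buffers 256 16k;       # keep typical responses buffered in memory, off disk
    fastcgi_read_timeout 120;      # tolerate slow PHP responses under heavy load
}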

A single server seems to handle about 2,000 concurrent connections before the load gets pretty high. ApacheBench starts getting errors around 500 concurrent connections, so testing with ab should be done from multiple servers.
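
In practice that means running ab concurrently from several machines, each kept under its ~500-concurrency ceiling; something like this per machine (the hostname and counts are placeholders):

ab -k -n 100000 -c 400 http://app.example.com/

Here -k reuses connections with keep-alive, -n is the total request count, and -c is the number of concurrent clients.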

As David said, ideally this would be written in something that could scale more easily, but given the time frame that just isn't feasible at this point.

I hope this helps others.


MongoDB is not the bottleneck here. If you need 1,200+ concurrent connections, PHP-FPM (and PHP in general) may not be the tool for the job. Actually, scratch that. It's NOT the right tool for the job. Many benchmarks assert that after 200-500 concurrent connections, nginx/PHP-FPM starts to falter (see here).

I was in a similar situation last year, and instead of trying to scale the unscalable, I rewrote the application in Java using Kilim (a project I've also contributed to). Another great choice is writing it in Erlang (which is what Facebook uses). I strongly suggest you re-evaluate your choice of language here and refactor before it's too late.

Suppose you get PHP-FPM working "okay" with 1,200, maybe even 1,500, concurrent connections. What about 2,000? 5,000? 10,000? Absolutely, unequivocally, indubitably impossible.