
Catching disconnects with Apache Load Balancer and node/socket.io backend


So after some trial and error, I was able to get a config that works fine. These are the changes required.

Base Path on Server

You need to set the same base path on the server as well to make this work smoothly:

var io = require('socket.io')(server, { path: '/test/socket.io'});
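
For completeness, the client has to be given the same path, otherwise its requests never reach socket.io at all. A minimal sketch of what I mean (the hostname and port are placeholders, adjust to your setup):

// Client-side counterpart (sketch): the path option must match the server's.
var socket = io('http://example.com:8090', { path: '/test/socket.io' });

socket.on('connect', function () {
  console.log('connected as', socket.id);
});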

And this is the updated Apache config I used:

<VirtualHost *:8090>
    # Admin email, Server Name (domain name), and any aliases
    ServerAdmin webmaster@example.com
    ProxyRequests off
    #Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
    Header add Set-Cookie "SERVERID=sticky.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

    <Proxy "balancer://mycluster">
        BalancerMember "http://127.0.0.1:3001" route=1 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        BalancerMember "http://127.0.0.1:3000" route=2 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        ProxySet stickysession=SERVERID
    </Proxy>

    <Proxy "balancer://myws">
        BalancerMember "ws://127.0.0.1:3001" route=1 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        BalancerMember "ws://127.0.0.1:3000" route=2 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        ProxySet stickysession=SERVERID
    </Proxy>

    RewriteEngine On
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule /(.*) balancer://myws/$1 [P,L]
    RewriteCond %{HTTP:Upgrade} !=websocket [NC]
    RewriteRule /(.*) balancer://mycluster/$1 [P,L]

    ProxyTimeout 3
</VirtualHost>

And now the disconnects are immediate.

(Screenshot: the disconnect being logged immediately)
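
For anyone reproducing this, a handler along these lines on the backend is enough to see it; this is just a sketch and the logging is only for illustration:

io.on('connection', function (socket) {
  console.log('client connected:', socket.id);

  socket.on('disconnect', function (reason) {
    // With the config above this fires as soon as the client goes away,
    // instead of after the long proxy/keepalive timeout.
    console.log('client disconnected:', socket.id, reason);
  });
});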


I'm not that familiar with Apache's mod_proxy, but I think your issue is related to your paths.

I set up a little test to see if I could help (and have a play). In my test I proxy both HTTP and WS traffic to a single backend, which is what you're doing, plus websockets.

Servers (LXD Containers):

  • 10.158.250.99 is the proxy.
  • 10.158.250.137 is the node.

First, enable the Apache mods on the proxy:

sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_wstunnel
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests

Then change 000-default.conf:

sudo nano /etc/apache2/sites-available/000-default.conf

This is what I used after clearing out the comments:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    <Proxy balancer://mycluster>
        BalancerMember http://10.158.250.137:7779
    </Proxy>

    ProxyPreserveHost On

    # web proxy - forwards to mycluster nodes
    ProxyPass /test/ balancer://mycluster/
    ProxyPassReverse /test/ balancer://mycluster/

    # ws proxy - forwards to web socket server
    ProxyPass /ws/  "ws://10.158.250.137:7778"
</VirtualHost>

What the above config is doing:

  • Visit the proxy at http://10.158.250.99 and it will show the default Apache page.
  • Visit the proxy at http://10.158.250.99/test/ and it will forward the HTTP request to http://10.158.250.137:7779.
  • Visit the proxy at http://10.158.250.99/ws and it will make a websocket connection to ws://10.158.250.137:7778 and tunnel it through.
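
If you don't have a backend handy to point the proxy at, a throwaway node stand-in like the sketch below will do; this is my own sketch, not what the test above actually ran, and it assumes the ws package is installed:

// Minimal stand-in backend: plain HTTP on 7779, websocket echo on 7778.
var http = require('http');
var WebSocket = require('ws'); // npm install ws

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello from the backend\n');
}).listen(7779);

var wss = new WebSocket.Server({ port: 7778 });
wss.on('connection', function (ws) {
  ws.on('message', function (msg) {
    ws.send('echo: ' + msg); // bounce back whatever the proxy tunnels through
  });
  ws.on('close', function () {
    console.log('tunnelled websocket closed');
  });
});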

So for my app I'm using phptty, as it uses both HTTP and WS; it has an xterm.js frontend which connects to the websocket at http://10.158.250.99/ws to give a tty in the browser.
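
If you want to sanity-check the tunnel without phptty, a plain websocket opened from the browser console through the proxy's ws path works too (sketch, paired with the echo stand-in above):

// Browser-side check (sketch): connect through the proxy's /ws/ path and log the reply.
var ws = new WebSocket('ws://10.158.250.99/ws/');
ws.onopen = function () { ws.send('hello'); };
ws.onmessage = function (event) { console.log('from backend:', event.data); };
ws.onclose = function () { console.log('websocket closed'); };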

Here is a screenshot of it all working, using my LXDui electron app to control it all.


So check your settings against what I have tried and see if anything is different; it's always good to experiment a bit to see how things work before trying to apply them to your own idea.

Hope it helps.


I think the delay in detecting that the client has closed the page comes from the default kernel TCP keepalive configuration of your Apache proxy node. If you check the value of net.ipv4.tcp_keepalive_time on that system, you may find 60, which would be the 60 seconds waited before the first keepalive packet is sent to detect whether the client has closed the connection.

From your problem details, mod_proxy looks to have an issue: it seems not to forward the RST packet that you handle correctly without the mod_proxy module.

Without solving that RST-forwarding issue in mod_proxy, you may only be able to reduce the delay by decreasing tcp_keepalive_time, for example to 5, so that the kernel waits only 5 seconds before it starts checking whether the TCP connection is closed. Also check the number of failed keepalive probes allowed before the connection is declared closed, since that also affects the total delay; this is the tcp_keepalive_probes parameter.
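
If changing kernel sysctls on the proxy box is not an option, one thing you could also try (my suggestion, not something tested in this answer) is tightening socket.io's own heartbeat, since its ping/pong travels end to end through the proxy and will report a vanished client after roughly pingInterval + pingTimeout:

// Sketch: shorten socket.io's heartbeat so a silently gone client is reported
// as disconnected (reason "ping timeout") without relying on kernel keepalive.
var io = require('socket.io')(server, {
  path: '/test/socket.io',
  pingInterval: 10000, // send a ping every 10 seconds
  pingTimeout: 5000    // give up if no pong arrives within 5 seconds
});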