Node.js + Express: app won't start listening on port 80


If you really want to do this, you can forward traffic arriving on port 80 to port 3000.

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000
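
For context, a minimal Express app bound to port 3000 (the target of the redirect above) might look like this - the route and response text are just placeholders:

// minimal sketch: an Express app listening on an unprivileged port (3000),
// reachable on port 80 via the iptables REDIRECT rule above
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello from port 3000');
});

// no root needed, since 3000 is above 1024
app.listen(3000, () => {
  console.log('Listening on port 3000');
});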


Are you starting your app as root? Ports below 1024 require root privileges, so maybe a sudo node app.js works?

BUT, you should NOT run any node.js app on port 80 with root privileges!!! NEVER!

My suggestion is to run nginx in front as a reverse proxy to your node.js app, which listens on e.g. port 3000.
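
A minimal nginx server block for that setup might look roughly like this (the domain and upstream port are placeholders for your own values):

server {
    listen 80;
    server_name example.com;

    location / {
        # forward everything to the node.js app on port 3000
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}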


Keep it Stupid Simple:

  • setcap
  • systemd
  • VPS

On a normal VPS (such as Digital Ocean, Linode, Vultr, or Scaleway), where the disk is persistent, use "setcap". This will allow a non-root user to bind to privileged ports.

sudo setcap 'cap_net_bind_service=+ep' $(which node)

TADA! Now you can run node ./server.js --port 80 as a normal user!
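
Assuming your server.js reads its port from a --port flag (that flag isn't built into node - it's something your own script has to handle), a minimal sketch might look like this:

// minimal sketch: read the port from a --port flag, default to 3000
const express = require('express');
const app = express();

const portIndex = process.argv.indexOf('--port');
const port = portIndex !== -1 ? Number(process.argv[portIndex + 1]) : 3000;

app.get('/', (req, res) => {
  res.send('OK');
});

// with setcap applied to the node binary, binding to 80 works for a normal user
app.listen(port, () => {
  console.log(`Listening on port ${port}`);
});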

Aside:

You can also use systemd to stop and start your service. Since systemd is sometimes a p.i.t.a., I wrote a wrapper script in Go that makes it really easy to deploy node projects:

# Install
curl https://webinstall.dev/serviceman | bash

# Use
cd ./my/node/project
sudo serviceman --username $(whoami) add npm start

or, if your server isn't called 'server.js' (the de facto standard), or you need extra options:

cd ./my/node/project
sudo serviceman --username $(whoami) add node ./my-server-thing.js -- --my-options

All that does is create your systemd file for you with sane defaults. I'd recommend checking out the systemd documentation as well, but it's a bit hard to grok, and there are probably more confusing (and otherwise bad) tutorials out there than simple, good ones.
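
To give a sense of what such a file contains, a hand-written unit for this kind of service might look roughly like the following (the paths, names, and user are placeholders, and serviceman's actual output may differ):

[Unit]
Description=my-node-app
After=network.target

[Service]
# run as an unprivileged user, not root
User=app
WorkingDirectory=/srv/my-node-app
ExecStart=/usr/local/bin/node ./server.js --port 3000
Restart=on-failure

[Install]
WantedBy=multi-user.target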

Ephemeral Instances (e.g. EC2) are not for long-running servers

Generally, when people use EC2, it's because they don't care about the uptime or reliability of any individual instance - they want a "scalable" architecture, not a persistent one.

In most of these cases it isn't actually intended that the virtualized server persist in any sort of way. In these types of "ephemeral" (temporary) environments a "reboot" is intended to be about the same as reinstalling from scratch.

You don't "setup a server" but rather "deploy an image". The only reason you'd log into such a server is to prototype or debug the image you're creating.

The "disks" are volatile, the IP addresses are floating, the images behave the same on each and every boot. You're also not typically utilizing a concept of user accounts in the traditional sense.

Therefore: although it's true that, in general, you shouldn't run a service as root, in the kinds of situations where you typically use volatile virtualization it doesn't matter that much. You have a single service and a single user account, and as soon as the instance fails or is otherwise "rebooted" (or you spin up a new instance of your image), you have a fresh system all over again (though any vulnerabilities baked into the image will, of course, persist).

Firewalls: Ephemeral vs VPS

Stuff like EC2 is generally intended to be private-only, not public-facing. These are "cloud service" systems. You're expected to use a dozen different services and auto-scale. As such, you'd use the load balancer service to forward ports to your EC2 group. Typically the default firewall for an instance will deny all public-network traffic. You have to go into the firewall management and make sure the ports you intend to use are actually open.
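
For example, with the AWS CLI you might open port 80 on an instance's security group like this (the group ID is a placeholder; your setup may instead manage this through the console or a load balancer):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0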

Sometimes VPS providers offer "enterprise" firewall configurators, but more typically you just get raw access to the virtual machine. Since only the ports you actually listen on are exposed to the outside world in the first place (by default there are typically no random services running), you may not get much additional benefit from a firewall. It's certainly a good idea, but not a requirement for what you need to do.

Don't use EC2 as a VPS

The use case you have above may be a much better candidate for a traditional VPS service (as mentioned above: Digital Ocean, Linode, Vultr, Scaleway, etc), which is far easier to use and involves much less management hassle to get started. All you need is a little bash CLI know-how.

And, as an extra bonus, you don't have to guess at what the cost will be. They tell you in simple $/month rather than ¢/cpu/hour/gb/internal-network/external-network/etc - so when something goes wrong you get a warning via email or in your admin console rather than an unexpected bill for $6,527.

Bottom line: If you choose to use EC2 and you're not a "DevOps" expert with an accountant on staff... you're gonna have a hard time.