
docker swarm mode multiple services same port


You can look into Docker Flow Proxy to use as an easy-to-configure reverse proxy.

BUT, I believe, as other commentators have pointed out, the Docker 1.12 swarm mode has a fundamental problem with multiple services exposing the same port (like 80 or 8080). It boils down (I THINK) to the mesh-routing magic - which is a layer 4 thing, meaning basically TCP/IP - in other words, IP address + port. So things get messy when multiple services are listening on (for example) port 8080. The mesh router will happily deliver traffic going to port 8080 to any service that exposes the same port.

You CAN isolate things from each other using overlay networking in swarm mode, BUT the problem comes in when you have to connect services to the proxy (overlay network) - at that point things seem to get mixed up (and this is where I am now having difficulties).

The solution I have at this point is to let the services that need to be exposed to the net use ports that are unique as far as the proxy-facing (overlay) network is concerned (they do NOT have to be published to the swarm!), and then actually use something like the Docker Flow Proxy to handle incoming traffic on the desired port.

Quick sample to get you started:

    docker network create --driver overlay proxy
    docker network create --driver overlay my-app

    # App1 exposes port 8081
    docker service create --network proxy --network my-app --name app1 myApp1DockerImage

    docker service create --name proxy \
        -p 80:80 \
        -p 443:443 \
        -p 8080:8080 \
        --network proxy \
        -e MODE=swarm \
        vfarcic/docker-flow-proxy

    # App2 exposes port 8080
    docker service create --network proxy --network my-app --name app2 myApp2DockerImage

You then configure the reverse proxy as per its documentation.
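To give an idea of what that configuration step looks like: Docker Flow Proxy exposes an HTTP `reconfigure` endpoint on its config port (8080 in the sample above). A hedged sketch - the hostnames, domains, and port values here are placeholders for illustration, so check the project's documentation for the exact parameters your version supports:

```shell
# Tell the proxy to route traffic for app1.example.com (placeholder domain)
# to the "app1" service on its internal port 8081.
# "proxy-host" is a placeholder for a swarm node's address.
curl "http://proxy-host:8080/v1/docker-flow-proxy/reconfigure?serviceName=app1&serviceDomain=app1.example.com&port=8081"
```

After this call, requests arriving on port 80 with a `Host: app1.example.com` header get forwarded to `app1` over the shared `proxy` overlay network.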

NOTE: I see now there is new AUTO configuration available - I have not yet tried this.

End result if everything worked:

  • proxy listening on ports 80, 443 (and 8080 for its config calls, so keep that OFF the public net!)
  • proxy forwards to the appropriate service, based either on service domain or service path (I had issues with service path)
  • services can communicate internally over an isolated overlay network.
  • services do not publish ports unnecessarily to the swarm

[EDIT 2016/10/20]

Ignore all the stuff above about issues with the same exposed port on the same overlay network attached to the proxy.

I tore down my whole setup, and started again - everything is working as expected now: I can access multiple (different) services on port 80, using different domains, via the Docker Flow Proxy.

I am also using the auto-configuration mentioned above - everything is working like a charm.


If you need to expose both the API and the Web interface to the public, you have two options. Either use a different port for each service:

http://my-site.com       # Web interface
http://my-site.com:8080  # API

or use a proxy that listens on port 80 and forwards requests to the correct service according to path:

http://my-site.com      # Web interface
http://my-site.com/api  # API


[Revisiting this after 4 years because it seems to still be getting votes and there's been a lot that's changed since the question was asked]

You can't have multiple services listening on the same port in swarm mode, or on Linux in general. However, you can run some kind of layer 7 proxy on the port that performs the routing to the correct container based on application-level data. The most common example of this is the various http reverse proxies that exist.

Specifically with Swarm Mode, traefik seems to be the most popular reverse proxy. However, there are also solutions based on HAProxy and Nginx.

With a reverse proxy, neither of your containers would publish a port in swarm mode. Instead you would configure the reverse proxy with its port published on something like 80 and 443. It would then forward requests to your containers over a shared docker network. For this to work, the proxy needs to be able to tell which container each request belongs to based on something in the http protocol, e.g. the hostname, path, cookies, etc in the request.
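As a concrete sketch of this pattern with traefik (v1.x flags; the `example.com` domain and image names are placeholders, so adapt them to your setup):

```shell
# Shared network the proxy and the app services attach to.
docker network create --driver overlay traefik-net

# The proxy is the ONLY service publishing ports 80/443.
# It watches the swarm via the docker socket on a manager node.
docker service create --name traefik \
    -p 80:80 \
    --network traefik-net \
    --constraint node.role==manager \
    --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
    traefik:1.7 \
    --docker --docker.swarmMode --docker.watch

# App services publish nothing; traefik routes to them by Host header,
# using service labels to learn the rule and the container port.
docker service create --name web \
    --network traefik-net \
    --label traefik.port=80 \
    --label "traefik.frontend.rule=Host:www.example.com" \
    nginx

docker service create --name api \
    --network traefik-net \
    --label traefik.port=8080 \
    --label "traefik.frontend.rule=Host:api.example.com" \
    myapi
```

Requests to `www.example.com` and `api.example.com` both arrive on port 80 at the proxy, which routes them to the matching service over `traefik-net`.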


[Original answer]

Use different ports if they need to be publicly exposed:

docker service create -p 80:80 --name web nginx

and then

docker service create -p 8080:80 --name api myapi

In the second example, public port 8080 maps to container port 80. Of course, if they don't need to be publicly exposed, you can reach the services from other containers on the same network using the service name and container port.

curl http://api:80

would find the service named api and connect to port 80, using the DNS-based discovery for containers on the same network.
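Putting that together, a minimal sketch of service-to-service access over a shared overlay network (image name `myapi` and the exec'd container are placeholders for illustration):

```shell
# Overlay network shared by both services; nothing is published to the host.
docker network create --driver overlay backend

docker service create --name api --network backend myapi
docker service create --name web --network backend nginx

# From inside any container attached to "backend", the api service
# resolves by name via Docker's embedded DNS. <web-container-id> is a
# placeholder - find it with: docker ps --filter name=web
docker exec <web-container-id> curl http://api:80
```

Because neither service uses `-p`, nothing here is reachable from outside the swarm; only members of the `backend` network can talk to each other.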