
How can I remotely connect to docker swarm?


The answer to the question can be found here.

On an Ubuntu machine, define a daemon.json file at /etc/docker with the following content:

{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
}

The above configuration is unsecured and shouldn't be used if the server is publicly reachable.
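Note that on Ubuntu the packaged systemd unit starts dockerd with -H fd://, which conflicts with a hosts entry in daemon.json and prevents the daemon from starting. A common workaround (a sketch; the drop-in path follows the usual systemd convention) is an override that clears the ExecStart so the hosts from daemon.json take effect:

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
```

Then run sudo systemctl daemon-reload and sudo systemctl restart docker to apply the change.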

For a secured connection, use the following config:

{
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://x.x.x.y:2376", "unix:///var/run/docker.sock"]
}

Details on generating the certificates can be found here, as mentioned by @BMitch.
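As a rough sketch of what that certificate generation involves (key size, validity, and the CNs here are assumptions; modern clients also expect a subjectAltName for the server IP, so follow the linked docs for production use):

```shell
# Create a self-signed CA, then a server key and CA-signed server cert.
# "x.x.x.y" stands for the daemon's IP, matching the config above.
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -subj "/CN=docker-ca" -out ca.pem
openssl genrsa -out serverkey.pem 2048
openssl req -new -key serverkey.pem -subj "/CN=x.x.x.y" -out server.csr
openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server.pem
```

The resulting server.pem and serverkey.pem are what the tlscert and tlskey entries in daemon.json point at; clients then connect with --tlsverify and a client cert signed by the same CA.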


One option is to provide direct access to the docker daemon, as suggested in the previous answers, but that requires setting up TLS certificates and keys, which can itself be tricky and time-consuming. Docker Machine can automate that process when it was used to create the nodes.

I had the same problem, in that I wanted to create secrets on the swarm without uploading the file containing the secret to the swarm manager. Also, I wanted to be able to run the deploy stackfile (e.g. docker-compose.yml) without the hassle of first uploading the stackfile.

I wanted to be able to create the few servers I needed on e.g. DigitalOcean, not necessarily using docker machine, and be able to reproducibly create the secrets and run the stackfile. In environments like DigitalOcean and AWS, a separate set of TLS certificates is not used, but rather the ssh key on the local machine is used to access the remote node over ssh.

The solution that worked for me was to run the docker commands using individual commands over ssh, which allows me to pipe the secret and/or stackfile using stdin.

To do this, first create the DigitalOcean droplets and get docker installed on them, possibly from a custom image or snapshot, or simply by running the commands to install docker on each droplet. Then join the droplets into a swarm: ssh into the one that will be the manager node, type docker swarm init (possibly with the --advertise-addr option if there is more than one IP on that node, such as when you want to keep intra-swarm traffic on the private network), and get back the join command for the swarm. Finally, ssh into each of the other nodes and issue the join command, and your swarm is created.

Then, export the ssh command you will need to issue commands on the manager node, like

export SSH_CMD='ssh root@159.89.98.121'

Now, you have a couple of options. You can issue individual docker commands like:

$SSH_CMD docker service ls

You can create a secret on your swarm without copying the secret file to the swarm manager:

$SSH_CMD docker secret create my-secret - < /path/to/local/file
$SSH_CMD docker service create --name x --secret my-secret image

(Using - to indicate that docker secret create should accept the secret on stdin, and then piping the file to stdin through ssh.)
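As a purely local illustration of that stdin-forwarding pattern (using cat as a stand-in for the remote "ssh ... docker secret create my-secret" invocation, an assumption for demonstration only):

```shell
# RUN_CMD stands in for "ssh root@host docker secret create my-secret".
# 'cat -' reads stdin exactly as 'docker secret create NAME -' does,
# and the shell redirection feeds the local file into that stdin.
RUN_CMD='cat'
printf 'hello secret\n' > /tmp/demo-secret
$RUN_CMD - < /tmp/demo-secret
```

With the real SSH_CMD, ssh forwards the local stdin across the connection, so the secret file never has to exist on the manager node's filesystem.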

You can also create a script to be able to reproducibly run commands to create your secrets and bring up your stack with secret files and stackfiles only on your local machine. Such a script might be:

$SSH_CMD docker secret create rabbitmq.config.01 - < rabbitmq/rabbitmq.config
$SSH_CMD docker secret create enabled_plugins.01 - < rabbitmq/enabled_plugins
$SSH_CMD docker secret create rmq_cacert.pem.01 - < rabbitmq/cacert.pem
$SSH_CMD docker secret create rmq_cert.pem.01 - < rabbitmq/cert.pem
$SSH_CMD docker secret create rmq_key.pem.01 - < rabbitmq/key.pem
$SSH_CMD docker stack up -c - rabbitmq_stack < rabbitmq.yml

where secrets are used for the certs and keys, and also for the configuration files rabbitmq.config and enabled_plugins, and the stackfile is rabbitmq.yml, which could be:

version: '3.1'
services:
  rabbitmq:
    image: rabbitmq
    secrets:
      - source: rabbitmq.config.01
        target: /etc/rabbitmq/rabbitmq.config
      - source: enabled_plugins.01
        target: /etc/rabbitmq/enabled_plugins
      - source: rmq_cacert.pem.01
        target: /run/secrets/rmq_cacert.pem
      - source: rmq_cert.pem.01
        target: /run/secrets/rmq_cert.pem
      - source: rmq_key.pem.01
        target: /run/secrets/rmq_key.pem
    ports:
      # stomp, ssl:
      - 61614:61614
      # amqp, ssl:
      - 5671:5671
      # monitoring, ssl:
      - 15671:15671
      # monitoring, non ssl:
      - 15672:15672
  # nginx here is only to show another service in the stackfile
  nginx:
    image: nginx
    ports:
      - 80:80
secrets:
  rabbitmq.config.01:
    external: true
  rmq_cacert.pem.01:
    external: true
  rmq_cert.pem.01:
    external: true
  rmq_key.pem.01:
    external: true
  enabled_plugins.01:
    external: true

(Here, the rabbitmq.config file sets up the SSL listening ports for stomp, amqp, and the monitoring interface, and tells rabbitmq to look for the certs and key within /run/secrets. Another alternative for this specific image would be to use the environment variables provided by the image to point to the secret files, but I wanted a more generic solution that did not require configuration within the image.)

Now, if you want to bring up another swarm, your script will work with that swarm once you have set the SSH_CMD environment variable, and you need neither set up TLS nor copy your secret or stackfiles to the swarm filesystem.

So, this doesn't solve the problem of creating the swarm (whose existence was presupposed by your question), but once it is created, using an environment variable (exported if you want to use it in scripts) will allow you to use almost exactly the commands you listed, prefixed with that environment variable.


This is the easiest way of running commands on a remote docker engine:

docker context create --docker host=ssh://myuser@myremote myremote
docker --context myremote ps -a
docker --context myremote secret create my-secret <path to local file>
docker --context myremote service create --name x --secret my-secret image

or

docker --host ssh://myuser@myremote ps -a

You need to use key-based authentication for this to work (you should already be using it). Other options include setting up a TLS-certificate-protected socket, or SSH tunnels.