Issues with Docker Swarm running TeamCity using rexray/ebs for drive persistence in AWS EBS


I found a way to get logs for my service. First, list the services the stack creates:

$ sudo docker service ls 

Then view the logs for the service in question:

$ sudo docker service logs --details {service name}

Now I just need to wade through the logs and see what went wrong...
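To avoid scrolling through everything, the output can be piped through grep - the service name here is my infra_teamcity service, so substitute whatever docker service ls reported:

$ sudo docker service logs --details infra_teamcity 2>&1 | grep -i error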


Update

I found the following error in the logs:

infra_teamcity.1.bhiwz74gnuio@ip-172-31-18-103    |  [2018-05-14 17:38:56,849]  ERROR - r.configs.dsl.DslPluginManager - DSL plugin compilation failed
infra_teamcity.1.bhiwz74gnuio@ip-172-31-18-103    |  exit code: 1
infra_teamcity.1.bhiwz74gnuio@ip-172-31-18-103    |  stdout: #
infra_teamcity.1.bhiwz74gnuio@ip-172-31-18-103    |  # There is insufficient memory for the Java Runtime Environment to continue.
infra_teamcity.1.bhiwz74gnuio@ip-172-31-18-103    |  # Native memory allocation (mmap) failed to map 42012672 bytes for committing reserved memory.
infra_teamcity.1.bhiwz74gnuio@ip-172-31-18-103    |  # An error report file with more information is saved as:
infra_teamcity.1.bhiwz74gnuio@ip-172-31-18-103    |  # /opt/teamcity/bin/hs_err_pid125.log
infra_teamcity.1.bhiwz74gnuio@ip-172-31-18-103    |
infra_teamcity.1.bhiwz74gnuio@ip-172-31-18-103    |  stderr: Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000e2dfe000, 42012672, 0) failed; error='Cannot allocate memory' (errno=12)

This makes me think it's a memory problem. I'm going to try again with a larger AWS instance and see how I get on.
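Before paying for a bigger instance, a quick sanity check on the current host confirms whether memory really is the bottleneck:

$ free -h                         # total / used / available memory on the host
$ sudo docker stats --no-stream   # one-shot snapshot of per-container memory usage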


Update 2

Using a larger AWS instance solved the issue. :)
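To stop the scheduler placing the service on an undersized node again, a memory reservation can be declared in the stack file. A minimal sketch, assuming a v3 compose file - the service name, image, and sizes below are illustrative, not my actual stack:

version: "3.3"
services:
  teamcity:
    image: jetbrains/teamcity-server
    deploy:
      resources:
        reservations:
          memory: 2G   # only schedule the task on nodes with at least 2G free
        limits:
          memory: 3G   # hard cap; the task is killed if it exceeds this

The reservation only influences placement; the limit is enforced at runtime.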

I then discovered that rexray/ebs doesn't cope well when a container moves between hosts in the swarm - it creates a duplicate EBS volume so that each machine ends up with its own copy. My solution was to use an EFS drive in AWS instead and mount it on every possible host, then update each host's fstab file so that the drive is remounted on every reboot. Job done. Now to look into using a reverse proxy...
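For reference, the fstab entry looked roughly like this - the file system ID, region, and mount point are placeholders for your own values:

fs-0123456789abcdef0.efs.eu-west-1.amazonaws.com:/ /mnt/teamcity-data nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0

The _netdev option matters here: it tells the host to wait for the network to come up before attempting the mount at boot.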