HTTPS on Elastic Beanstalk (Docker Multi-container)


This is more of an idea (I haven't actually done this, so I'm not sure it would work), but all the components appear to be available to create an ALB that directs traffic to one process or another based on path rules.

Here is what I am thinking could be done via .ebextensions config files, based on the options available at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html (a sketch of such a config follows the list):

  1. Use aws:elasticbeanstalk:environment:process:default to make sure the default application port and health check are set the way you intend (let's say port 80 is your default in this case).
  2. Use aws:elasticbeanstalk:environment:process:process_name to create a backend process that goes to your second service (port 4000 in this case).
  3. Create a rule for your backend with aws:elbv2:listenerrule:backend which would use something like /backend/* as the path.
  4. Create the SSL listener with aws:elbv2:listener:443 (example at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-applicationloadbalancer.html) that uses this new backend rule.
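
Untested, but something along these lines in an .ebextensions file (say .ebextensions/alb.config, a hypothetical name) is what I have in mind. The backend health check path and the certificate ARN are placeholders you would replace with your own values:

    option_settings:
      aws:elasticbeanstalk:environment:
        LoadBalancerType: application
      # Default process: the port 80 container, with its health check.
      aws:elasticbeanstalk:environment:process:default:
        Port: '80'
        Protocol: HTTP
        HealthCheckPath: /
      # Second process pointing at the service on port 4000.
      aws:elasticbeanstalk:environment:process:backend:
        Port: '4000'
        Protocol: HTTP
        HealthCheckPath: /backend/    # placeholder; use whatever path your backend answers on
      # Listener rule that routes /backend/* to the backend process.
      aws:elbv2:listenerrule:backend:
        PathPatterns: /backend/*
        Process: backend
        Priority: 1
      # HTTPS listener that serves the default process and applies the backend rule.
      aws:elbv2:listener:443:
        ListenerEnabled: 'true'
        Protocol: HTTPS
        SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/example    # placeholder ARN
        DefaultProcess: default
        Rules: backend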

I am not sure whether additional rules need to be created for the default listener, aws:elbv2:listener:default. It seems like the default might simply match /*, so in this case anything sent to /backend/* would go to the port 4000 container and everything else would go to the port 3000 container.


You will definitely need an nginx container, for the simple reason that a multi-container Elastic Beanstalk setup does not provide one by default. The reason you see single-container setups on Elastic Beanstalk working with those .ebextensions configs is that for that type of setup Elastic Beanstalk does provide nginx.

The benefit of having your own nginx container is that you won't need a frontend container (assuming you are serving static files): you can write your nginx config so that nginx serves the static files directly.
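
As a rough sketch (the backend port 4000 and the /backend/ path are assumptions, not taken from my setup), the nginx.conf for the proxy container could look something like this, matching the mount points used in the Dockerrun file below:

    user  nginx;
    worker_processes  1;

    events {
      worker_connections  1024;
    }

    http {
      include       /etc/nginx/mime.types;
      default_type  application/octet-stream;

      server {
        listen 80;

        # Static frontend files, mounted from the "dist" volume.
        location / {
          root /var/www/app/frontend/dist;
          try_files $uri $uri/ /index.html;
        }

        # Requests under /backend/ are proxied to the linked backend container.
        # The port (4000) is an assumption; use whatever your backend listens on.
        location /backend/ {
          proxy_pass http://backend:4000;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
      }
    }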

Here is my Dockerrun.aws.json file:

{  "AWSEBDockerrunVersion": 2,  "volumes": [      {          "name": "dist",          "host": {              "sourcePath": "/var/app/current/frontend/dist"          }      },      {        "name": "nginx-proxy-conf",        "host": {            "sourcePath": "/var/app/current/compose/production/nginx/nginx.conf"        }      }  ],  "containerDefinitions": [    {      "name": "backend",      "image": "abc/xyz",      "essential": true,      "memory": 256,    },    {      "name": "nginx-proxy",      "image": "nginx:latest",      "essential": true,      "memory": 128,      "portMappings": [        {          "hostPort": 80,          "containerPort": 80        }      ],      "depends_on": ["backend"],      "links": [        "backend"      ],      "mountPoints": [        {          "sourceVolume": "dist",          "containerPath": "/var/www/app/frontend/dist",          "readOnly": true        },        {          "sourceVolume": "awseb-logs-nginx-proxy",          "containerPath": "/var/log/nginx"        },        {          "sourceVolume": "nginx-proxy-conf",          "containerPath": "/etc/nginx/nginx.conf",          "readOnly": true        }      ]    }  ]}

I also highly recommend using AWS services for setting up your SSL: Route 53 and Certificate Manager. They play nicely together, and if I understand correctly, this lets you terminate SSL at the load balancer level.
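
For example (assuming your domain is hosted in Route 53; example.com is a placeholder), you can request a certificate with DNS validation and then reference its ARN in the aws:elbv2:listener:443 settings shown earlier:

    aws acm request-certificate \
        --domain-name example.com \
        --validation-method DNS

    # After adding the validation CNAME record in Route 53, the certificate ARN
    # (visible via "aws acm list-certificates") goes into SSLCertificateArns.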