Dockerfile copy files from amazon s3 or another source that needs credentials


In my opinion, IAM Roles are the best way to delegate S3 permissions to Docker containers.

  1. Create a role from IAM -> Roles -> Create Role -> choose the service that will use this role, select EC2 -> Next -> select the S3 policies, and the role should be created.

  2. Attach the role to a running/stopped instance from Actions -> Instance Settings -> Attach/Replace IAM Role.
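Step 2 can also be scripted. A minimal sketch using the AWS CLI, assuming the instance ID and the instance profile name shown here are placeholders for your own:

```shell
# Attach an IAM instance profile (which wraps the role) to a running
# instance; both values below are made-up placeholders.
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=my-s3-access-profile
```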

This worked successfully in Dockerfile:

RUN aws s3 cp s3://bucketname/favicons /var/www/html/favicons --recursive


Many people pass in the details through the args, which I see as fine and the way I would personally do it. You can over-engineer certain processes, and I think this is one of them.

Example docker with args

docker run -e AWS_ACCESS_KEY_ID=123 -e AWS_SECRET_ACCESS_KEY=1234
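For completeness, a sketch of what a full command might look like; the image name, region, and S3 paths below are placeholder assumptions, and the keys are dummies:

```shell
# Placeholder image name and region; the credentials here are dummies.
docker run \
  -e AWS_ACCESS_KEY_ID=123 \
  -e AWS_SECRET_ACCESS_KEY=1234 \
  -e AWS_DEFAULT_REGION=eu-west-1 \
  my-image \
  aws s3 cp s3://bucketname/favicons /var/www/html/favicons --recursive
```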

That said, I can see why some companies want to hide this away and fetch the details from a private API or similar. This is why AWS created IAM roles - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html.

Credentials can be retrieved from the instance metadata endpoint (a private IP address that only the instance itself can reach), meaning you would never have to store your credentials in your image itself.
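To illustrate that retrieval, here is a hedged sketch. The curl commands in the comments are how you would query the metadata service from inside an instance; the JSON response below is a made-up placeholder, not real credentials:

```shell
# Inside an instance with a role attached you would fetch temporary
# credentials from the metadata endpoint like this:
#   ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
#   RESPONSE=$(curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE")
# The response is JSON; the values here are made-up placeholders.
RESPONSE='{"AccessKeyId":"ASIAEXAMPLEKEY","SecretAccessKey":"exampleSecret","Token":"exampleToken"}'

# Extract the access key id from the JSON response.
ACCESS_KEY=$(echo "$RESPONSE" | sed -n 's/.*"AccessKeyId":"\([^"]*\)".*/\1/p')
echo "$ACCESS_KEY"
```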

Personally I think it's overkill for what you are trying to do: if someone hacks your image they can echo the credentials out of the environment and still get access to those details. Passing them in as args is safe as long as you protect yourself as you should anyway.


I wanted to build upon @Ankita Dhandha's answer.

In the case of Docker you are probably looking to use ECS.

  1. IAM Roles are absolutely the way to go.
  2. When running locally, use a locally tailored Dockerfile and mount your AWS CLI ~/.aws directory to the root user's ~/.aws directory in the container (this allows it to use your, or a custom IAM user's, CLI credentials to mimic ECS behavior for local testing).
# local system
from ubuntu:latest
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
docker run --mount type=bind,source="${HOME}/.aws",target=/root/.aws

Role Types

  1. EC2 Instance Roles define the global actions any instance can perform. An example would be having access to S3 to download ecs.config to /etc/ecs/ecs.config during your custom user-data.sh setup.
  2. Use the ECS Task Definition to define a Task Role and a Task Execution Role.
  3. Task Roles are used for a running container. An example would be a live web app that is moving files in and out of S3.
  4. Task Execution Roles are for deploying the task. An example would be downloading the ECR image and deploying it to ECS, downloading an environment file from S3 and exporting it to the Docker container.
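Both roles from points 2-4 are set on the task definition. A minimal sketch of registering one via the AWS CLI; the family name, role ARNs, and container definitions file are placeholder assumptions:

```shell
# Register a task definition that carries both a Task Role (used by the
# running container) and a Task Execution Role (used to deploy the task).
# All names and ARNs below are made-up placeholders.
aws ecs register-task-definition \
  --family my-web-app \
  --task-role-arn arn:aws:iam::123456789012:role/myTaskRole \
  --execution-role-arn arn:aws:iam::123456789012:role/myTaskExecutionRole \
  --container-definitions file://containers.json
```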

General Role Propagation

The C# SDK, for example, documents an ordered list of locations it checks to obtain credentials. Not every SDK behaves exactly like this, but many do, so research it for your situation.

reference: https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/creds-assign.html

  1. Plain text credentials fed into either the target system or environment variables.
  2. CLI AWS credentials and a profile set in the AWS_PROFILE environment variable.
  3. Task Execution Role used to deploy the docker task.
  4. The running task will use the Task Role.
  5. When the running task has no permissions for the current action it will attempt to elevate into the EC2 instance role.

Blocking EC2 instance role access

Because the EC2 instance role commonly needs access for custom system setup, such as configuring ECS, it is often desirable to block your tasks from accessing this role. This is done by blocking the tasks' access to the EC2 metadata endpoints, which are well-known endpoints in any AWS VPC.

reference: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-ec2-metadata/

AWS VPC Network Mode
# ecs.config
ECS_AWSVPC_BLOCK_IMDS=true
Bind Network Mode
# ec2-userdata.sh
# install dependencies
yum install -y aws-cli iptables-services
# setup ECS dependencies
aws s3 cp s3://my-bucket/ecs.config /etc/ecs/ecs.config
# setup IPTABLES
iptables --insert FORWARD 1 -i docker+ --destination 169.254.169.254/32 --jump DROP
iptables --append INPUT -i docker+ --destination 127.0.0.1/32 -p tcp --dport 51679 -j ACCEPT
service iptables save