How should secret files be pushed to an EC2 Ruby on Rails application using amazon web services with their elastic beanstalk?
The "right" way to do what I think that you want to do is to use IAM Roles. You can see a blog post about it here: http://aws.typepad.com/aws/aws-iam/
Basically, IAM roles allow you to launch an EC2 instance without putting any personal credentials in any configuration file at all. When you launch the instance, it is assigned the given role (a set of permissions to use AWS resources), and rotating credentials are placed on the machine automatically by Amazon IAM.
In order for the `.ebextensions/*.config` files to be able to download the files from S3 directly, the files would have to be public. Given that they contain sensitive information, this is a Bad Idea.
You can launch an Elastic Beanstalk instance with an instance role, and you can give that role permission to access the files in question. Unfortunately, the `files:` and `sources:` sections of the `.ebextensions/*.config` files do not have direct access to use this role.
You should be able to write a simple script using the `AWS::S3::S3Object` class of the AWS SDK for Ruby to download the files, and use a `command:` instead of a `sources:`. If you don't specify credentials, the SDK will automatically try to use the role.
You would have to add a policy to your role which allows you to download the files you are interested in specifically. It would look like this:
{ "Statement": [ { "Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::mybucket/*" } ]}
Then you could do something like this in your `.config` file:
```yaml
files:
  /usr/local/bin/downloadScript.rb:
    source: http://s3.amazonaws.com/mybucket/downloadScript.rb
commands:
  01-download-config:
    command: ruby /usr/local/bin/downloadScript.rb http://s3.amazonaws.com/mybucket/config.tar.gz /tmp
  02-unzip-config:
    command: tar xvf /tmp/config.tar.gz
    cwd: /var/app/current
```
It is possible (and easy) to store sensitive files in S3 and copy them to your Beanstalk instances automatically.
When you create a Beanstalk application, an S3 bucket is automatically created. This bucket is used to store app versions, logs, metadata, etc.
The default `aws-elasticbeanstalk-ec2-role` that is assigned to your Beanstalk environment has read access to this bucket.
So all you need to do is put your sensitive files in that bucket (either at the root of the bucket or in any directory structure you desire), and create an `.ebextensions` config file to copy them over to your EC2 instances.
Here is an example:
```yaml
# .ebextensions/sensitive_files.config
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["elasticbeanstalk-us-east-1-XXX"] # Replace with your bucket name
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role" # This is the default role created for you when creating a new Beanstalk environment. Change it if you are using a custom role

files:
  /etc/pki/tls/certs/server.key: # This is where the file will be copied on the EC2 instances
    mode: "000400" # Apply restrictive permissions to the file
    owner: root # Or nodejs, or whatever suits your needs
    group: root # Or nodejs, or whatever suits your needs
    authentication: "S3Auth"
    source: https://s3-us-west-2.amazonaws.com/elasticbeanstalk-us-east-1-XXX/server.key # URL to the file in S3
```
This is documented here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-storingprivatekeys.html