
Jenkins Continuous Integration with Amazon S3 - Everything is uploading to the root?


It doesn't look like this is possible with the S3 plugin. Instead, I'm using s3cmd to do this. You must first install it on your server; then, in one of the shell build steps within a Jenkins job, you can run:

s3cmd sync -r -P $WORKSPACE/ s3://YOUR_BUCKET_NAME

That copies all of your files to the S3 bucket while maintaining the folder structure. The -P flag keeps read permissions for everyone (needed if you're using your bucket as a web server). The sync feature makes this a great solution: it compares all your local files against the S3 bucket and only copies files that have changed (by comparing file sizes and checksums).
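If you only want to publish part of the workspace, or keep each build under its own prefix, you can point sync at a subdirectory and a prefixed destination. A minimal sketch (the build/ directory and builds/ prefix are assumptions for illustration; BUILD_NUMBER is a standard Jenkins environment variable):

# Sync only the build output into a per-build "folder" (prefix) in the bucket.
# "build/" and "builds/" are hypothetical; substitute your own paths.
s3cmd sync -r -P "$WORKSPACE/build/" "s3://YOUR_BUCKET_NAME/builds/$BUILD_NUMBER/"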


I have never worked with the S3 plugin for Jenkins (though now that I know it exists, I might give it a try), but looking at the code, it seems you can only do what you want with a workaround.

Here's what the actual plugin code does (taken from GitHub); I removed the parts of the code that are not relevant, for the sake of readability:

class hudson.plugins.s3.S3Profile, method upload:

final Destination dest = new Destination(bucketName, filePath.getName());
getClient().putObject(dest.bucketName, dest.objectName, filePath.read(), metadata);

Now, if you take a look at the JavaDoc of hudson.FilePath.getName():

Gets just the file name portion without directories.

Now, take a look at the constructor of hudson.plugins.s3.Destination:

public Destination(final String userBucketName, final String fileName) {
    if (userBucketName == null || fileName == null)
        throw new IllegalArgumentException("Not defined for null parameters: " + userBucketName + "," + fileName);
    final String[] bucketNameArray = userBucketName.split("/", 2);
    bucketName = bucketNameArray[0];
    if (bucketNameArray.length > 1) {
        objectName = bucketNameArray[1] + "/" + fileName;
    } else {
        objectName = fileName;
    }
}

The Destination class JavaDoc says:

The convention implemented here is that a / in a bucket name is used to construct a structure in the object name. That is, a put of file.txt to bucket name of "mybucket/v1" will cause the object "v1/file.txt" to be created in the mybucket.

Conclusion: the filePath.getName() call strips off any directory prefix you add to the file (S3 does not have directories, but rather key prefixes; see this and this thread for more info). If you really need to put your files into a "folder" (i.e. give them a prefix that contains a slash (/)), I suggest adding this prefix to the end of your bucket name, as described in the Destination class JavaDoc.
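To illustrate the convention, here is a minimal, runnable sketch of the splitting logic (a simplified standalone copy of the constructor above, not the plugin's actual class):

public class DestinationDemo {
    public static void main(String[] args) {
        // Mirrors the Destination constructor: split the bucket name on the first "/" only.
        String userBucketName = "mybucket/v1"; // bucket name with a "folder" suffix
        String fileName = "file.txt";          // what filePath.getName() would return

        String[] parts = userBucketName.split("/", 2);
        String bucketName = parts[0];
        String objectName = (parts.length > 1) ? parts[1] + "/" + fileName : fileName;

        // Prints: bucket=mybucket, object=v1/file.txt
        System.out.println("bucket=" + bucketName + ", object=" + objectName);
    }
}

So a put of file.txt with the bucket name "mybucket/v1" ends up as the object v1/file.txt in mybucket, which is exactly the workaround.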


Yes, this is possible.

It looks like you'll need a separate instance of the S3 plugin for each folder destination, however.

"Source" is the file you're uploading.

"Destination bucket" is where you place your path.