
Concat MP3/media audio files on amazon S3 server


You can achieve what you want by breaking it into two steps:

Manipulating files on S3

Since S3 is remote file storage, you can't run code on the S3 server to do the operation there (as @Andrey mentioned). What your code needs to do is fetch each input file, process them locally, and upload the result back to S3. Check out the code examples from Amazon:

var AWS = require('aws-sdk');
var fs = require('fs');

var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'mp3-input1.mp3'};
var file = fs.createWriteStream('/path/to/input.mp3');
s3.getObject(params).createReadStream().pipe(file);

At this stage you'll run your concatenation code and upload the result back:

var fs = require('fs');
var zlib = require('zlib');

var body = fs.createReadStream('bigfile.mp3').pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body})
  .on('httpUploadProgress', function(evt) { console.log(evt); })
  .send(function(err, data) { console.log(err, data); });

Merging two (or more) mp3 files

Since MP3 files include a header that specifies information like the bitrate, simply concatenating them together might introduce playback issues. See: https://stackoverflow.com/a/5364985/1265980

What you want is a tool that handles that for you. One approach is to save your input MP3 files in a tmp folder and execute an external program such as ffmpeg to change the bitrate, concatenate the files and fix the header. Alternatively, you can use a library that lets you use ffmpeg from within node.js.

In their code example, you can see how to merge files together from the node API:

ffmpeg('/path/to/part1.avi')
  .input('/path/to/part2.avi')
  .input('/path/to/part3.avi')
  .on('error', function(err) {
    console.log('An error occurred: ' + err.message);
  })
  .on('end', function() {
    console.log('Merging finished !');
  })
  .mergeToFile('/path/to/merged.avi', '/path/to/tempDir');


Here's my quick take on the problem of downloading and processing S3 objects. My example focuses mostly on getting the data local and then processing it once it's all downloaded. For the merging itself, I suggest one of the ffmpeg approaches mentioned above.

var RSVP = require('rsvp');
var AWS = require('aws-sdk');
var fs = require('fs');

var s3 = new AWS.S3();
var bucket = '<your bucket name>';

var getFile = function(key, filePath) {
    return new RSVP.Promise(function(resolve, reject) {
        var file = fs.createWriteStream(filePath);
        file.on('error', reject);
        s3.getObject({
            Bucket: bucket,
            Key: key
        }).on('httpData', function(chunk) {
            file.write(chunk);
        }).on('httpDone', function() {
            file.end();
            resolve(filePath);
        }).on('error', function(err) {
            reject(err);
        }).send();     // the request isn't started until send() is called
    });
};

var tempFiles = ['<local temp filename 1>', '<local temp filename 2>'];
var keys = ['<s3 object key 1>', '<s3 object key 2>'];
var promises = [];

for (var i = 0; i < keys.length; ++i) {
    promises.push(getFile(keys[i], tempFiles[i]));
}

RSVP.all(promises).then(function(data) {
    // do something with your files
}).catch(function(error) {
    // handle errors
});