How do I export UIImage array as a movie?

ios


Take a look at AVAssetWriter and the rest of the AVFoundation framework. The writer has an input of type AVAssetWriterInput, which in turn has a method called appendSampleBuffer: that lets you add individual frames to a video stream. Essentially you’ll have to:

1) Wire the writer:

NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
    [NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie
    error:&error];
NSParameterAssert(videoWriter);

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:640], AVVideoWidthKey,
    [NSNumber numberWithInt:480], AVVideoHeightKey,
    nil];
AVAssetWriterInput* writerInput = [[AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo
    outputSettings:videoSettings] retain]; // the retain should be removed if using ARC

NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];

2) Start a session:

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:…]; // use kCMTimeZero if unsure

3) Write some samples:

// Or you can use AVAssetWriterInputPixelBufferAdaptor.
// That lets you feed the writer input data from a CVPixelBuffer
// that’s quite easy to create from a CGImage.
[writerInput appendSampleBuffer:sampleBuffer];

4) Finish the session:

[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:…]; // optional; you can call finishWriting without specifying an end time
[videoWriter finishWriting]; // deprecated in iOS 6
/*
[videoWriter finishWritingWithCompletionHandler:...]; // iOS 6.0+
*/

You’ll still have to fill in a lot of blanks, but I think that the only really hard remaining part is getting a pixel buffer from a CGImage:

- (CVPixelBufferRef) newPixelBufferFromCGImage: (CGImageRef) image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
        [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
        nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
        frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
        &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
        frameSize.height, 8, 4*frameSize.width, rgbColorSpace,
        kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, frameTransform);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
        CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}

frameSize is a CGSize describing your target frame size and frameTransform is a CGAffineTransform that lets you transform the images when you draw them into frames.
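To sketch how those blanks might be filled in, here is a minimal version of the frame loop, written in Swift. It assumes a writer, input, and adaptor wired up as in step 1 and a session started as in step 2; writeFrames and makeBuffer are hypothetical names, and makeBuffer stands in for a pixel-buffer helper equivalent to the method above.

import AVFoundation
import UIKit

// Sketch only: writer/input/adaptor are assumed to be configured already,
// and makeBuffer wraps a CGImage-to-CVPixelBuffer conversion like the one above.
func writeFrames(_ images: [UIImage],
                 writer: AVAssetWriter,
                 input: AVAssetWriterInput,
                 adaptor: AVAssetWriterInputPixelBufferAdaptor,
                 makeBuffer: @escaping (CGImage) -> CVPixelBuffer) {
    let frameDuration = CMTime(value: 1, timescale: 30)   // 30 fps; pick your own rate
    var frameNum: Int32 = 0
    input.requestMediaDataWhenReady(on: DispatchQueue(label: "mediaInputQueue")) {
        while input.isReadyForMoreMediaData {
            guard frameNum < Int32(images.count) else {
                input.markAsFinished()          // no frames left: close the input...
                writer.finishWriting {          // ...and finish the file
                    print("writer status: \(writer.status.rawValue)")
                }
                return
            }
            if let cgImage = images[Int(frameNum)].cgImage {
                let time = CMTimeMultiply(frameDuration, multiplier: frameNum)
                let ok = adaptor.append(makeBuffer(cgImage), withPresentationTime: time)
                assert(ok, "append failed: \(String(describing: writer.error))")
            }
            frameNum += 1
        }
    }
}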


Here is the latest working code on iOS 8, in Objective-C.

We had to make a variety of tweaks to @Zoul's answer above to get it to work on the latest version of Xcode and iOS 8. Here is our complete working code that takes an array of UIImages, makes them into a .mov file, saves it to a temp directory, and then moves it to the camera roll. We assembled code from several different posts to get this working, and we have highlighted in the comments the traps we had to solve.

(1) Create a collection of UIImages

- (IBAction)saveMovieToLibrary
{
    // You just need the height and width of the video here
    // For us, our input and output video was 640 height x 480 width
    // which is what we get from the iOS front camera
    ATHSingleton *singleton = [ATHSingleton singletons];
    int height = singleton.screenHeight;
    int width = singleton.screenWidth;

    // You can save a .mov or a .mp4 file
    //NSString *fileNameOut = @"temp.mp4";
    NSString *fileNameOut = @"temp.mov";

    // We chose to save in the tmp/ directory on the device initially
    NSString *directoryOut = @"tmp/";
    NSString *outFile = [NSString stringWithFormat:@"%@%@", directoryOut, fileNameOut];
    NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:outFile];
    NSURL *videoTempURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), fileNameOut]];

    // WARNING: AVAssetWriter does not overwrite files for us, so remove the destination file if it already exists
    NSFileManager *fileManager = [NSFileManager defaultManager];
    [fileManager removeItemAtPath:[videoTempURL path] error:NULL];

    // Create your own array of UIImages
    NSMutableArray *images = [NSMutableArray array];
    for (int i = 0; i < singleton.numberOfScreenshots; i++)
    {
        // This was our routine that returned a UIImage. Just use your own.
        UIImage *image = [self uiimageFromCopyOfPixelBuffersUsingIndex:i];

        // We used a routine to write text onto every image so we could
        // validate that the images were actually being written when testing.
        image = [self writeToImage:image Text:[NSString stringWithFormat:@"%i", i]];
        [images addObject:image];
    }

    // If you just want to manually add a few images, here is code you can uncomment
    // NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/movie.mp4"]];
    // NSArray *images = [[NSArray alloc] initWithObjects:
    //                    [UIImage imageNamed:@"add_ar.png"],
    //                    [UIImage imageNamed:@"add_ja.png"],
    //                    [UIImage imageNamed:@"add_ru.png"],
    //                    [UIImage imageNamed:@"add_ru.png"],
    //                    [UIImage imageNamed:@"add_ar.png"],
    //                    [UIImage imageNamed:@"add_ja.png"],
    //                    [UIImage imageNamed:@"add_ru.png"],
    //                    [UIImage imageNamed:@"add_ar.png"],
    //                    [UIImage imageNamed:@"add_en.png"], nil];

    [self writeImageAsMovie:images toPath:path size:CGSizeMake(height, width)];
}

This is the main method that creates your AssetWriter and adds images to it for writing.

(2) Wire up an AVAssetWriter

-(void)writeImageAsMovie:(NSArray *)array toPath:(NSString*)path size:(CGSize)size
{
    NSError *error = nil;

    // FIRST, start up an AVAssetWriter instance to write your video
    // Give it a destination path (for us: tmp/temp.mov)
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                   nil];

    AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                         outputSettings:videoSettings];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                                                                     sourcePixelBufferAttributes:nil];

    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];

(3) Start a writing session (NOTE: the method continues from above)

    // Start a SESSION of writing.
    // After you start a session, you will keep adding image frames
    // until you are complete - then you will tell it you are done.
    [videoWriter startWriting];
    // This starts your video at time = 0
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    CVPixelBufferRef buffer = NULL;

    // This was just our utility class to get screen sizes etc.
    ATHSingleton *singleton = [ATHSingleton singletons];

    int i = 0;
    while (1)
    {
        // Check if the writer is ready for more data; if not, just wait
        if (writerInput.readyForMoreMediaData) {

            CMTime frameTime = CMTimeMake(150, 600);
            // CMTime = Value and Timescale.
            // Timescale = the number of tics per second you want
            // Value is the number of tics
            // For us - each frame we add will be 1/4th of a second
            // Apple recommends 600 tics per second for video because it is a
            // multiple of the standard video rates 24, 30, 60 fps etc.
            CMTime lastTime = CMTimeMake(i*150, 600);
            CMTime presentTime = CMTimeAdd(lastTime, frameTime);

            if (i == 0) { presentTime = CMTimeMake(0, 600); }
            // This ensures the first frame starts at 0.

            if (i >= [array count])
            {
                buffer = NULL;
            }
            else
            {
                // This command grabs the next UIImage and converts it to a CGImage
                buffer = [self pixelBufferFromCGImage:[[array objectAtIndex:i] CGImage]];
            }

            if (buffer)
            {
                // Give the CGImage to the AVAssetWriter to add to your video
                [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
                // The buffer comes back retained from pixelBufferFromCGImage, so release it to avoid leaking
                CVPixelBufferRelease(buffer);
                i++;
            }
            else
            {

(4) Finish the session (NOTE: the method continues from above)

                // Finish the session:
                // This is important to be done exactly in this order
                [writerInput markAsFinished];
                // WARNING: finishWriting in the solution above is deprecated.
                // You now need to give a completion handler.
                [videoWriter finishWritingWithCompletionHandler:^{
                    NSLog(@"Finished writing...checking completion status...");
                    if (videoWriter.status == AVAssetWriterStatusCompleted)
                    {
                        NSLog(@"Video writing succeeded.");

                        // Move video to camera roll
                        // NOTE: You cannot write directly to the camera roll.
                        // You must first write to an iOS directory, then move it!
                        NSURL *videoTempURL = [NSURL fileURLWithPath:path];
                        [self saveToCameraRoll:videoTempURL];
                    } else
                    {
                        NSLog(@"Video writing failed: %@", videoWriter.error);
                    }
                }]; // end videoWriter finishWriting block

                CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
                NSLog(@"Done");
                break;
            }
        }
    }
}

(5) Convert your UIImages to a CVPixelBufferRef
This method will give you a CVPixelBufferRef, which is what the AVAssetWriter needs. It is created from a CGImageRef, which you get from your UIImage (above).

- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
    // This again was just our utility class for the height & width of the
    // incoming video (640 height x 480 width)
    ATHSingleton *singleton = [ATHSingleton singletons];
    int height = singleton.screenHeight;
    int width = singleton.screenWidth;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width,
                                          height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
                                          &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef context = CGBitmapContextCreate(pxdata, width,
                                                 height, 8, 4*width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

(6) Move your video to the camera roll

Because AVAssetWriter cannot write directly to the camera roll, this moves the video from "tmp/temp.mov" (or whatever filename you chose above) to the camera roll. (ALAssetsLibrary has since been deprecated in favor of PHPhotoLibrary, which the Swift answer below uses.)

- (void) saveToCameraRoll:(NSURL *)srcURL
{
    NSLog(@"srcURL: %@", srcURL);

    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    ALAssetsLibraryWriteVideoCompletionBlock videoWriteCompletionBlock =
    ^(NSURL *newURL, NSError *error) {
        if (error) {
            NSLog(@"Error writing image with metadata to Photo Library: %@", error);
        } else {
            NSLog(@"Wrote image with metadata to Photo Library %@", newURL.absoluteString);
        }
    };

    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:srcURL])
    {
        [library writeVideoAtPathToSavedPhotosAlbum:srcURL
                                    completionBlock:videoWriteCompletionBlock];
    }
}

Zoul's answer above gives a nice outline of what you will be doing. We commented this code extensively so you can see how it was done in working code.


Update to Swift 5

Last week I set out to write the iOS code to generate a video from images. I had a little bit of AVFoundation experience, but had never even heard of a CVPixelBuffer. I came across the answers on this page and also here. It took several days to dissect everything and put it all back together in Swift in a way that made sense to my brain. Below is what I came up with.

NOTE: If you copy/paste all the code below into a single Swift file, it should compile. You'll just need to tweak loadImages() and the RenderSettings values.

Part 1: Setting things up

Here I group all the export-related settings into a single RenderSettings struct.

import AVFoundation
import UIKit
import Photos

struct RenderSettings {

    var size: CGSize = .zero
    var fps: Int32 = 6   // frames per second
    var avCodecKey = AVVideoCodecType.h264
    var videoFilename = "render"
    var videoFilenameExt = "mp4"

    var outputURL: URL {
        // Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
        // Using the CachesDirectory ensures the file won't be included in a backup of the app.
        let fileManager = FileManager.default
        if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
            return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
        }
        fatalError("URLForDirectory() failed")
    }
}

Part 2: The ImageAnimator

The ImageAnimator class knows about your images and uses the VideoWriter class to perform the rendering. The idea is to keep the video content code separate from the low-level AVFoundation code. I also added saveToLibrary() here as a class function which gets called at the end of the chain to save the video to the Photo Library.

class ImageAnimator {

    // Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
    static let kTimescale: Int32 = 600

    let settings: RenderSettings
    let videoWriter: VideoWriter
    var images: [UIImage]!

    var frameNum = 0

    class func saveToLibrary(videoURL: URL) {
        PHPhotoLibrary.requestAuthorization { status in
            guard status == .authorized else { return }

            PHPhotoLibrary.shared().performChanges({
                PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
            }) { success, error in
                if !success {
                    print("Could not save video to photo library:", error)
                }
            }
        }
    }

    class func removeFileAtURL(fileURL: URL) {
        do {
            try FileManager.default.removeItem(atPath: fileURL.path)
        }
        catch _ as NSError {
            // Assume file doesn't exist.
        }
    }

    init(renderSettings: RenderSettings) {
        settings = renderSettings
        videoWriter = VideoWriter(renderSettings: settings)
        //images = loadImages()
    }

    func render(completion: (()->Void)?) {
        // The VideoWriter will fail if a file exists at the URL, so clear it out first.
        ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)

        videoWriter.start()
        videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
            ImageAnimator.saveToLibrary(videoURL: self.settings.outputURL)
            completion?()
        }
    }

    // This is the callback function for VideoWriter.render()
    func appendPixelBuffers(writer: VideoWriter) -> Bool {
        let frameDuration = CMTimeMake(value: Int64(ImageAnimator.kTimescale / settings.fps), timescale: ImageAnimator.kTimescale)

        while !images.isEmpty {
            if writer.isReadyForData == false {
                // Inform writer we have more buffers to write.
                return false
            }

            let image = images.removeFirst()
            let presentationTime = CMTimeMultiply(frameDuration, multiplier: Int32(frameNum))

            let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
            if success == false {
                fatalError("addImage() failed")
            }

            frameNum += 1
        }

        // Inform writer all buffers have been written.
        return true
    }
}
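One gap to fill: the loadImages() call in init() is commented out, so you supply your own. Here is a minimal sketch, assuming numbered assets in the bundle (the frame0.png through frame9.png names are hypothetical; substitute your own image source):

func loadImages() -> [UIImage] {
    // Hypothetical asset names; replace with however you collect your frames.
    return (0..<10).compactMap { UIImage(named: "frame\($0).png") }
}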

Part 3: The VideoWriter

The VideoWriter class does all the AVFoundation heavy lifting. It is mostly a wrapper around AVAssetWriter and AVAssetWriterInput. It also contains fancy code (written by not me) that knows how to translate an image into a CVPixelBuffer.

class VideoWriter {

    let renderSettings: RenderSettings

    var videoWriter: AVAssetWriter!
    var videoWriterInput: AVAssetWriterInput!
    var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!

    var isReadyForData: Bool {
        return videoWriterInput?.isReadyForMoreMediaData ?? false
    }

    class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {

        var pixelBufferOut: CVPixelBuffer?

        let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
        if status != kCVReturnSuccess {
            fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
        }

        let pixelBuffer = pixelBufferOut!

        CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

        let data = CVPixelBufferGetBaseAddress(pixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
                                bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)

        context!.clear(CGRect(x: 0, y: 0, width: size.width, height: size.height))

        let horizontalRatio = size.width / image.size.width
        let verticalRatio = size.height / image.size.height
        //let aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
        let aspectRatio = min(horizontalRatio, verticalRatio)   // ScaleAspectFit

        let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)

        let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
        let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0

        context?.draw(image.cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))
        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

        return pixelBuffer
    }

    init(renderSettings: RenderSettings) {
        self.renderSettings = renderSettings
    }

    func start() {

        let avOutputSettings: [String: Any] = [
            AVVideoCodecKey: renderSettings.avCodecKey,
            AVVideoWidthKey: NSNumber(value: Float(renderSettings.size.width)),
            AVVideoHeightKey: NSNumber(value: Float(renderSettings.size.height))
        ]

        func createPixelBufferAdaptor() {
            let sourcePixelBufferAttributesDictionary = [
                kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
                kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.size.width)),
                kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.size.height))
            ]
            pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
                                                                      sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
        }

        func createAssetWriter(outputURL: URL) -> AVAssetWriter {
            guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileType.mp4) else {
                fatalError("AVAssetWriter() failed")
            }

            guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaType.video) else {
                fatalError("canApplyOutputSettings() failed")
            }

            return assetWriter
        }

        videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
        videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: avOutputSettings)

        if videoWriter.canAdd(videoWriterInput) {
            videoWriter.add(videoWriterInput)
        }
        else {
            fatalError("canAddInput() returned false")
        }

        // The pixel buffer adaptor must be created before we start writing.
        createPixelBufferAdaptor()

        if videoWriter.startWriting() == false {
            fatalError("startWriting() failed")
        }

        videoWriter.startSession(atSourceTime: CMTime.zero)

        precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
    }

    func render(appendPixelBuffers: ((VideoWriter)->Bool)?, completion: (()->Void)?) {

        precondition(videoWriter != nil, "Call start() to initialize the writer")

        let queue = DispatchQueue(label: "mediaInputQueue")
        videoWriterInput.requestMediaDataWhenReady(on: queue) {
            let isFinished = appendPixelBuffers?(self) ?? false
            if isFinished {
                self.videoWriterInput.markAsFinished()
                self.videoWriter.finishWriting() {
                    DispatchQueue.main.async {
                        completion?()
                    }
                }
            }
            else {
                // Fall through. The closure will be called again when the writer is ready.
            }
        }
    }

    func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {

        precondition(pixelBufferAdaptor != nil, "Call start() to initialize the writer")

        let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
        return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
    }
}

Part 4: Make it happen

Once everything is in place, these are your 3 magic lines:

let settings = RenderSettings()
let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.render() {
    print("yes")
}
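Two things to watch for when you paste this in: RenderSettings.size defaults to .zero, and images is never assigned because the loadImages() call in init() is commented out. A slightly fuller usage sketch, with illustrative values:

var settings = RenderSettings()
settings.size = CGSize(width: 640, height: 480)   // the writer needs a real size; .zero is the default
settings.fps = 30

let imageAnimator = ImageAnimator(renderSettings: settings)
imageAnimator.images = loadImages()               // assign your [UIImage] array before rendering
imageAnimator.render() {
    print("Rendering complete")
}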