
Multiple videos with AVPlayer


So the project is now live in the App Store and it is time to come back to this thread and share my findings and reveal what I ended up doing.

What did not work

The first option, using one big AVComposition with all the videos in it, was not good enough, since there was no way to jump to a specific time in the composition without a small scrubbing glitch. Furthermore, I had a problem pausing exactly between two videos in the composition, since the API could not provide frame-accurate guarantees for pausing.
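
For context, frame-accurate seeking in AVFoundation is requested by passing zero tolerances to seekToTime:; here is a minimal sketch (the function name and parameters are placeholders of mine), which in my case still produced the scrubbing glitch inside a composition:

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: request a frame-accurate seek by passing zero tolerance in both
// directions. Even with this, pausing exactly on the boundary between two
// videos inside the composition was not reliable in my testing.
static void SeekFrameAccurately(AVPlayer *player, CMTime targetTime) {
    [player seekToTime:targetTime
       toleranceBefore:kCMTimeZero
        toleranceAfter:kCMTimeZero
     completionHandler:^(BOOL finished) {
        if (finished) {
            [player pause];
        }
    }];
}
```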

The third option, having two AVPlayers take turns, worked great in practice, especially on the iPad 4 and iPhone 5. Devices with less RAM were a problem, though, since keeping several videos in memory at the same time consumed too much memory, especially because I had to deal with videos of very high resolution.

What I ended up doing

Well, that left option number 2: creating an AVPlayerItem for a video when needed and feeding it to the AVPlayer. The good thing about this solution was its memory consumption. By lazily creating the AVPlayerItems and throwing them away the moment they were no longer needed, I could keep memory consumption to a minimum, which was very important in order to support older devices with limited RAM. The problem with this solution was that when going from one video to the next, the screen went blank for a brief moment while the next video was loaded into memory. My idea for fixing this was to put an image behind the AVPlayer that would show while the player was buffering. I knew I needed images that were exactly pixel-for-pixel identical to the video, so I captured images that were exact copies of the last and first frames of the videos. This solution worked great in practice.
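A minimal sketch of that approach (the class and method names here are illustrative, not from the shipping app): one AVPlayer, lazily created items, and a poster UIImageView sitting behind the AVPlayerLayer to cover the loading gap:

```objc
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Sketch of option 2: a single reused AVPlayer, items created on demand,
// and a poster image view behind the player layer that covers the brief
// blank moment while the next item buffers.
@interface VideoSequencer : NSObject
@property (nonatomic, strong) AVPlayer *player;
@property (nonatomic, strong) UIImageView *posterView; // sits behind the AVPlayerLayer
@end

@implementation VideoSequencer

- (void)advanceToVideoAtURL:(NSURL *)url posterImage:(UIImage *)poster {
    // Show the pixel-perfect poster (a capture of the previous video's last
    // frame or the next video's first frame) immediately.
    self.posterView.image = poster;

    // Create the item only when needed; replacing the current item releases
    // the previous one, keeping memory usage low on older devices.
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:url];
    [self.player replaceCurrentItemWithPlayerItem:item];
    [self.player play];
}

@end
```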

The problem with this solution

I had the issue, though, that the position of the image inside the UIImageView was not the same as the position of the video inside the AVPlayer when the video/image was not at its native size or a multiple-of-four scaling of it. In other words, I had a problem with how half pixels were handled: UIImageView and AVPlayer did not seem to round them the same way.

How I fixed it

I tried a lot of things, since my application was using the videos in an interactive way where they were shown at different sizes. I tried changing the magnificationFilter and minificationFilter of the AVPlayerLayer and the CALayer to use the same algorithm, but that did not really change anything. In the end I created an iPad app that could automatically take screenshots of the videos in all the sizes I needed, and then used the right image when the video was scaled to a certain size. This gave images that were pixel-perfect at every size at which I showed a specific video. Not a perfect toolchain, but the result was perfect.
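For the curious, the filter experiment looked roughly like this (a sketch with illustrative names; it did not fix the alignment):

```objc
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Unsuccessful experiment: force the AVPlayerLayer and the image view's
// backing CALayer to sample with the same filter, hoping their half-pixel
// rounding would then match. It did not.
static void MatchSamplingFilters(AVPlayerLayer *playerLayer, UIImageView *imageView) {
    playerLayer.magnificationFilter = kCAFilterNearest;
    playerLayer.minificationFilter  = kCAFilterNearest;
    imageView.layer.magnificationFilter = kCAFilterNearest;
    imageView.layer.minificationFilter  = kCAFilterNearest;
}
```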

Final reflection

The main reason this positioning problem was so visible for me (and therefore so important to solve) is that the video content my app plays consists of drawn animations, where a lot of the content sits in a fixed position and only part of the picture moves. If all the content shifts by just one pixel, it produces a very visible and ugly glitch. At WWDC this year I discussed the problem with an Apple engineer who is an expert in AVFoundation. When I described it to him, his suggestion was basically to go with option 3, but I explained that this was not possible because of memory consumption and that I had already tried that solution. In that light he said I had chosen the right solution, and he asked me to file a bug report for the UIImageView/AVPlayer positioning when video is scaled.


You may have looked at this already, but have you checked out the AVQueuePlayer documentation?

It is designed for playing AVPlayerItems in a queue and is a direct subclass of AVPlayer, so you can use it in the same way. You set it up as follows:

    AVPlayerItem *firstItem = [AVPlayerItem playerItemWithURL:firstItemURL];
    AVPlayerItem *secondItem = [AVPlayerItem playerItemWithURL:secondItemURL];
    AVQueuePlayer *player = [AVQueuePlayer queuePlayerWithItems:[NSArray arrayWithObjects:firstItem, secondItem, nil]];
    [player play];

If you want to add new items to the queue at runtime just use this method:

    [player insertItem:thirdPlayerItem afterItem:firstPlayerItem];
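
A few other AVQueuePlayer operations that may be useful here (these are part of the documented API; `player` and the item variables are from the setup above):

```objc
// Skip the remainder of the current item and start playing the next one.
[player advanceToNextItem];

// Remove a queued item that should no longer be played.
[player removeItem:secondItem];

// Inspect the items remaining in the queue.
NSArray *remainingItems = [player items];
```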

I haven't tested whether this reduces the flickering issue you mentioned, but it seems like this would be the way to go.


Update — https://youtu.be/7QlaO7WxjGg

Here's your answer using a collection view as an example, which will play 8 videos at a time (note that no manual memory management of any kind is necessary under ARC):

    - (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath {
        UICollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:kCellIdentifier forIndexPath:indexPath];
        // Enumerate the visible cells' index paths to find a match.
        // Note: indexOfObjectPassingTest: returns NSNotFound (not 0) on no
        // match, so compare explicitly instead of treating it as a boolean.
        NSUInteger visibleIndex = [self.collectionView.indexPathsForVisibleItems
            indexOfObjectPassingTest:^BOOL(NSIndexPath * _Nonnull obj, NSUInteger idx, BOOL * _Nonnull stop) {
                return (obj.item == indexPath.item);
            }];
        if (visibleIndex != NSNotFound) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [self drawLayerForPlayerForCell:cell atIndexPath:indexPath];
            });
        }
        return cell;
    }

    - (void)drawPosterFrameForCell:(UICollectionViewCell *)cell atIndexPath:(NSIndexPath *)indexPath {
        [self.imageManager requestImageForAsset:AppDelegate.sharedAppDelegate.assetsFetchResults[indexPath.item]
                                     targetSize:AssetGridThumbnailSize
                                    contentMode:PHImageContentModeAspectFill
                                        options:nil
                                  resultHandler:^(UIImage *result, NSDictionary *info) {
            cell.contentView.layer.contents = (__bridge id)result.CGImage;
        }];
    }

    - (void)drawLayerForPlayerForCell:(UICollectionViewCell *)cell atIndexPath:(NSIndexPath *)indexPath {
        cell.contentView.layer.sublayers = nil;
        [self.imageManager requestPlayerItemForVideo:(PHAsset *)self.assetsFetchResults[indexPath.item]
                                             options:nil
                                       resultHandler:^(AVPlayerItem * _Nullable playerItem, NSDictionary * _Nullable info) {
            dispatch_sync(dispatch_get_main_queue(), ^{
                if ([info[PHImageResultIsInCloudKey] boolValue]) {
                    // The asset lives in iCloud; show a poster frame instead.
                    [self drawPosterFrameForCell:cell atIndexPath:indexPath];
                } else {
                    AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:[AVPlayer playerWithPlayerItem:playerItem]];
                    playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
                    playerLayer.borderColor = [UIColor whiteColor].CGColor;
                    playerLayer.borderWidth = 1.0f;
                    playerLayer.frame = cell.contentView.layer.bounds;
                    [cell.contentView.layer addSublayer:playerLayer];
                    [playerLayer.player play];
                }
            });
        }];
    }

The drawPosterFrameForCell: method places an image where a video cannot be played because the asset is stored in iCloud rather than on the device.

Anyway, this is the starting point; once you understand how it works, you can do everything you wanted without the memory-related glitches you described.