
Luma Key (create alpha mask from image) for iOS


There is a reason why blue or green screens are typically used for chroma keying in movie production, instead of white: almost anything can be white, or sufficiently close to white, in a photo, especially eyes, highlights, or parts of the skin. It is also quite hard to find a uniform white wall without shadows, at least ones cast by your subject.

I would recommend building a histogram, finding the most frequently occurring color among the brightest ones, then searching for the biggest area of that color using some threshold. Then do a flood fill from that area until sufficiently different colors are encountered. All of that can be done quite easily in software, unless you need to process a realtime video stream.
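If you want to experiment with the histogram step, a minimal sketch could look something like this (the method name and the 200-255 "bright" range are illustrative assumptions, and the flood fill is left out):

- (NSUInteger)dominantBrightBinForImage:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw into a known RGBA8888 buffer so the byte layout is predictable.
    unsigned char *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    NSUInteger histogram[256] = {0};
    for (size_t i = 0; i < width * height; i++) {
        unsigned char r = pixels[i * 4];
        unsigned char g = pixels[i * 4 + 1];
        unsigned char b = pixels[i * 4 + 2];
        // Approximate luma; good enough for picking the brightest bins.
        unsigned char luma = (unsigned char)(0.299 * r + 0.587 * g + 0.114 * b);
        histogram[luma]++;
    }

    // Most frequent value among the brightest bins (200..255 is an arbitrary cutoff).
    NSUInteger bestBin = 255, bestCount = 0;
    for (NSUInteger bin = 200; bin < 256; bin++) {
        if (histogram[bin] > bestCount) { bestCount = histogram[bin]; bestBin = bin; }
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return bestBin;
}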


So, to change white to transparent we can use this method:

- (UIImage *)changeWhiteColorTransparent:(UIImage *)image {
    CGImageRef rawImageRef = image.CGImage;
    // {min, max} pairs for R, G and B: mask out near-white pixels.
    const CGFloat colorMasking[6] = {222, 255, 222, 255, 222, 255};

    UIGraphicsBeginImageContext(image.size);
    CGImageRef maskedImageRef = CGImageCreateWithMaskingColors(rawImageRef, colorMasking);

    // Flip the context so the CGImage is not drawn upside down.
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, image.size.height);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);

    CGContextDrawImage(UIGraphicsGetCurrentContext(),
                       CGRectMake(0, 0, image.size.width, image.size.height),
                       maskedImageRef);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    CGImageRelease(maskedImageRef);
    UIGraphicsEndImageContext();
    return result;
}
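For reference, CGImageCreateWithMaskingColors reads the array as {min, max} pairs per color component, so {222, 255, 222, 255, 222, 255} masks every pixel whose red, green, and blue values all fall between 222 and 255, i.e. near-white pixels. Lowering the minimums makes the key more aggressive.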

and to replace the non-transparent pixels with black, we can use:

- (UIImage *)changeColor:(UIImage *)image {
    CGRect contextRect;
    contextRect.origin.x = 0.0f;
    contextRect.origin.y = 0.0f;
    contextRect.size = [image size];

    CGSize itemImageSize = [image size];
    CGPoint itemImagePosition;
    itemImagePosition.x = ceilf((contextRect.size.width - itemImageSize.width) / 2);
    itemImagePosition.y = ceilf(contextRect.size.height - itemImageSize.height);

    UIGraphicsBeginImageContext(contextRect.size);
    CGContextRef c = UIGraphicsGetCurrentContext();

    // Set up a transparency layer and clip to the image's alpha mask.
    CGContextBeginTransparencyLayer(c, NULL);
    CGContextScaleCTM(c, 1.0, -1.0);
    CGContextClipToMask(c,
                        CGRectMake(itemImagePosition.x, -itemImagePosition.y,
                                   itemImageSize.width, -itemImageSize.height),
                        [image CGImage]);

    // Fill the clipped area with black, then end the transparency layer.
    CGContextSetFillColorWithColor(c, [UIColor blackColor].CGColor);
    contextRect.size.height = -contextRect.size.height;
    contextRect.size.height -= 15;
    CGContextFillRect(c, contextRect);
    CGContextEndTransparencyLayer(c);

    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

so in practice this would be:

- (UIImage *)silhouetteForImage:(UIImage *)img {
    return [self changeColor:[self changeWhiteColorTransparent:img]];
}

Obviously you would call this on a background thread to keep everything running smoothly, for example:
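With GCD that could look something like this; `sourceImage` and `self.imageView` are placeholders for your own objects:

// Do the work on a background queue, then hop back to the
// main queue for the UIKit update.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *silhouette = [self silhouetteForImage:sourceImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = silhouette;
    });
});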


Having a play with Quartz Composer and the Core Image filters may help you. I believe that this code should give you a silhouette:

- (CGImageRef)silhouetteOfImage:(UIImage *)input {
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CIImage *ciInput = [CIImage imageWithCGImage:[input CGImage]];
    // CIFalseColor maps the image onto a two-color gradient; opaque black
    // plus transparent black turns it into a silhouette mask.
    CIFilter *filter = [CIFilter filterWithName:@"CIFalseColor"];
    [filter setValue:[CIColor colorWithRed:0.0 green:0.0 blue:0.0 alpha:1.0] forKey:@"inputColor0"];
    [filter setValue:[CIColor colorWithRed:0.0 green:0.0 blue:0.0 alpha:0.0] forKey:@"inputColor1"];
    [filter setValue:ciInput forKey:kCIInputImageKey];
    CIImage *outImage = [filter valueForKey:kCIOutputImageKey];
    CGImageRef result = [ciContext createCGImage:outImage fromRect:[ciInput extent]];
    return result;
}
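Note that createCGImage follows the Core Foundation create rule, so the caller owns the returned CGImageRef and must release it. A typical call site (sourceImage is a placeholder) would be:

CGImageRef cgSilhouette = [self silhouetteOfImage:sourceImage];
UIImage *silhouette = [UIImage imageWithCGImage:cgSilhouette];
CGImageRelease(cgSilhouette); // UIImage retains its own reference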