
Best way to mirror a UIWebView


As written in the comments, if the main need is to know WHEN to update the layer (and not how), I've moved my original answer below the "OLD ANSWER" line and added what was discussed in the comments:

First (100% Apple Review Safe ;-)

  • You can take periodic "screenshots" of your original UIView and compare the resulting NSData (old and new) --> if the data is different, the layer content has changed. There is no need to compare the full-resolution screenshots; you can do it with a smaller one for better performance (see the sketch right below).
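
A minimal sketch of that idea, only to make it concrete: the method names and the lastSnapshotData property are my own illustrative assumptions, not from the linked sample project.

    // Sketch: render a small thumbnail of the view and compare the resulting
    // NSData with the previous one to detect changes.
    - (NSData *)snapshotDataOfView:(UIView *)view scale:(CGFloat)scale
    {
        CGSize size = CGSizeMake(view.bounds.size.width * scale,
                                 view.bounds.size.height * scale);
        UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
        CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return UIImagePNGRepresentation(smallImage);
    }

    - (BOOL)viewDidChange:(UIView *)view
    {
        // 0.1 --> a 10% thumbnail is usually enough to detect a change
        NSData *newData = [self snapshotDataOfView:view scale:0.1];
        BOOL changed = ![newData isEqualToData:self.lastSnapshotData];
        self.lastSnapshotData = newData; // assumed NSData property on self
        return changed;
    }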

Second: performance friendly and "theoretically" review safe... but not sure :-/

I'll try to explain how I arrived at this code:

The main goal is to understand when TileLayer (a private subclass of CALayer used by UIWebView) becomes dirty.

The problem is that you can't access it directly. But you can use method swizzling to change the behavior of the layerSetNeedsDisplay: method in every CALayer and its subclasses.

You must be careful to avoid any radical change in the original behavior, and do only what is necessary to add a "notification" when the method is called.

When you have successfully detected each layerSetNeedsDisplay: call, the only remaining thing is to understand which CALayer is involved --> if it's the internal UIWebView TileLayer, we trigger an "isDirty" notification.

But we can't iterate through the UIWebView contents and look for the TileLayer; for example, simply using "isKindOfClass:[TileLayer class]" will surely get you a rejection (Apple uses a static analyzer to check the use of private API). What can you do?

Something tricky like... for example... comparing the size of the involved layer (the one calling layerSetNeedsDisplay:) with the UIWebView size? ;-) (see the sketch below)
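
Just to make the idea concrete, here is a rough sketch of the swizzling part. I'm assuming you exchange -[CALayer setNeedsDisplay] (the author talks about layerSetNeedsDisplay:, which is not public API); the notification name, the webView property and the size-based check are illustrative assumptions on my side.

    #import <objc/runtime.h>
    #import <QuartzCore/QuartzCore.h>

    @implementation CALayer (DirtyTracking)

    + (void)load
    {
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            // exchange the implementations once, for CALayer and all its subclasses
            Method original = class_getInstanceMethod(self, @selector(setNeedsDisplay));
            Method swizzled = class_getInstanceMethod(self, @selector(dt_setNeedsDisplay));
            method_exchangeImplementations(original, swizzled);
        });
    }

    - (void)dt_setNeedsDisplay
    {
        // the implementations are exchanged, so this calls the original setNeedsDisplay
        [self dt_setNeedsDisplay];

        // only "notify", don't change the original behavior
        [[NSNotificationCenter defaultCenter] postNotificationName:@"DTLayerSetNeedsDisplay"
                                                            object:self];
    }

    @end

    // in your view controller's @implementation:
    - (void)startObservingDirtyLayers
    {
        [[NSNotificationCenter defaultCenter] addObserverForName:@"DTLayerSetNeedsDisplay"
                                                          object:nil
                                                           queue:[NSOperationQueue mainQueue]
                                                      usingBlock:^(NSNotification *note) {
            CALayer *layer = note.object;
            // filter by size instead of by (private) class
            if (CGSizeEqualToSize(layer.bounds.size, self.webView.bounds.size)) {
                // very likely the web view's tile layer --> mark the mirror as dirty
            }
        }];
    }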

Moreover, sometimes the UIWebView replaces the child TileLayer and uses a new one, so you have to do this check multiple times.

Last thing: layerSetNeedsDisplay: is not always called when you simply scroll the UIWebView (if the layer is already built), so you also have to intercept scrolling / zooming through a delegate (see the sketch below).
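
One way to catch scrolling and zooming (an assumption on my side: here I observe the web view's internal scroll view via UIScrollViewDelegate, since UIWebViewDelegate itself has no scroll callbacks; the class and method names are mine):

    #import <UIKit/UIKit.h>

    // Sketch: become the delegate of the web view's internal scroll view to
    // know when it scrolls or zooms.
    @interface MirrorController : NSObject <UIScrollViewDelegate>
    @property (nonatomic, weak) UIWebView *webView;
    @end

    @implementation MirrorController

    - (void)startObserving
    {
        // careful: this replaces the scroll view's original delegate
        self.webView.scrollView.delegate = self;
    }

    - (void)scrollViewDidScroll:(UIScrollView *)scrollView
    {
        [self markMirrorDirty];
    }

    - (void)scrollViewDidZoom:(UIScrollView *)scrollView
    {
        [self markMirrorDirty];
    }

    - (void)markMirrorDirty
    {
        // trigger a new snapshot copy to the mirrored UIImageView
    }

    @end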

You will find that method swizzling is the reason for rejection in some apps, but it has always been motivated with "you changed the behavior of an object". In this case you don't change the behavior of anything, you simply intercept when a method is called. I think you can give it a try, or contact Apple Support to check whether it's allowed if you are not sure.

OLD ANSWER

I'm not sure this is performance friendly enough; I tried it only with both views on the same device and it works pretty well... you should try it using AirPlay.

The solution is quite simple: you take a "screenshot" of the UIWebView / MKMapView using UIGraphicsGetImageFromCurrentImageContext. You do this 30/60 times a second and copy the result into a UIImageView (visible on the second display; you can move it wherever you want).

To detect whether the view changed and avoid traffic on the wireless link, you can compare the two UIImages (the old frame and the new frame) byte by byte, and set the new one only if it's different from the previous. (Yeah, it works! ;-)

The only thing I didn't manage this evening is to make this comparison fast: if you look at the attached sample code, you'll see that the comparison is really CPU intensive (because it uses UIImagePNGRepresentation() to convert the UIImage to NSData) and makes the whole app go very slow. If you don't use the comparison (copying every frame), the app is fast and smooth (at least on my iPhone 5). But I think there are many ways to solve it... for example doing the comparison only every 4-5 frames, or building the NSData in the background (a quick sketch of the first idea follows).
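
For example, a variant of the copyImageIfNeeded method shown below that compares only every Nth frame (the 5-frame interval is just an assumption):

    // Sketch: run the expensive comparison only once every 5 frames,
    // and copy the frame unconditionally in between.
    - (void)copyImageIfNeeded
    {
        static NSUInteger frameCount = 0;
        UIImage *newImage = [self imageWithView:self.webView];

        BOOL shouldCompare = (frameCount++ % 5 == 0);
        if (!shouldCompare || ![self image:self.mirrorView.image isEqualTo:newImage]) {
            dispatch_async(dispatch_get_main_queue(), ^{
                self.mirrorView.image = newImage;
            });
        }
    }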

I attach a sample project: http://www.lombax.it/documents/ImageMirror.zip

In the project the frame comparison is disabled (the if statement is commented out). I attach the code here for future reference:

    // here you start a timer, 50 fps
    // the timer is started on a background thread to avoid blocking it when you scroll the webview
    - (IBAction)enableMirror:(id)sender
    {
        dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul); // 0ul --> unsigned long
        dispatch_async(queue, ^{
            // 0.02f --> 50 fps
            NSTimer __unused *timer = [NSTimer scheduledTimerWithTimeInterval:0.02f target:self selector:@selector(copyImageIfNeeded) userInfo:nil repeats:YES];
            // need to start a run loop, otherwise the thread stops
            CFRunLoopRun();
        });
    }

    // this method creates a UIImage with the content of the given view
    - (UIImage *)imageWithView:(UIView *)view
    {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return img;
    }

    // the method called by the timer
    - (void)copyImageIfNeeded
    {
        // this method is called from a background thread, so the code before the dispatch is executed in background
        UIImage *newImage = [self imageWithView:self.webView];

        // the copy is made only if the two images are really different (compared byte by byte)
        // this comparison method is cpu intensive
        // UNCOMMENT THE IF AND THE {} TO ENABLE THE FRAME COMPARISON
        //if (!([self image:self.mirrorView.image isEqualTo:newImage]))
        //{
            // this must be called on the main queue because it updates the user interface
            dispatch_queue_t queue = dispatch_get_main_queue();
            dispatch_async(queue, ^{
                self.mirrorView.image = newImage;
            });
        //}
    }

    // method to compare the two images - not performance friendly
    // it can be optimized, because you can "store" the old image and avoid
    // converting it again and again... until it changes
    // you could even try to generate the NSData in background when the frame
    // is created
    - (BOOL)image:(UIImage *)image1 isEqualTo:(UIImage *)image2
    {
        NSData *data1 = UIImagePNGRepresentation(image1);
        NSData *data2 = UIImagePNGRepresentation(image2);
        return [data1 isEqual:data2];
    }


I think your idea of using CADisplayLink is good. The main problem is that you're trying to refresh every frame. You can use the frameInterval property to decrease the frame rate automatically. Alternatively, you can use the timestamp property to know when the last update happened.
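
A minimal sketch of a throttled CADisplayLink (the 3-frame interval and the startMirroring / refreshMirror: names are assumptions on my side, not from the question):

    // Sketch: a CADisplayLink that fires every 3rd frame (~20 fps on a 60 Hz
    // display) instead of every frame.
    - (void)startMirroring
    {
        CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                          selector:@selector(refreshMirror:)];
        link.frameInterval = 3; // 60 Hz / 3 = ~20 updates per second
        [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }

    - (void)refreshMirror:(CADisplayLink *)link
    {
        // link.timestamp tells you when the last frame was rendered, in case
        // you prefer time-based throttling instead of frameInterval
        self.mirrorView.image = [self imageWithView:self.webView];
    }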

Another option that might just work: to know whether the layers are dirty, why don't you have one object be the delegate of all the layers, so it gets its drawLayer:inContext: called whenever each layer needs drawing? Then just update the other layers accordingly.
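
A minimal sketch of that delegate idea (hedged: this is only safe for layers you create yourself, since a view's backing layer already uses the view as its delegate and the delegate becomes responsible for drawing):

    // Sketch: one object acting as the delegate of several layers, so it
    // hears about every redraw of those layers.
    @interface LayerDirtyObserver : NSObject
    @end

    @implementation LayerDirtyObserver

    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
    {
        // called whenever one of the observed layers needs drawing -->
        // mark the mirrored view as dirty here (and draw the layer's content,
        // since the delegate now owns the drawing)
    }

    @end

    // usage, only for a layer you own:
    // someLayer.delegate = dirtyObserver;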