
Simulate long exposure from video frames OpenCV


First of all (probably this is not your case, since you pointed out that you are working on a video and not a live camera): if you base your code on the value of the frame rate, make sure 30 fps is the effective value and not just the maximum one. Cameras sometimes adjust that number automatically based on the amount of light they get from the environment: if it is dark, the exposure time is increased and the frame rate drops accordingly.
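
If you are reading the video with cv::VideoCapture, a quick sanity check (my own sketch; the file name is hypothetical) is to ask the backend what frame rate it actually reports:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        cv::VideoCapture cap("input.mov");         // hypothetical input video
        // Some backends report 0 or only the nominal rate, so treat this as a hint.
        double fps = cap.get(cv::CAP_PROP_FPS);
        std::cout << "Reported frame rate: " << fps << std::endl;
        return 0;
    }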

Second point: it is really hard to simulate the real mechanism of photographic exposure given a bunch of pixels. Imagine you want to double the exposure time; this should be simulated by two consecutive frames. In the real world, doubling the exposure time means the shutter speed is halved (the shutter stays open twice as long), so twice as much light hits the sensor or film, and the result is a brighter image.
How do you simulate this? Consider for simplicity the case of two quite bright grayscale images you want to merge. If at a given point the pixel values are, say, 180 and 181, what is the resulting value? The first answer would be 180 + 181, but pixel intensities range between 0 and 255, so the sum has to be truncated at 255. A real camera with increased exposure would probably behave differently and not reach the maximum value.
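
As a tiny illustration of that truncation (my own sketch, not part of the original answer), OpenCV's saturating arithmetic clips the sum of two bright 8-bit pixels at 255:

    #include <opencv2/core.hpp>
    #include <iostream>

    int main() {
        cv::Mat a(1, 1, CV_8UC1, cv::Scalar(180));
        cv::Mat b(1, 1, CV_8UC1, cv::Scalar(181));
        cv::Mat sum;
        cv::add(a, b, sum);                                 // saturating add on 8-bit data
        std::cout << (int)sum.at<uchar>(0, 0) << std::endl; // prints 255, not 361
        return 0;
    }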

Now I’ll consider your code.
The first time you process an image (i.e. run the function), you simply store the frame in the variable _exposed.
The second time you blend 29/30 of the new frame with 1/30 of the previously stored image.
The third time, 29/30 of the third frame is blended with the result of the previous operation. This places an ever-fading weight on the first frame, which has virtually disappeared.
The last time you call the function, you again sum 29/30 of the last frame and 1/30 of the previous result. This means the effect of the first frames has virtually disappeared, and even the previous frame only counts for a share of 29/(30×30). The image you get is just the last frame with a slight blur coming from the frames before it.
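
To make those weights concrete, here is a small sketch (mine, using the 29/30 and 1/30 factors described above) that prints how much each frame still contributes to the final image after 30 calls:

    #include <cstdio>

    int main() {
        const double keep  = 1.0 / 30.0;   // weight given to the accumulated image
        const double fresh = 29.0 / 30.0;  // weight given to each new frame
        const int    n     = 30;           // number of processed frames

        for (int i = 1; i <= n; ++i) {
            // frame 1 is stored directly; every later frame enters with weight `fresh`
            // and is then scaled by `keep` once for each frame that follows it
            double w = (i == 1) ? 1.0 : fresh;
            for (int j = i; j < n; ++j)
                w *= keep;
            std::printf("frame %2d contributes %.10f\n", i, w);
        }
        return 0;
    }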
How do you obtain a simulation of exposure? If you simply want to average 30 frames, replace your blending code with these lines:

    if (_frameCount == 0) {
        _exposed = image.clone();
        addWeighted(_exposed, 0.0, image, alpha, 0.0, _exposed);
    } else {
        addWeighted(_exposed, 1.0, image, alpha, 0.0, _exposed);
    }
    _frameCount++;

If you also want to make the image brighter to some extent, you could simulate it via a multiplication factor:

    if (_frameCount == 0) {
        _exposed = image.clone();
        addWeighted(_exposed, 0.0, image, alpha*brightfactor, 0.0, _exposed);
    } else {
        addWeighted(_exposed, 1.0, image, alpha*brightfactor, 0.0, _exposed);
    }
    _frameCount++;

Tune brightfactor to the value that best simulates a real increase in exposure time. (EDIT: a value between 1.5 and 2.5 should do the job.)
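
For completeness, a minimal end-to-end sketch of the averaging variant above, assuming a 30-frame clip read with cv::VideoCapture; the file names and the float accumulator are my own assumptions, not part of the answer:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap("input.mov");            // hypothetical input video
        const int framesToBlend = 30;
        const double alpha = 1.0 / framesToBlend;

        cv::Mat frame, acc;
        for (int i = 0; i < framesToBlend && cap.read(frame); ++i) {
            cv::Mat f32;
            frame.convertTo(f32, CV_32FC3);           // accumulate in float to avoid clipping
            if (acc.empty())
                acc = cv::Mat::zeros(f32.size(), f32.type());
            cv::addWeighted(acc, 1.0, f32, alpha, 0.0, acc);
        }

        if (!acc.empty()) {
            cv::Mat exposed;
            acc.convertTo(exposed, CV_8UC3);          // back to 8-bit for saving
            cv::imwrite("long_exposure.png", exposed);
        }
        return 0;
    }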


In my opinion using alpha is not the correct way.

You should accumulate the (absolute) differences from the exposure frame:

    if (_frameCount == 0) {
        _exposed = image.clone();
    } else {
        _exposed += image - _exposed;
    }
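
Worth noting (my own observation, not something stated in that answer): for 8-bit Mats the expression image - _exposed saturates at zero, so this update effectively keeps the brightest value seen so far at each pixel, which is the classic way to render light trails. Written explicitly, the same update could be sketched as:

    if (_frameCount == 0) {
        _exposed = image.clone();
    } else {
        // keep the per-pixel maximum of the accumulated image and the new frame
        cv::max(_exposed, image, _exposed);
    }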


The following approach should work in a case where

  • you have a known (or learned) background
  • you can segment the motion so that you get a mask for the foreground

Suppose you obtained such a background and can get a foreground mask for each frame that you capture after the background-learning stage. Let's denote

  • the learned background as bg
  • frame at time t as I_t
  • corresponding foreground mask for I_t as fgmask_t

Then update the background for each frame as

I_t.copyTo(bg, fgmask_t)

where copyTo is a method of the OpenCV Mat class.

So the procedure would be

    Learn bg
    for each frame I_t
    {
        get fgmask_t
        I_t.copyTo(bg, fgmask_t)
    }

When frame capture is over, bg will contain the history of motion.

You can use a Gaussian Mixture Model (BackgroundSubtractorMOG variants in OpenCV) or a simple frame differencing technique for this. The quality will depend on how well the technique segments the motion (or the quality of the foreground mask).
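
A minimal sketch of that procedure using BackgroundSubtractorMOG2; the learning-frame count and file names here are my own assumptions:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap("input.mov");          // hypothetical input video
        auto sub = cv::createBackgroundSubtractorMOG2();

        cv::Mat frame, fgmask, bg;

        // Background-learning stage: feed some initial frames so the model settles
        // (60 is an arbitrary choice).
        for (int i = 0; i < 60 && cap.read(frame); ++i)
            sub->apply(frame, fgmask);
        sub->getBackgroundImage(bg);                // the learned background, bg

        // Accumulation stage: paste the moving pixels of each frame into bg.
        while (cap.read(frame)) {
            sub->apply(frame, fgmask);              // fgmask_t
            frame.copyTo(bg, fgmask);               // I_t.copyTo(bg, fgmask_t)
        }

        if (!bg.empty())
            cv::imwrite("motion_history.png", bg);
        return 0;
    }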

I think this should work well for a stationary camera, but if the camera moves, it may not work very well except in a situation where the camera tracks an object.