How do I use OpenCV's remap function?


This is just a simple misunderstanding of the documentation, and I don't blame you---it took me a few fumblings to understand it, too. The docs are clear, but this function probably doesn't work in the way you expect; in fact, it works in the opposite direction from what I expected at first.

What remap() doesn't do is take the coordinates of your source image, transform the points, and then interpolate. What remap() does do is, for every pixel in the destination image, look up where it comes from in the source image, and then assign an interpolated value. It needs to work this way because, in order to interpolate, it needs to look at the values in the source image around each mapped location. Let me expand (I might repeat myself a bit, but don't take it the wrong way).
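Before getting to the docs, here's a tiny sketch of the interpolation part (my own toy example, not from the question): ask remap() for a fractional source coordinate and, with INTER_LINEAR, you should get back roughly the bilinear blend of the four surrounding source pixels.

import numpy as np
import cv2

src = np.array([[  0, 100],
                [ 50, 150]], dtype=np.uint8)

# ask for the value halfway between all four pixels
map_x = np.array([[0.5]], dtype=np.float32)
map_y = np.array([[0.5]], dtype=np.float32)

out = cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)
# out[0, 0] should come out to about 75, the bilinear average of 0, 100, 50, 150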

From the remap() docs:

map1 – The first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2. See convertMaps() for details on converting a floating point representation to fixed-point for speed.

map2 – The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively.

The verbiage here on map1 with "the first map of..." can be confusing. Remember, these are strictly the coordinates of where your image gets mapped from: the points are mapped from src at map_x(x, y), map_y(x, y) and then placed into dst at x, y. And the maps should be the same shape as the destination image you want to warp to. Note the equation shown in the docs:

dst(x, y) = src(map_x(x, y), map_y(x, y))

Here map_x(x, y) looks up map_x at the row and column given by x, y, and likewise for map_y; the source image is then sampled at those coordinates. In other words, it's looking up the mapped coordinates of x, y in src, and then assigning that value to x, y in dst. If you stare at this long enough, it starts to make some sense. At pixel (0, 0) in the new destination image, I look at map_x and map_y, which tell me the location of the corresponding pixel in the source image, and then I can assign an interpolated value at (0, 0) in the destination image by looking at nearby values in the source. This is sort of the fundamental reason why remap() works this way; it needs to know where a pixel came from so it can grab the neighboring pixels to interpolate.
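As a quick illustration of that direction, here's a horizontal flip done with remap() (a little sketch of my own): each destination pixel (x, y) reads from the source pixel (w - 1 - x, y).

import numpy as np
import cv2

img = cv2.imread("img1.png")  # any image will do
h, w = img.shape[:2]

# destination row/column coordinates
map_y, map_x = np.indices((h, w), dtype=np.float32)

# each destination pixel (x, y) looks up the source pixel (w - 1 - x, y)
flipped = cv2.remap(img, (w - 1) - map_x, map_y, cv2.INTER_LINEAR)
# should give the same result as cv2.flip(img, 1)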

Small, contrived example

import numpy as np
import cv2

img = np.uint8(np.random.rand(8, 8)*255)
#array([[230,  45, 153, 233, 172, 153,  46,  29],
#       [172, 209, 186,  30, 197,  30, 251, 200],
#       [175, 253, 207,  71, 252,  60, 155, 124],
#       [114, 154, 121, 153, 159, 224, 146,  61],
#       [  6, 251, 253, 123, 200, 230,  36,  85],
#       [ 10, 215,  38,   5, 119,  87,   8, 249],
#       [  2,   2, 242, 119, 114,  98, 182, 219],
#       [168,  91, 224,  73, 159,  55, 254, 214]], dtype=uint8)

map_y = np.array([[0, 1], [2, 3]], dtype=np.float32)
map_x = np.array([[5, 6], [7, 10]], dtype=np.float32)

mapped_img = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
#array([[153, 251],
#       [124,   0]], dtype=uint8)

So what's happening here? In this case it's easiest to examine the matrices:

map_y
=====
0  1
2  3

map_x
=====
5  6
7  10

So the destination image at (0, 0) has the same value as the source image at map_y(0, 0), map_x(0, 0) = 0, 5, and the source image at row 0 and column 5 is 153. Note that in the destination image mapped_img[0, 0] = 153. No interpolation is happening here since my map coordinates are exact integers. I also included an out-of-bounds index (map_x[1, 1] = 10, which is larger than the image width); notice that it just gets assigned the value 0 by default when it's out-of-bounds (see the sketch below for how to change that).
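That fill value is just the default border handling (BORDER_CONSTANT with a value of 0). If you'd rather have out-of-range lookups filled differently, you can pass borderMode/borderValue; a small sketch continuing the example above:

# fill out-of-bounds lookups with 255 instead of 0
mapped_img = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                       borderMode=cv2.BORDER_CONSTANT, borderValue=255)

# or clamp to the nearest edge pixel instead of using a constant
mapped_img = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                       borderMode=cv2.BORDER_REPLICATE)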

Full use-case example

Here's a full-fledged code example, using a ground truth homography, warping the pixel locations manually, and using remap() to then map the image from the transformed points. Note here that my homography transforms true_dst to src. Thus, I make a set of however many points I want, and then calculate where those points lie in the source image by transforming with the homography. Then remap() is used to look up those points in the source image, and map them into the destination image.

import numpy as np
import cv2

# read images
true_dst = cv2.imread("img1.png")
src = cv2.imread("img2.png")

# ground truth homography from true_dst to src
H = np.array([
    [8.7976964e-01,   3.1245438e-01,  -3.9430589e+01],
    [-1.8389418e-01,   9.3847198e-01,   1.5315784e+02],
    [1.9641425e-04,  -1.6015275e-05,   1.0000000e+00]])

# create indices of the destination image and linearize them
h, w = true_dst.shape[:2]
indy, indx = np.indices((h, w), dtype=np.float32)
lin_homg_ind = np.array([indx.ravel(), indy.ravel(), np.ones_like(indx).ravel()])

# warp the coordinates of src to those of true_dst
map_ind = H.dot(lin_homg_ind)
map_x, map_y = map_ind[:-1]/map_ind[-1]  # ensure homogeneity
map_x = map_x.reshape(h, w).astype(np.float32)
map_y = map_y.reshape(h, w).astype(np.float32)

# remap!
dst = cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)
blended = cv2.addWeighted(true_dst, 0.5, dst, 0.5, 0)
cv2.imshow('blended.png', blended)
cv2.waitKey()

(Image: remap for warping, showing the blended result)

Images and ground truth homographies from the Visual Geometry Group at Oxford.
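As an aside, the convertMaps() call mentioned in the docs quote above converts the floating-point maps into a fixed-point representation, which can speed things up if you call remap() with the same maps repeatedly; a quick sketch reusing the map_x, map_y, and src from the example above:

# optional: convert the float maps to fixed-point for faster repeated remapping
map1_fixed, map2_fixed = cv2.convertMaps(map_x, map_y, cv2.CV_16SC2)
dst_fast = cv2.remap(src, map1_fixed, map2_fixed, cv2.INTER_LINEAR)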


warped = cv.warpPerspective(img, H, (width, height))

is equivalent to

idx_pts = np.mgrid[0:width, 0:height].reshape(2, -1).T
map_pts = transform(idx_pts, np.linalg.inv(H))
map_pts = map_pts.reshape(width, height, 2).astype(np.float32)
warped = cv.remap(img, map_pts, None, cv.INTER_CUBIC).transpose(1, 0, 2)

where the transform function is

def transform(src_pts, H):
    # src = [src_pts 1]
    src = np.pad(src_pts, [(0, 0), (0, 1)], constant_values=1)
    # pts = H * src
    pts = np.dot(H, src.T).T
    # normalize and throw z=1
    pts = (pts / pts[:, 2].reshape(-1, 1))[:, 0:2]
    return pts

Here src_pts is an array with one point per row, [[x0, y0], [x1, y1], [x2, y2], ...], and the homography itself can be estimated from point correspondences with

H, status = cv.findHomography(src_pts, dst_pts)
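For what it's worth, here's a quick sanity check of that equivalence (my own sketch, reusing the transform() helper above and a made-up homography); expect at most tiny differences from interpolation and border handling:

import numpy as np
import cv2 as cv

height, width = 240, 320
img = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)

# an arbitrary invertible homography, just for testing
H = np.array([[1.0,  0.05, 10.0],
              [0.02, 1.0,   5.0],
              [1e-5, 1e-5,  1.0]])

warped_direct = cv.warpPerspective(img, H, (width, height), flags=cv.INTER_CUBIC)

idx_pts = np.mgrid[0:width, 0:height].reshape(2, -1).T
map_pts = transform(idx_pts, np.linalg.inv(H))
map_pts = map_pts.reshape(width, height, 2).astype(np.float32)
warped_remap = cv.remap(img, map_pts, None, cv.INTER_CUBIC).transpose(1, 0, 2)

print(np.abs(warped_direct.astype(int) - warped_remap.astype(int)).mean())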