
How do I detect that two images are "the same" even if one has slightly different cropping/ratio?


You may want to take a look at feature matching. The idea is to find features in two images and match them; this method is commonly used to find a template (say, a logo) in another image. A feature, in essence, is anything humans would find interesting in an image, such as a corner or an open space. There are many feature detection techniques out there, but my recommendation is the scale-invariant feature transform (SIFT). SIFT is invariant to image translation, scaling, and rotation, partially invariant to illumination changes, and robust to local geometric distortion. This matches your specification, where the images can have slightly different ratios.

Given your two provided images, here's an attempt to match the features using the FLANN feature matcher. To determine whether the two images are the same, we can compare the number of matches that pass the ratio test described in Distinctive Image Features from Scale-Invariant Keypoints by David G. Lowe against a predetermined threshold. In short, the ratio test checks whether a match is ambiguous and should be removed; you can treat it as an outlier-removal technique. We count the number of matches that pass this test to decide whether the two images are the same. Here are the feature matching results:

Matches: 42

The dots represent all matches detected, while the green lines represent the "good" matches that pass the ratio test. Without the ratio test, all the points would be drawn; in this way, you can use the test as a filter to keep only the best-matched features.
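The ratio test itself is independent of OpenCV, so it can be sketched on its own. Here's a minimal illustration on synthetic (best, second-best) distance pairs; the 0.7 factor is Lowe's suggested value, and the sample distances are made up for demonstration:

```python
def ratio_test(matches, ratio=0.7):
    """Keep a match only if its best distance is clearly smaller than
    the second-best distance (Lowe's ratio test); ambiguous matches,
    where the two distances are close, are dropped as outliers."""
    return [(m, n) for m, n in matches if m < ratio * n]

# Synthetic (best, second-best) distance pairs for four keypoints.
matches = [(0.2, 0.9), (0.5, 0.55), (0.1, 0.8), (0.45, 0.5)]

good = ratio_test(matches)
print(len(good))  # → 2: only the unambiguous matches survive
```

Lowering the ratio keeps fewer, stricter matches; raising it toward 1.0 keeps nearly everything.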


I implemented it in Python since I'm not very familiar with Rails. Hope this helps, good luck!

Code

import numpy as np
import cv2

# Load images in grayscale
image1 = cv2.imread('1.jpg', 0)
image2 = cv2.imread('2.jpg', 0)

# Create the SIFT object
sift = cv2.xfeatures2d.SIFT_create(700)

# Find keypoints and descriptors directly
kp1, des1 = sift.detectAndCompute(image2, None)
kp2, des2 = sift.detectAndCompute(image1, None)

# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)  # or pass an empty dictionary
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# Need to draw only good matches, so create a mask
matchesMask = [[0, 0] for i in range(len(matches))]
count = 0

# Ratio test as per Lowe's paper (Lowe suggests 0.7);
# modify the factor below to change the threshold
for i, (m, n) in enumerate(matches):
    if m.distance < 0.15 * n.distance:
        count += 1
        matchesMask[i] = [1, 0]

# Draw lines for the good matches only
draw_params = dict(matchColor=(0, 255, 0),
                   # singlePointColor=(255, 0, 0),
                   matchesMask=matchesMask,
                   flags=0)

# Display the matches
result = cv2.drawMatchesKnn(image2, kp1, image1, kp2, matches, None, **draw_params)
print('Matches:', count)
cv2.imshow('result', result)
cv2.waitKey()


Because ImageMagick is a very old, advanced, many-featured tool, it is difficult to build an interface that covers most of its features. As great as it is, RMagick does not come close to covering all of them (and neither do the many attempts Python has made).

I imagine that for many use cases it will be safe enough, and much easier, to just execute the command-line tool and read its output. In Ruby that looks like this:

require 'open3'

def check_subimage(large, small)
  stdin, stdout, stderr, wait_thr = Open3.popen3("magick compare -subimage-search -metric RMSE #{large} #{small} temp.jpg")
  result = stderr.gets
  stderr.close
  stdout.close
  result.split[1][1..-2].to_f < 0.2
end

if check_subimage('a.jpg', 'b.jpg')
  puts "b is a crop of a"
else
  puts "b is not a crop of a"
end

I'll cover the important parts first, then some additional notes.

The command uses magick compare to check whether the second image (small) is a subimage of the first (large). Note that the function does not check that small is strictly smaller than large in both height and width. The similarity threshold I chose is 0.2 (20% error), and the value for the images you provided is about 0.15; you may want to fine-tune this. I find that images that are a strict subset score less than 0.01.

  • If you want a smaller error on cases where the images have, say, 90% overlap but the second image contains some extra content the first one doesn't, you can run the command once, crop the large image to the region where the subimage was found, then run it again with the cropped image as the "small" input and the original "small" image as the large one.
  • If you really want a nice object-oriented interface in Ruby, RMagick uses the MagickCore API. This command (link to docs) is probably what you'd use to implement it, and you could open a PR to RMagick or package the C extension yourself.
  • Using Open3 starts a thread (see the docs). Closing stderr and stdout is not strictly necessary, but it is good practice.
  • The "temp" image passed as the third argument specifies a file the analysis is written to. With a quick look I couldn't find a way to omit it, but it is overwritten automatically on each run and can be useful to keep for debugging. For your example, it looks like this:

[image: compare output for the example images]

  • The full output has the format 10092.6 (0.154003) @ 0,31. The first number is the RMSE value out of 65535, the second (which I use) is the normalized fraction, and the last two numbers are the coordinates in the large image where the best match for the small image begins.
  • Since there is no objective source of truth for how "similar" two images are, I picked RMSE (see more metric options here). It's a fairly common measure of the difference between values. An absolute error count (AE) might seem like a good idea, but some cropping software doesn't perfectly preserve pixels, so you might have to adjust the fuzz factor; and since AE isn't normalized, you'd also have to compare the error count against the size of the image, and so on.
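As a small illustration of working with that output format, here's a parsing sketch in Python (matching the first answer's language; the regex and the 0.2 threshold assume exactly the `rmse (normalized) @ x,y` format shown above):

```python
import re

def parse_compare(line, threshold=0.2):
    """Parse a `magick compare -metric RMSE -subimage-search` result line,
    e.g. '10092.6 (0.154003) @ 0,31', and apply a similarity threshold.
    The format is an assumption based on the output shown above."""
    m = re.match(r'([\d.]+) \(([\d.]+)\) @ (\d+),(\d+)', line)
    if not m:
        raise ValueError(f'unexpected compare output: {line!r}')
    return {'rmse': float(m.group(1)),
            'normalized': float(m.group(2)),
            'offset': (int(m.group(3)), int(m.group(4))),
            'is_subimage': float(m.group(2)) < threshold}

print(parse_compare('10092.6 (0.154003) @ 0,31'))
# the normalized value 0.154003 is below 0.2, so is_subimage is True
```

The Ruby version above does the same thing more tersely with `result.split[1][1..-2].to_f`.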


Template matching usually gives good results in these situations. Template matching is a technique for finding areas of an image that match (are similar to) a template image. The algorithm gives a score for the best-matched position of the template (the second, smaller image) within the source image.

In OpenCV, the TM_CCOEFF_NORMED method gives a normalized score whose maximum is 1. If the score is 1, the template image is exactly a part (a Rect) of the source image; if there is a slight change in lighting or perspective between the two images, the score will be lower than 1.

Now, by setting a threshold on the similarity score, you can decide whether the images are the same. A suitable threshold can be found by trial and error on a few sample images. For your images I got a score of 0.823863. Here is the code (OpenCV, C++) and the common area between the two images found by the matching:
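To see why an exact crop scores 1, here is a naive Python/NumPy sketch of what TM_CCOEFF_NORMED computes (a normalized, zero-mean cross-correlation slid over the source); the tiny random arrays are made up for demonstration, and the real cv2.matchTemplate is far faster:

```python
import numpy as np

def ccoeff_normed(source, templ):
    """Slide `templ` over `source` and return the TM_CCOEFF_NORMED score
    map: at each offset, correlate the mean-subtracted patch with the
    mean-subtracted template and divide by the product of their norms."""
    H, W = source.shape
    h, w = templ.shape
    t = templ - templ.mean()
    tn = np.sqrt((t * t).sum())
    out = np.zeros((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = source[y:y + h, x:x + w]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum()) * tn
            out[y, x] = (p * t).sum() / denom if denom else 0.0
    return out

rng = np.random.default_rng(0)
source = rng.random((8, 8))
templ = source[2:6, 3:7].copy()  # template is an exact crop of the source
scores = ccoeff_normed(source, templ)
y, x = np.unravel_index(scores.argmax(), scores.shape)
print((y, x), round(scores[y, x], 6))  # → (2, 3) 1.0
```

By Cauchy-Schwarz the score cannot exceed 1, and it reaches 1 exactly where the patch equals the template, which is why a threshold slightly below 1 catches near-exact crops.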

[image: matched common area between the two images]

Mat im2 = imread("E:/1/1.jpg", 1);
Mat im1 = imread("E:/1/2.jpg", 1);

int result_cols = im1.cols - im2.cols + 1;
int result_rows = im1.rows - im2.rows + 1;
Mat result = Mat::zeros(result_rows, result_cols, CV_32FC1);

matchTemplate(im1, im2, result, TM_CCOEFF_NORMED);

double minVal, maxVal;
Point minLoc, maxLoc, matchLoc;
minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc, Mat());
cout << minVal << " " << maxVal << " " << minLoc << " " << maxLoc << "\n";

matchLoc = maxLoc;
rectangle(im1, matchLoc, Point(matchLoc.x + im2.cols, matchLoc.y + im2.rows), Scalar::all(0), 2, 8, 0);
rectangle(result, matchLoc, Point(matchLoc.x + im2.cols, matchLoc.y + im2.rows), Scalar::all(0), 2, 8, 0);

imshow("1", im1);
imshow("2", result);
waitKey(0);