Edge detection for image stored in matrix


The following should hopefully be okay for your needs (or at least help). The idea is to split the image into the various regions using logical checks based on threshold values. The edge between these regions can then be detected by using numpy's roll to shift pixels in x and y and comparing against the shifted array to see if we are at an edge.

import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
import scipy.misc
from skimage.morphology import closing

thresh1 = 127
thresh2 = 254

# Load image
im = sp.misc.imread('jBD9j.png')

# Get threshold mask for different regions
gryim = np.mean(im[:,:,0:2], 2)
region1 = (thresh1 < gryim)
region2 = (thresh2 < gryim)
nregion1 = ~region1
nregion2 = ~region2

# Plot figure and two regions
fig, axs = plt.subplots(2, 2)
axs[0,0].imshow(im)
axs[0,1].imshow(region1)
axs[1,0].imshow(region2)

# Clean up any holes, etc (not needed for simple figures here)
#region1 = sp.ndimage.morphology.binary_closing(region1)
#region1 = sp.ndimage.morphology.binary_fill_holes(region1)
#region1.astype('bool')
#region2 = sp.ndimage.morphology.binary_closing(region2)
#region2 = sp.ndimage.morphology.binary_fill_holes(region2)
#region2.astype('bool')

# Get location of edge by comparing array to its
# inverse shifted by a few pixels
shift = -2
edgex1 = (region1 ^ np.roll(nregion1, shift=shift, axis=0))
edgey1 = (region1 ^ np.roll(nregion1, shift=shift, axis=1))
edgex2 = (region2 ^ np.roll(nregion2, shift=shift, axis=0))
edgey2 = (region2 ^ np.roll(nregion2, shift=shift, axis=1))

# Plot location of edge over image
axs[1,1].imshow(im)
axs[1,1].contour(edgex1, 2, colors='r', linewidths=2.)
axs[1,1].contour(edgey1, 2, colors='r', linewidths=2.)
axs[1,1].contour(edgex2, 2, colors='g', linewidths=2.)
axs[1,1].contour(edgey2, 2, colors='g', linewidths=2.)
plt.show()

Which gives the following. For simplicity I've used roll with the inverse of each region. You could also roll each successive region onto the next to detect the edges; a minimal sketch of that variant follows.
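This is only a sketch of that alternative, not code from the answer above; regionA and regionB stand for boolean masks such as region1 and region2:

import numpy as np

def boundary_between(regionA, regionB, shift=-2):
    # pixels where regionA, shifted by a couple of pixels, overlaps regionB;
    # combining both axes gives a rough boundary band between the two masks
    hits_x = regionA & np.roll(regionB, shift=shift, axis=0)
    hits_y = regionA & np.roll(regionB, shift=shift, axis=1)
    return hits_x | hits_y

# e.g. edge between the grey-only band and the white region:
# edge12 = boundary_between(region1 & ~region2, region2)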

Thank you to @Kabyle for offering a reward; this is a problem that I spent a while looking for a solution to. I tried scipy skeletonize, feature.canny, the topology module and OpenCV with limited success... This way was the most robust for my case (droplet interface tracking). Hope it helps!


There is a very simple solution to this: by definition, any pixel which has both white and gray neighbors is on your "red" edge, and any pixel with both gray and black neighbors is on the "green" edge. The lightest/darkest neighbors are returned by the maximum/minimum filters in skimage.filters.rank, so combining the masks of pixels whose lightest and darkest neighbors are white and gray (or gray and black, respectively) produces the edges.

Result:

[output image: the black/gray edges in green and the gray/white edges in red]

A worked solution:

import numpy
import skimage.filters.rank
import skimage.morphology
import skimage.io
import matplotlib.pyplot as plt

# convert image to a uint8 image which only has 0, 128 and 255 values
# the source png image provided has other levels in it so it needs to be
# thresholded - adjust the thresholding method for your data
img_raw = skimage.io.imread('jBD9j.png', as_grey=True)
img = numpy.zeros_like(img_raw, dtype=numpy.uint8)
img[:,:] = 128
img[img_raw < 0.25] = 0
img[img_raw > 0.75] = 255

# define "next to" - this may be a square, diamond, etc
selem = skimage.morphology.disk(1)

# create masks for the two kinds of edges
black_gray_edges = (skimage.filters.rank.minimum(img, selem) == 0) & (skimage.filters.rank.maximum(img, selem) == 128)
gray_white_edges = (skimage.filters.rank.minimum(img, selem) == 128) & (skimage.filters.rank.maximum(img, selem) == 255)

# create a color image
img_result = numpy.dstack([img, img, img])

# assign colors to edge masks
img_result[black_gray_edges, :] = numpy.asarray([0, 255, 0])
img_result[gray_white_edges, :] = numpy.asarray([255, 0, 0])

plt.imshow(img_result)
plt.show()

P.S. Pixels which have black and white neighbors, or neighbors of all three colors, are in an undefined category. The code above doesn't color those. You need to figure out how you want the output to be colored in those cases; but it is easy to extend the approach above to produce another mask or two for that, for example as sketched below.
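One such extra mask could look like this (a sketch only, reusing img, selem and img_result from the code above; the blue colour is an arbitrary choice):

# pixels whose darkest neighbour is black (0) and lightest neighbour is white (255)
black_white_edges = (skimage.filters.rank.minimum(img, selem) == 0) & (skimage.filters.rank.maximum(img, selem) == 255)

# colour the undefined pixels, e.g. blue
img_result[black_white_edges, :] = numpy.asarray([0, 0, 255])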

P.S. The edges are two pixels wide. There is no getting around that without more information: the edges are between two areas, and you haven't defined which one of the two areas you want them to overlap in each case, so the only symmetrical solution is to overlap both areas by one pixel.
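If you later decide which side each edge should live on, a small sketch (my addition, not part of the answer above): intersect the two-pixel-wide mask with that area's mask, e.g. to keep only the gray half of the gray/white edge:

# keep only the part of the gray/white edge that lies inside the gray area
gray_mask = (img == 128)
gray_white_edges_gray_side = gray_white_edges & gray_mask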

P.S. This counts the pixel itself as its own neighbor. An isolated white or black pixel on gray, or vice versa, will be considered as an edge (as well as all the pixels around it).
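If such speckles matter for your data, one option (my suggestion, not from the answer above) is to median filter the thresholded image before building the masks, so that a lone pixel no longer survives as its own region:

import scipy.ndimage

# replace each pixel by the median of its 3x3 neighbourhood; isolated
# single-pixel specks get absorbed into the surrounding region
img_clean = scipy.ndimage.median_filter(img, size=3)
# ...then compute black_gray_edges / gray_white_edges from img_clean instead of img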


While plonser's answer may be rather straightforward to implement, I see it failing when it comes to sharp and thin edges. Nevertheless, I suggest you use part of his approach as preconditioning.
In a second step you want to use the Marching Squares Algorithm. According to the documentation of scikit-image, it is

a special case of the marching cubes algorithm (Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4), July 1987, pp. 163-170).

There even exists a Python implementation as part of the scikit-image package. I have been using this algorithm (my own Fortran implementation, though) successfully for edge detection of eye diagrams in communications engineering.

Ad 1: Preconditioning
Create a copy of your image and make it two-color only, e.g. black/white. The coordinates remain the same, but you make sure that the algorithm can properly make a yes/no decision independently of the values that you use in your matrix representation of the image. A minimal sketch of this step follows.
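This sketch assumes a grayscale array img_gray containing the three levels from the example image; the variable name and the thresholds are assumptions, not taken from the question:

import numpy as np

# one two-colour copy per interface you want to trace
binary_white = (img_gray > 200).astype(np.uint8)  # 1 inside the white area, 0 elsewhere
binary_gray = (img_gray > 100).astype(np.uint8)   # 1 inside gray and white, 0 in the black area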

Ad 2: Edge Detection
Wikipedia as well as various blogs provide you with a pretty elaborate description of the algorithm in various languages, so I will not go into its details. However, let me give you some practical advice:

  1. Your image has open boundaries at the bottom. Instead of modifying the algorithm, you can artificially add another row of pixels (black or grey, to bound the white/grey areas); see the sketch after this list.
  2. The choice of the starting point is critical. If there are not too many images to be processed, I suggest you select it manually. Otherwise you will need to define rules. Since the Marching Squares Algorithm can start anywhere inside a bounded area, you could choose any pixel of a given color/value to detect the corresponding edge (it will initially start walking in one direction to find an edge).
  3. The algorithm returns the exact 2D positions, e.g. (x/y)-tuples. You can either
    • iterate through the list and colorize the corresponding pixels by assigning a different value or
    • create a mask to select parts of your matrix and assign the value that corresponds to a different color, e.g. green or red.
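Here is a hedged sketch tying items 1 and 3 together, using the marching-squares implementation in scikit-image (skimage.measure.find_contours); binary_white is one of the two-colour copies from the preconditioning sketch above, and the padding width and colour are arbitrary choices:

import numpy as np
import skimage.measure

# item 1: close the open bottom boundary with an extra row of zeros
padded = np.pad(binary_white, ((0, 1), (0, 0)), mode='constant', constant_values=0)

# marching squares, as implemented in scikit-image
contours = skimage.measure.find_contours(padded, level=0.5)

# item 3, first option: colorize the pixels along each returned contour
overlay = np.dstack([padded, padded, padded]).astype(float)
for contour in contours:
    rows = np.round(contour[:, 0]).astype(int)
    cols = np.round(contour[:, 1]).astype(int)
    overlay[rows, cols] = [1.0, 0.0, 0.0]  # red edge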

Finally: Some Post-Processing
I suggested adding an artificial boundary to the image. This has two advantages: 1. The Marching Squares Algorithm works out of the box. 2. There is no need to distinguish between the image boundary and the interface between two areas within the image. Just remove the artificial boundary once you are done setting the colorful edges; this will remove the colored lines at the boundary of the image.
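Continuing the sketch above, dropping the padded bottom row again is just a slice (assuming the extra row was added at the bottom):

# remove the artificial bottom row once the edges have been drawn
overlay = overlay[:-1, :, :]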