
Conversion between Pillow Image object and numpy array changes dimension


im may be column-major, while arrays in numpy are row-major

Do in_data = in_data.T to transpose the array.

You should probably check in_data with matplotlib's imshow to make sure the picture looks right.

But did you know that matplotlib comes with its own loading functions that give you numpy arrays directly? See: http://matplotlib.org/users/image_tutorial.html
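
Here is a minimal sketch of that check, assuming a hypothetical file photo.png on disk; it compares Pillow's size tuple with the numpy shape and also shows matplotlib's own loader:

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

im = Image.open("photo.png")     # hypothetical file name
print(im.size)                   # Pillow reports (width, height)

in_data = np.asarray(im)
print(in_data.shape)             # numpy reports (height, width) or (height, width, channels)

plt.imshow(in_data)              # sanity check that the picture looks right
plt.show()

arr = mpimg.imread("photo.png")  # matplotlib loads straight into a numpy array
print(arr.shape)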


If your image is greyscale, do:

in_data = in_data.T

but if you are working with RGB images you want to make sure your transpose operation is along only two axes:

in_data = np.transpose(in_data, (1,0,2))
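
A quick way to see the difference (the shapes here are just made-up examples):

import numpy as np

grey = np.zeros((480, 640))                # (height, width)
rgb = np.zeros((480, 640, 3))              # (height, width, channels)

print(grey.T.shape)                        # (640, 480)
print(np.transpose(rgb, (1, 0, 2)).shape)  # (640, 480, 3) - channel axis stays last
print(rgb.T.shape)                         # (3, 640, 480) - a plain .T also moves the channel axis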


Actually, this is because most image libraries give you images that are transposed compared to numpy arrays. This is (I think) because you write image files line by line, so the first index (let's say x) refers to the line number (so x is the vertical axis) and the second index (y) refers to the subsequent pixel in that line (so y is the horizontal axis), which goes against our everyday sense of coordinates.

If you want to handle it correctly, you need to remember to write:

image = library.LoadImage(path)
array = (library.FromImageToNumpyArray(image)).T

and consequently:

image = library.FromNumpyArrayToImage(array.T)
library.WriteImage(image, path)

This also works for 3D images. But I'm not promising this is the case for ALL image libraries - just the ones I've worked with.
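
For example, with Pillow the same pattern might look like the sketch below (greyscale case, with a hypothetical file name; note that Image.fromarray may want a contiguous array, hence the extra copy):

import numpy as np
from PIL import Image

path = "photo.png"                     # hypothetical file

image = Image.open(path).convert("L")  # greyscale
array = np.asarray(image).T.copy()     # first axis is now the horizontal one (and writable)

# ... work on array in (x, y) coordinates ...

out = Image.fromarray(np.ascontiguousarray(array.T))
out.save("photo_out.png")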