
Speeding up iterating over Numpy Arrays


One way to speed up operations over numpy data is to use vectorize. Essentially, vectorize takes a function f and creates a new function g that maps f over an array a. g is then called like so: g(a).

>>> sqrt_vec = numpy.vectorize(lambda x: x ** 0.5)
>>> sqrt_vec(numpy.arange(10))
array([ 0.        ,  1.        ,  1.41421356,  1.73205081,  2.        ,
        2.23606798,  2.44948974,  2.64575131,  2.82842712,  3.        ])

Without having the data you're working with available, I can't say for certain whether this will help, but perhaps you can rewrite the above as a set of functions that can be vectorized. Perhaps in this case you could vectorize over an array of indices into ReadAsArray(h,i,numberColumns, numberRows). Here's an example of the potential benefit:
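To make that idea concrete, here is a hedged sketch of vectorizing over index arrays. The `data` array is a hypothetical stand-in for the block returned by ReadAsArray; the names are mine, not from your code:

```python
import numpy

# Hypothetical stand-in for the block returned by
# ReadAsArray(h, i, numberColumns, numberRows)
data = numpy.arange(12.0).reshape(3, 4)

# A per-pixel function, vectorized over (row, col) index arrays
f = numpy.vectorize(lambda r, c: data[r, c] ** 0.5)

# numpy.indices builds the full grid of row and column indices
rows, cols = numpy.indices(data.shape)

# result has the same shape as data: sqrt of every element
result = f(rows, cols)
```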

>>> print setup1
import numpy
sqrt_vec = numpy.vectorize(lambda x: x ** 0.5)
>>> print setup2
import numpy
def sqrt_vec(a):
    r = numpy.zeros(len(a))
    for i in xrange(len(a)):
        r[i] = a[i] ** 0.5
    return r
>>> timeit.timeit(stmt='a = sqrt_vec(numpy.arange(1000000))', setup=setup1, number=1)
0.30318188667297363
>>> timeit.timeit(stmt='a = sqrt_vec(numpy.arange(1000000))', setup=setup2, number=1)
4.5400981903076172

A 15x speedup! Note also that numpy slicing handles the edges of ndarrays elegantly:

>>> a = numpy.arange(25).reshape((5, 5))
>>> a[3:7, 3:7]
array([[18, 19],
       [23, 24]])

So if you could get your ReadAsArray data into an ndarray you wouldn't have to do any edge-checking shenanigans.
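For instance, a 3x3 neighborhood mean can be written entirely with shifted slices, with no per-pixel loop and no manual bounds checks over the interior. A minimal sketch (the array here is illustrative, not your data):

```python
import numpy

a = numpy.arange(25.0).reshape(5, 5)  # illustrative data

# Mean over each interior element's 3x3 neighborhood, computed by
# summing nine shifted views of the array -- each term is a view,
# not a copy, so no per-pixel Python loop runs.
interior = (
    a[0:-2, 0:-2] + a[0:-2, 1:-1] + a[0:-2, 2:] +
    a[1:-1, 0:-2] + a[1:-1, 1:-1] + a[1:-1, 2:] +
    a[2:,   0:-2] + a[2:,   1:-1] + a[2:,   2:]
) / 9.0
```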


Regarding your question about reshaping -- reshaping doesn't fundamentally alter the data at all. It just changes the "strides" by which numpy indexes the data. When you call the reshape method, the value returned is a new view into the data; the data isn't copied or altered, and neither is the old view with its old stride information.

>>> a = numpy.arange(25)
>>> b = a.reshape((5, 5))
>>> a
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
       17, 18, 19, 20, 21, 22, 23, 24])
>>> b
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24]])
>>> a[5]
5
>>> b[1][0]
5
>>> a[5] = 4792
>>> b[1][0]
4792
>>> a.strides
(8,)
>>> b.strides
(40, 8)
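You can also check programmatically that the reshape is a view rather than a copy (`numpy.shares_memory` exists in newer numpy versions; `b.base is a` works on older ones too):

```python
import numpy

a = numpy.arange(25)
b = a.reshape((5, 5))

# b is a view: it has no data of its own, just new strides
# over a's buffer
assert numpy.shares_memory(a, b)
assert b.base is a

# writes through either name are visible through the other
b[2, 3] = -1
assert a[13] == -1
```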


Answered as requested.

If you are IO bound, you should chunk your reads/writes. Try dumping ~500 MB of data to an ndarray, process it all, write it out and then grab the next ~500 MB. Make sure to reuse the ndarray.
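A minimal sketch of that pattern. The chunk size, the in-place sqrt, and the `read_chunk`/`write_chunk` callables are assumptions standing in for your actual IO:

```python
import numpy

# ~500 MB worth of float64 values (assumed dtype)
CHUNK = 500 * 1024 * 1024 // 8

def process_in_chunks(read_chunk, write_chunk, total, chunk=CHUNK):
    """Reuse one preallocated buffer for every chunk instead of
    allocating a fresh ndarray per read."""
    buf = numpy.empty(chunk, dtype=numpy.float64)
    done = 0
    while done < total:
        n = min(chunk, total - done)
        view = buf[:n]              # a view, not a new allocation
        read_chunk(view, done)      # fill the buffer in place (assumed callable)
        numpy.sqrt(view, out=view)  # example processing, also in place
        write_chunk(view, done)     # flush this chunk (assumed callable)
        done += n
```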


Without trying to completely understand exactly what you are doing, I notice that you aren't using any numpy slices or array broadcasting, both of which may speed up your code, or, at the very least, make it more readable. My apologies if these aren't germane to your problem.
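In case it helps, here is a generic illustration of broadcasting (the arrays are made up, not tied to your data): a 1-D array is stretched along the missing axis of a 2-D array without copying or looping.

```python
import numpy

a = numpy.arange(12.0).reshape(3, 4)

# A 1-D row of per-column offsets broadcasts across all three rows
col_offsets = numpy.array([10.0, 20.0, 30.0, 40.0])
shifted = a + col_offsets  # shape (3, 4)

# A (3, 1) column vector broadcasts along the other axis:
# each row is multiplied by its own factor
row_scale = numpy.array([[1.0], [2.0], [3.0]])
scaled = a * row_scale     # shape (3, 4)
```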