Mapping a NumPy array in place

It's only worth trying to do this in place if you are under significant space constraints. If that's the case, it is possible to speed up your code a little by iterating over a flattened view of the array. Since reshape returns a view whenever it can, the data itself isn't copied (unless the original array's memory layout forces a copy, e.g. it is non-contiguous).

I don't know of a better way to achieve bona fide in-place application of an arbitrary Python function.

>>> def flat_for(a, f):
...     a = a.reshape(-1)
...     for i, v in enumerate(a):
...         a[i] = f(v)
... 
>>> a = numpy.arange(25).reshape(5, 5)
>>> flat_for(a, lambda x: x + 5)
>>> a
array([[ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24],
       [25, 26, 27, 28, 29]])
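One caveat worth checking (my addition, not part of the original answer): flat_for only modifies the original array while reshape(-1) actually returns a view. A minimal sketch, assuming NumPy 1.11+ for shares_memory:

import numpy

a = numpy.arange(25).reshape(5, 5)
flat = a.reshape(-1)
print(numpy.shares_memory(a, flat))    # True: writes through flat hit a

b = a.T                                # transposed, non-contiguous view
flat_b = b.reshape(-1)
print(numpy.shares_memory(b, flat_b))  # False: reshape had to copy, so
                                       # writing into flat_b won't change b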

Some timings:

>>> a = numpy.arange(2500).reshape(50, 50)
>>> f = lambda x: x + 5
>>> %timeit flat_for(a, f)
1000 loops, best of 3: 1.86 ms per loop

It's about twice as fast as the nested loop version:

>>> a = numpy.arange(2500).reshape(50, 50)
>>> def nested_for(a, f):
...     for i in range(len(a)):
...         for j in range(len(a[0])):
...             a[i][j] = f(a[i][j])
... 
>>> %timeit nested_for(a, f)
100 loops, best of 3: 3.79 ms per loop

Of course vectorize is still faster, so if you can make a copy, use that:

>>> a = numpy.arange(2500).reshape(50, 50)
>>> g = numpy.vectorize(lambda x: x + 5)
>>> %timeit g(a)
1000 loops, best of 3: 584 us per loop

And if you can rewrite your function (dim in the original question) using built-in ufuncs, then please, please, don't vectorize:

>>> a = numpy.arange(2500).reshape(50, 50)
>>> %timeit a + 5
100000 loops, best of 3: 4.66 us per loop

numpy does operations like += in place, just as you might expect -- so you can get the speed of a ufunc with in-place application at no cost. Sometimes it's even faster! See here for an example.
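For concreteness, here is a minimal sketch (my addition, not from the original answer) of the two in-place spellings of the same ufunc; both modify a without allocating a second full-size result array:

import numpy

a = numpy.arange(2500).reshape(50, 50)

a += 5                     # augmented assignment uses the in-place ufunc machinery
numpy.add(a, 5, out=a)     # equivalent: explicit ufunc call writing into a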


By the way, my original answer to this question, which can be viewed in its edit history, is ridiculous, and involved vectorizing over indices into a. Not only did it have to do some funky stuff to bypass vectorize's type-detection mechanism, it turned out to be just as slow as the nested loop version. So much for cleverness!


This is a write-up of contributions scattered across answers and comments, which I wrote after accepting the answer to the question. Upvotes are always welcome, but if you upvote this answer, please don't forget to also upvote those of senderle and (if they write one) eryksun, who suggested the methods below.

Q: Is it possible to map a numpy array in place?
A: Yes, but not with a single array method. You have to write your own code.

Below is a script that compares the various implementations discussed in this thread:

import timeit
from numpy import array, arange, vectorize, rint

# SETUP
get_array = lambda side : arange(side**2).reshape(side, side) * 30
dim = lambda x : int(round(x * 0.67328))

# TIMER
def best(fname, reps, side):
    global a
    a = get_array(side)
    t = timeit.Timer('%s(a)' % fname,
                     setup='from __main__ import %s, a' % fname)
    return min(t.repeat(reps, 3))  # low number as in place --> converge to 1

# FUNCTIONS
def mac(array_):
    for row in range(len(array_)):
        for col in range(len(array_[0])):
            array_[row][col] = dim(array_[row][col])

def mac_two(array_):
    li = range(len(array_[0]))
    for row in range(len(array_)):
        for col in li:
            array_[row][col] = int(round(array_[row][col] * 0.67328))

def mac_three(array_):
    for i, row in enumerate(array_):
        array_[i][:] = [int(round(v * 0.67328)) for v in row]

def senderle(array_):
    array_ = array_.reshape(-1)
    for i, v in enumerate(array_):
        array_[i] = dim(v)

def eryksun(array_):
    array_[:] = vectorize(dim)(array_)

def ufunc_ed(array_):
    multiplied = array_ * 0.67328
    array_[:] = rint(multiplied)

# MAIN
r = []
for fname in ('mac', 'mac_two', 'mac_three', 'senderle', 'eryksun', 'ufunc_ed'):
    print('\nTesting `%s`...' % fname)
    r.append(best(fname, reps=50, side=50))
    # The following is for visually checking that the functions return the same results
    tmp = get_array(3)
    eval('%s(tmp)' % fname)
    print(tmp)

tmp = min(r) / 100
print('\n===== ...AND THE WINNER IS... =========================')
print('  mac (as in question)       :  %.4fms [%.0f%%]' % (r[0]*1000, r[0]/tmp))
print('  mac (optimised)            :  %.4fms [%.0f%%]' % (r[1]*1000, r[1]/tmp))
print('  mac (slice-assignment)     :  %.4fms [%.0f%%]' % (r[2]*1000, r[2]/tmp))
print('  senderle                   :  %.4fms [%.0f%%]' % (r[3]*1000, r[3]/tmp))
print('  eryksun                    :  %.4fms [%.0f%%]' % (r[4]*1000, r[4]/tmp))
print('  slice-assignment w/ ufunc  :  %.4fms [%.0f%%]' % (r[5]*1000, r[5]/tmp))
print('=======================================================\n')

The output of the above script - at least on my system - is:

  mac (as in question)       :  88.7411ms [74591%]
  mac (optimised)            :  86.4639ms [72677%]
  mac (slice-assignment)     :  79.8671ms [67132%]
  senderle                   :  85.4590ms [71832%]
  eryksun                    :  13.8662ms [11655%]
  slice-assignment w/ ufunc  :  0.1190ms [100%]

As you can see, using numpy's ufunc increases speed by more than two and almost three orders of magnitude compared with the second-best and the worst alternative, respectively.

If using a ufunc is not an option, here's a comparison of the other alternatives only:

  mac (as in question)       :  91.5761ms [672%]
  mac (optimised)            :  88.9449ms [653%]
  mac (slice-assignment)     :  80.1032ms [588%]
  senderle                   :  86.3919ms [634%]
  eryksun                    :  13.6259ms [100%]

HTH!


Why not use numpy's implementation and the out= trick?

from numpy import array, arange, vectorize, rint, multiply, round as np_round

def fmilo(array_):
    np_round(multiply(array_, 0.67328, array_), out=array_)

With that added to the benchmark script, I got:

===== ...AND THE WINNER IS... =========================
  mac (as in question)       :  80.8470ms [130422%]
  mac (optimised)            :  80.2400ms [129443%]
  mac (slice-assignment)     :  75.5181ms [121825%]
  senderle                   :  78.9380ms [127342%]
  eryksun                    :  11.0800ms [17874%]
  slice-assignment w/ ufunc  :  0.0899ms [145%]
  fmilo                      :  0.0620ms [100%]
=======================================================
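One caveat with the fmilo version (my addition, not from the original answers): the benchmark arrays are integer, and on recent NumPy versions writing the float result of multiply back into an integer out array raises a casting error under the default same_kind rule. A minimal sketch of the same out= trick on a float array, where no cast is needed:

import numpy

a = numpy.arange(2500, dtype=float).reshape(50, 50) * 30

numpy.multiply(a, 0.67328, out=a)  # scale in place, no temporary array
numpy.rint(a, out=a)               # round in place as well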