
Python: why use numpy.r_ instead of concatenate?


np.r_ is implemented in the numpy/lib/index_tricks.py file. This is pure Python code, with no special compiled stuff. So it is not going to be any faster than the equivalent written with concatenate, arange and linspace. It's useful only if the notation fits your way of thinking and your needs.
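A quick way to see this for yourself (a rough sketch; exact timings depend on your machine and NumPy version):

import timeit
import numpy as np

a = np.array([1, 2, 3, 4])

# r_ goes through pure-Python argument parsing, so expect some per-call overhead
print(timeit.timeit(lambda: np.r_[0.0, a, 0.0], number=100_000))
print(timeit.timeit(lambda: np.concatenate([[0.0], a, [0.0]]), number=100_000))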

In your example it just saves converting the scalars to lists or arrays:

In [452]: np.r_[0.0, np.array([1,2,3,4]), 0.0]
Out[452]: array([ 0.,  1.,  2.,  3.,  4.,  0.])

np.concatenate raises an error with the same arguments:

In [453]: np.concatenate([0.0, np.array([1,2,3,4]), 0.0])
...
ValueError: zero-dimensional arrays cannot be concatenated

It works correctly once the scalars are wrapped in []:

In [454]: np.concatenate([[0.0], np.array([1,2,3,4]), [0.0]])
Out[454]: array([ 0.,  1.,  2.,  3.,  4.,  0.])

hstack takes care of that by passing all arguments through [atleast_1d(_m) for _m in tup]:

In [455]: np.hstack([0.0, np.array([1,2,3,4]), 0.0])
Out[455]: array([ 0.,  1.,  2.,  3.,  4.,  0.])
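For reference, np.atleast_1d is what converts the scalars here; it wraps anything 0-dimensional in a 1-d array and leaves arrays alone:

np.atleast_1d(0.0)                # array([0.])
np.atleast_1d(np.array([1, 2]))   # unchanged: array([1, 2])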

So, at least in simple cases, r_ is most similar to hstack.

But the real usefulness of r_ comes when you want to use ranges:

np.r_[0.0, 1:5, 0.0]
np.hstack([0.0, np.arange(1,5), 0.0])
np.r_[0.0, slice(1,5), 0.0]

r_ lets you use the : syntax that is used in indexing. That's because it is actually an instance of a class that has a __getitem__ method. index_tricks uses this programming trick several times.
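Here's a minimal sketch of that trick (the Grabber name is made up for illustration):

class Grabber:
    def __getitem__(self, key):
        # whatever appears between the brackets arrives here as-is
        return key

g = Grabber()
print(g[0.0, 1:5, 0.0])   # (0.0, slice(1, 5, None), 0.0)

Inside [], Python turns 1:5 into a slice object and the comma-separated items into a tuple, so r_.__getitem__ can walk through them, expand each slice, and concatenate the results.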

They've thrown in other bells and whistles:

An imaginary step makes r_ use np.linspace, rather than np.arange, to expand the slice.

np.r_[-1:1:6j, [0]*3, 5, 6]

produces:

array([-1. , -0.6, -0.2,  0.2,  0.6,  1. ,  0. ,  0. ,  0. ,  5. ,  6. ])
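In other words, the slice part of that expression should match an explicit linspace call (6j meaning 6 points, endpoints included):

np.r_[-1:1:6j]            # array([-1. , -0.6, -0.2,  0.2,  0.6,  1. ])
np.linspace(-1, 1, 6)     # same values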

There are more details in the documentation.

I did some time tests for many slices in https://stackoverflow.com/a/37625115/901925


I was also interested in this question and compared the speed of

numpy.c_[a, a]
numpy.stack([a, a]).T
numpy.vstack([a, a]).T
numpy.column_stack([a, a])
numpy.concatenate([a[:,None], a[:,None]], axis=1)

which all do the same thing for any input vector a. Here's what I found (using perfplot):

[perfplot benchmark: runtime vs. len(a) for the five variants, log-log axes]

For smaller arrays, numpy.concatenate is the winner; for larger ones (from about 3000 elements on), stack/vstack win.
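As a quick sanity check that the five variants really agree (a small sketch):

import numpy

a = numpy.random.rand(10)
expected = numpy.c_[a, a]
for alt in (
    numpy.stack([a, a]).T,
    numpy.vstack([a, a]).T,
    numpy.column_stack([a, a]),
    numpy.concatenate([a[:, None], a[:, None]], axis=1),
):
    assert numpy.array_equal(expected, alt)   # all give the same (10, 2) array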


The plot was created with

import numpy
import perfplot

perfplot.show(
    setup=lambda n: numpy.random.rand(n),
    kernels=[
        lambda a: numpy.c_[a, a],
        lambda a: numpy.stack([a, a]).T,
        lambda a: numpy.vstack([a, a]).T,
        lambda a: numpy.column_stack([a, a]),
        lambda a: numpy.concatenate([a[:, None], a[:, None]], axis=1),
    ],
    labels=["c_", "stack", "vstack", "column_stack", "concat"],
    n_range=[2 ** k for k in range(22)],
    xlabel="len(a)",
    logx=True,
    logy=True,
)


All the explanation you need:

https://sourceforge.net/p/numpy/mailman/message/13869535/

I found the most relevant part to be:

"""For r_ and c_ I'm summarizing, but effectively they seem to be doingsomething like:r_[args]:    concatenate( map(atleast_1d,args),axis=0 )c_[args]:    concatenate( map(atleast_1d,args),axis=1 )c_ behaves almost exactly like hstack -- with the addition of rangeliterals being allowed.r_ is most like vstack, but a little different since it effectivelyuses atleast_1d, instead of atleast_2d.  So you have>>> numpy.vstack((1,2,3,4))array([[1],       [2],       [3],       [4]])but>>> numpy.r_[1,2,3,4]array([1, 2, 3, 4])"""