
numpy elementwise outer product


Extend A and B to 3D, keeping their first axes aligned and introducing new axes along the third and second dimensions respectively with None/np.newaxis, and then multiply them with each other. This lets broadcasting come into play for a vectorized solution.

Thus, an implementation would be -

A[:,:,None]*B[:,None,:]
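For instance, a quick sanity check (with hypothetical shapes of (100, 3) for A and (100, 5) for B, chosen purely for illustration) shows that each slice of the result is the outer product of the corresponding rows -

import numpy as np

# Hypothetical example arrays, just for illustration
A = np.random.rand(100, 3)
B = np.random.rand(100, 5)

out = A[:, :, None] * B[:, None, :]
print(out.shape)                                   # (100, 3, 5)
print(np.allclose(out[0], np.outer(A[0], B[0])))   # True: row-wise outer product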

We could shorten it a bit by using an ellipsis in place of A's :,: and skipping the listing of the leftover last axis for B, like so -

A[...,None]*B[:,None]
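A quick check (reusing the hypothetical A and B from the example above) confirms the shortened indexing gives the same result -

# B[:, None] inserts the new axis before B's last axis, which then
# broadcasts against the trailing axis added to A by A[..., None]
print(np.allclose(A[..., None] * B[:, None], A[:, :, None] * B[:, None, :]))  # True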

As another vectorized approach, we could also use np.einsum, which might be more intuitive once we get past the string notation syntax and think of those letters as the iterators involved in a naive loopy implementation, like so -

np.einsum('ij,ik->ijk',A,B)
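To see how the subscripts map to loop iterators, here is a rough (and slow) loopy equivalent, again reusing the hypothetical A and B from above: i runs over the paired first axis, j over A's columns and k over B's columns -

# Naive loopy equivalent of np.einsum('ij,ik->ijk', A, B) -- illustration only
out_loop = np.empty((A.shape[0], A.shape[1], B.shape[1]))
for i in range(A.shape[0]):          # paired first axis
    for j in range(A.shape[1]):      # columns of A
        for k in range(B.shape[1]):  # columns of B
            out_loop[i, j, k] = A[i, j] * B[i, k]

print(np.allclose(out_loop, np.einsum('ij,ik->ijk', A, B)))  # True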


Another solution uses np.lib.stride_tricks.as_strided().

Here the strategy is to, in essence, build a (100, 3, 5) array As and a (100, 3, 5) array Bs such that the normal element-wise product of these arrays will produce the desired result. Of course, we don't actually build big memory consuming arrays, thanks to as_strided(). (as_strided() is like a blueprint that tells NumPy how you'd map data from the original arrays to construct As and Bs.)

def outer_prod_stride(A, B):
    """stride trick"""
    a = A.shape[-1]
    b = B.shape[-1]
    d = A.strides[-1]
    new_shape = A.shape + (b,)
    As = np.lib.stride_tricks.as_strided(A, shape=new_shape, strides=(a*d, d, 0))
    Bs = np.lib.stride_tricks.as_strided(B, shape=new_shape, strides=(b*d, 0, d))
    return As * Bs
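For example (again with the hypothetical A and B from above, assumed C-contiguous and of the same dtype), the strided version matches the broadcasting result -

out_stride = outer_prod_stride(A, B)
print(out_stride.shape)                                        # (100, 3, 5)
print(np.allclose(out_stride, A[:, :, None] * B[:, None, :]))  # True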

Timings

def outer_prod_broadcasting(A, B):
    """Broadcasting trick"""
    return A[...,None]*B[:,None]

def outer_prod_einsum(A, B):
    """einsum() trick"""
    return np.einsum('ij,ik->ijk',A,B)

def outer_prod_stride(A, B):
    """stride trick"""
    a = A.shape[-1]
    b = B.shape[-1]
    d = A.strides[-1]
    new_shape = A.shape + (b,)
    As = np.lib.stride_tricks.as_strided(A, shape=new_shape, strides=(a*d, d, 0))
    Bs = np.lib.stride_tricks.as_strided(B, shape=new_shape, strides=(b*d, 0, d))
    return As * Bs

%timeit op1 = outer_prod_broadcasting(A, B)
2.54 µs ± 436 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

%timeit op2 = outer_prod_einsum(A, B)
3.03 µs ± 637 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

%timeit op3 = outer_prod_stride(A, B)
16.6 µs ± 5.39 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Seems my stride trick solution is slower than both of @Divakar's solutions... still an interesting method worth knowing, though.