
# What are the differences between numpy arrays and matrices? Which one should I use?

Numpy matrices are strictly 2-dimensional, while numpy arrays (ndarrays) are N-dimensional. Matrix objects are a subclass of ndarray, so they inherit all the attributes and methods of ndarrays.

The main advantage of numpy matrices is that they provide a convenient notation for matrix multiplication: if a and b are matrices, then `a*b` is their matrix product.

```python
import numpy as np

a = np.mat('4 3; 2 1')
b = np.mat('1 2; 3 4')
print(a)
# [[4 3]
#  [2 1]]
print(b)
# [[1 2]
#  [3 4]]
print(a*b)
# [[13 20]
#  [ 5  8]]
```

On the other hand, as of Python 3.5, NumPy supports infix matrix multiplication using the `@` operator, so you can achieve the same convenience of matrix multiplication with ndarrays in Python >= 3.5.

```python
import numpy as np

a = np.array([[4, 3], [2, 1]])
b = np.array([[1, 2], [3, 4]])
print(a@b)
# [[13 20]
#  [ 5  8]]
```

Both matrix objects and ndarrays have `.T` to return the transpose, but matrix objects also have `.H` for the conjugate transpose, and `.I` for the inverse.
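For reference, here is a minimal sketch of those attributes side by side with their ndarray equivalents (`.conj().T` and `np.linalg.inv`):

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])
print(m.T)               # transpose
print(m.H)               # conjugate transpose (equals .T for real matrices)
print(m.I)               # inverse

# The ndarray equivalents:
a = np.array([[1, 2], [3, 4]])
print(a.T)               # transpose
print(a.conj().T)        # conjugate transpose
print(np.linalg.inv(a))  # inverse
```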

In contrast, numpy arrays consistently abide by the rule that operations are applied element-wise (except for the new `@` operator). Thus, if `a` and `b` are numpy arrays, then `a*b` is the array formed by multiplying the components element-wise:

```python
c = np.array([[4, 3], [2, 1]])
d = np.array([[1, 2], [3, 4]])
print(c*d)
# [[4 6]
#  [6 4]]
```

To obtain the result of matrix multiplication, you use `np.dot` (or `@` in Python >= 3.5, as shown above):

```python
print(np.dot(c, d))
# [[13 20]
#  [ 5  8]]
```

The `**` operator also behaves differently:

```python
print(a**2)
# [[22 15]
#  [10  7]]
print(c**2)
# [[16  9]
#  [ 4  1]]
```

Since `a` is a matrix, `a**2` returns the matrix product `a*a`. Since `c` is an ndarray, `c**2` returns an ndarray with each component squared element-wise.

There are other technical differences between matrix objects and ndarrays (having to do with `np.ravel`, item selection and sequence behavior).

The main advantage of numpy arrays is that they are more general than 2-dimensional matrices. What happens when you want a 3-dimensional array? Then you have to use an ndarray, not a matrix object. Thus, learning to use matrix objects is more work -- you have to learn matrix object operations, and ndarray operations.
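As a quick illustration of the 2-D restriction (the exact error message may vary by NumPy version):

```python
import numpy as np

# ndarrays generalize to any number of dimensions.
t = np.zeros((2, 3, 4))
print(t.ndim)  # 3

# matrix objects are strictly 2-dimensional; 3-D input raises an error.
try:
    np.matrix(np.zeros((2, 3, 4)))
except ValueError as e:
    print(e)
```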

Writing a program that mixes both matrices and arrays makes your life difficult because you have to keep track of what type of object your variables are, lest multiplication return something you don't expect.

In contrast, if you stick solely with ndarrays, then you can do everything matrix objects can do, and more, except with slightly different functions/notation.

If you are willing to give up the visual appeal of NumPy matrix product notation (which can be achieved almost as elegantly with ndarrays in Python >= 3.5), then I think NumPy arrays are definitely the way to go.

PS. Of course, you really don't have to choose one at the expense of the other, since `np.asmatrix` and `np.asarray` allow you to convert one to the other (as long as the array is 2-dimensional).
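A short sketch of that round-trip (note that `np.asmatrix` and `np.asarray` avoid copying here, so all three objects share the same underlying data):

```python
import numpy as np

a = np.array([[4, 3], [2, 1]])
m = np.asmatrix(a)       # ndarray -> matrix, no copy
back = np.asarray(m)     # matrix -> ndarray, no copy

print(type(m).__name__)     # matrix
print(type(back).__name__)  # ndarray

# All three views share the same buffer:
m[0, 0] = 99
print(a[0, 0], back[0, 0])  # 99 99
```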

There is a synopsis of the differences between NumPy arrays and NumPy matrices here.

Scipy.org recommends that you use arrays:

*'array' or 'matrix'? Which should I use?* - Short answer

Use arrays.

• They are the standard vector/matrix/tensor type of numpy. Many numpy functions return arrays, not matrices.

• There is a clear distinction between element-wise operations and linear algebra operations.

• You can have standard vectors or row/column vectors if you like.

The only disadvantage of using the array type is that you will have to use `dot` instead of `*` to multiply (reduce) two tensors (scalar product, matrix vector multiplication etc.).
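To make that concrete, here is the same `dot` covering both the scalar product and the matrix-vector product, with element-wise `*` shown for contrast:

```python
import numpy as np

v = np.array([1, 2])
w = np.array([3, 4])
A = np.array([[1, 2], [3, 4]])

print(np.dot(v, w))  # scalar (inner) product: 11
print(np.dot(A, v))  # matrix-vector product: [ 5 11]
print(v * w)         # element-wise multiplication: [3 8]
```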

Just to add one case to unutbu's list.

One of the biggest practical differences for me between numpy ndarrays and numpy matrices (or matrix languages like MATLAB) is that the dimension is not preserved in reduce operations. Matrices are always 2d, while the mean of an array, for example, has one dimension less.

For example, demeaning the rows of a matrix or an array:

with matrix

```python
>>> m = np.mat([[1,2],[2,3]])
>>> m
matrix([[1, 2],
        [2, 3]])
>>> mm = m.mean(1)
>>> mm
matrix([[ 1.5],
        [ 2.5]])
>>> mm.shape
(2, 1)
>>> m - mm
matrix([[-0.5,  0.5],
        [-0.5,  0.5]])
```

with array

```python
>>> a = np.array([[1,2],[2,3]])
>>> a
array([[1, 2],
       [2, 3]])
>>> am = a.mean(1)
>>> am.shape
(2,)
>>> am
array([ 1.5,  2.5])
>>> a - am  # wrong
array([[-0.5, -0.5],
       [ 0.5,  0.5]])
>>> a - am[:, np.newaxis]  # right
array([[-0.5,  0.5],
       [-0.5,  0.5]])
```
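For what it's worth, `keepdims=True` (available since NumPy 1.7) avoids the manual reshaping:

```python
import numpy as np

a = np.array([[1, 2], [2, 3]])
am = a.mean(1, keepdims=True)  # the reduced axis is kept as size 1
print(am.shape)                # (2, 1)
print(a - am)                  # broadcasts as intended
# [[-0.5  0.5]
#  [-0.5  0.5]]
```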

I also think that mixing arrays and matrices gives rise to many "happy" debugging hours. However, scipy.sparse matrices are always matrices in terms of operators like multiplication.