
Matrix inversion without Numpy


Here is a more elegant and scalable solution, in my opinion. It'll work for any n×n matrix, and you may find use for the other methods as well. Note that getMatrixInverse(m) takes a list of lists (rows) as input. Please feel free to ask any questions.

```python
def transposeMatrix(m):
    # wrap in list(...) so the result is subscriptable in Python 3
    return list(map(list, zip(*m)))

def getMatrixMinor(m, i, j):
    # delete row i and column j
    return [row[:j] + row[j+1:] for row in (m[:i] + m[i+1:])]

def getMatrixDeterminant(m):
    # base case for 2x2 matrix
    if len(m) == 2:
        return m[0][0]*m[1][1] - m[0][1]*m[1][0]

    # Laplace expansion along the first row
    determinant = 0
    for c in range(len(m)):
        determinant += ((-1)**c) * m[0][c] * getMatrixDeterminant(getMatrixMinor(m, 0, c))
    return determinant

def getMatrixInverse(m):
    determinant = getMatrixDeterminant(m)
    # special case for 2x2 matrix:
    if len(m) == 2:
        return [[m[1][1]/determinant, -1*m[0][1]/determinant],
                [-1*m[1][0]/determinant, m[0][0]/determinant]]

    # find matrix of cofactors
    cofactors = []
    for r in range(len(m)):
        cofactorRow = []
        for c in range(len(m)):
            minor = getMatrixMinor(m, r, c)
            cofactorRow.append(((-1)**(r+c)) * getMatrixDeterminant(minor))
        cofactors.append(cofactorRow)
    # transpose of the cofactor matrix is the adjugate; divide by det
    cofactors = transposeMatrix(cofactors)
    for r in range(len(cofactors)):
        for c in range(len(cofactors)):
            cofactors[r][c] = cofactors[r][c] / determinant
    return cofactors
```
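To sanity-check the result of getMatrixInverse (or any inverse routine on this page), multiply the matrix by its computed inverse and compare against the identity. A minimal standalone sketch; the matmul helper and the 2×2 example values are illustrative additions, not part of the answer above:

```python
# Plain-Python matrix multiply (illustrative helper, not from the answer above)
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# det([[4, 7], [2, 6]]) = 10, so the inverse is (1/10) * [[6, -7], [-2, 4]]
m = [[4.0, 7.0],
     [2.0, 6.0]]
m_inv = [[ 0.6, -0.7],
         [-0.2,  0.4]]

prod = matmul(m, m_inv)
# product should be (numerically close to) the identity
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(2) for j in range(2))
```

The same check works for the computed inverse of any of the routines here, up to floating-point tolerance.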


As of at least July 16, 2018, Numba has a fast matrix inverse. (You can see how they overload the standard NumPy inverse and other operations here.)

Here are the results of my benchmarking:

```python
import numpy as np
from numpy import linalg as nla
from scipy import linalg as sla
import numba

def gen_ex(d0):
    # random symmetric test matrix
    x = np.random.randn(d0, d0)
    return x.T + x

@numba.jit
def inv_nla_jit(A):
    return np.linalg.inv(A)

@numba.jit
def inv_sla_jit(A):
    return sla.inv(A)
```

For small matrices it is particularly fast:

```python
ex1 = gen_ex(4)
%timeit inv_nla_jit(ex1)  # NumPy + Numba
%timeit inv_sla_jit(ex1)  # SciPy + Numba
%timeit nla.inv(ex1)      # NumPy
%timeit sla.inv(ex1)      # SciPy
```

[Out]

```
2.54 µs ± 467 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
67.3 µs ± 9.18 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
63.5 µs ± 7.65 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
56.6 µs ± 5.03 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```

Notice that the speedup only works for the NumPy inverse, not SciPy (as expected).

Slightly larger matrix:

```python
ex2 = gen_ex(40)
%timeit inv_nla_jit(ex2)  # NumPy + Numba
%timeit inv_sla_jit(ex2)  # SciPy + Numba
%timeit nla.inv(ex2)      # NumPy
%timeit sla.inv(ex2)      # SciPy
```

[Out]

```
131 µs ± 12.9 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
278 µs ± 26.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
231 µs ± 24.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
189 µs ± 11.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```

So there's still a speedup here, but SciPy is catching up.


Here is another way, using Gaussian elimination instead:

```python
def eliminate(r1, r2, col, target=0):
    # subtract a multiple of r1 from r2 so that r2[col] becomes target
    fac = (r2[col] - target) / r1[col]
    for i in range(len(r2)):
        r2[i] -= fac * r1[i]

def gauss(a):
    for i in range(len(a)):
        if a[i][i] == 0:
            # zero pivot: swap in a lower row with a nonzero entry in column i
            for j in range(i+1, len(a)):
                if a[j][i] != 0:
                    a[i], a[j] = a[j], a[i]
                    break
            else:
                raise ValueError("Matrix is not invertible")
        # eliminate column i below the pivot
        for j in range(i+1, len(a)):
            eliminate(a[i], a[j], i)
    # back-substitution: eliminate above each pivot
    for i in range(len(a)-1, -1, -1):
        for j in range(i-1, -1, -1):
            eliminate(a[i], a[j], i)
    # scale each row so its diagonal entry becomes 1
    for i in range(len(a)):
        eliminate(a[i], a[i], i, target=1)
    return a

def inverse(a):
    tmp = [[] for _ in a]
    # augment each row of a with the matching row of the identity: [A | I]
    for i, row in enumerate(a):
        assert len(row) == len(a)
        tmp[i].extend(row + [0]*i + [1] + [0]*(len(a)-i-1))
    gauss(tmp)
    # after reduction to [I | A^-1], the right half is the inverse
    ret = []
    for i in range(len(tmp)):
        ret.append(tmp[i][len(tmp[i])//2:])
    return ret
```
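The core idea above is Gauss-Jordan elimination on the augmented matrix [A | I]: once the left half has been reduced to the identity, the right half is A⁻¹. Here is a minimal standalone sketch of that same idea on a hand-checkable 2×2 example (the example values are mine, and this sketch omits the row-swap handling the full answer includes):

```python
# Gauss-Jordan on [A | I]: reduce the left half to the identity,
# and the right half becomes A^-1. No pivot swaps in this tiny sketch.
a = [[2.0, 1.0],
     [5.0, 3.0]]
n = len(a)
# build the augmented matrix [A | I]
aug = [list(row) + [0.0]*i + [1.0] + [0.0]*(n-i-1) for i, row in enumerate(a)]

for i in range(n):
    piv = aug[i][i]
    aug[i] = [x / piv for x in aug[i]]  # scale pivot row so the pivot is 1
    for j in range(n):
        if j != i:
            fac = aug[j][i]
            # clear column i in every other row
            aug[j] = [x - fac*y for x, y in zip(aug[j], aug[i])]

inv = [row[n:] for row in aug]  # right half of [I | A^-1]
# det([[2, 1], [5, 3]]) = 1, so the inverse is [[3, -1], [-5, 2]]
```

Working the 2×2 case by hand like this is a good way to convince yourself the `inverse()` wrapper above is doing the right thing.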