Why does numpy.linalg.solve() offer more precise matrix inversions than numpy.linalg.inv()?

生来不讨喜 2020-12-01 04:31

I do not quite understand why numpy.linalg.solve() gives a more precise answer, whereas numpy.linalg.inv() breaks down somewhat, giving (what I b

1 Answer
  • 2020-12-01 05:24

    np.linalg.solve(A, b) does not compute the inverse of A. Instead it calls one of the gesv LAPACK routines, which first factorizes A using LU decomposition, then solves for x using forward and backward substitution (see here).
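    A minimal sketch of this (the random matrix and seed are illustrative choices, not from the answer): solve(A, b) returns x satisfying A·x = b directly, and the residual sits near machine precision.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    b = rng.standard_normal(5)

    # gesv under the hood: LU-factorize A, then forward/backward substitution
    x = np.linalg.solve(A, b)

    # the residual A·x - b should be tiny for a well-conditioned A
    print(np.linalg.norm(A @ x - b))
    ```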

    np.linalg.inv uses the same method to compute the inverse of A, by solving for A⁻¹ in A·A⁻¹ = I, where I is the identity*. The factorization step is exactly the same as above, but it takes more floating point operations to solve for A⁻¹ (an n×n matrix) than for x (an n-vector). Additionally, if you then wanted to obtain x via the product x = A⁻¹·b, the extra matrix multiplication would incur yet more floating point operations, and therefore slower performance and more numerical error.

    There's no need for the intermediate step of computing A⁻¹: it is faster and more accurate to obtain x directly.
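    The accuracy gap shows up clearly on an ill-conditioned system. The Hilbert matrix below is my own illustrative choice (it is a classic ill-conditioned example), not something from the answer:

    ```python
    import numpy as np

    # Build an n×n Hilbert matrix, H[i, j] = 1 / (i + j + 1) — famously ill-conditioned
    n = 10
    i, j = np.indices((n, n))
    A = 1.0 / (i + j + 1)
    b = np.ones(n)

    x_solve = np.linalg.solve(A, b)        # direct LU solve
    x_inv = np.linalg.inv(A) @ b           # detour through the explicit inverse

    # compare residuals ||A·x - b||; the direct solve is typically smaller
    r_solve = np.linalg.norm(A @ x_solve - b)
    r_inv = np.linalg.norm(A @ x_inv - b)
    print(r_solve, r_inv)
    ```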


    * The relevant bit of source for inv is here. Unfortunately it's a bit tricky to understand since it's templated C. The important thing to note is that an identity matrix is being passed to the LAPACK solver as parameter B.
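    The footnote's point, that inv amounts to passing the identity as the right-hand side B, can be reproduced from Python (the seed and size here are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))

    # inv(A) is (conceptually) the solution X of A·X = I
    X = np.linalg.solve(A, np.eye(4))
    print(np.allclose(X, np.linalg.inv(A)))
    ```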
