Alternatives or speedups for mpmath matrix inversion


Question


I'm writing some code in Python that requires frequently inverting large square matrices (100-200 rows/columns).

I'm hitting the limits of machine precision, so I've started trying to use mpmath to do arbitrary-precision matrix inversion, but it is very slow, even with gmpy.

Inverting random matrices of size 20, 30, and 60 at precision 30 (decimal) takes ~0.19, 0.60, and 4.61 seconds, whereas the same operations in Mathematica take 0.0084, 0.015, and 0.055 seconds.

This is using Python 3 and mpmath 0.17 (not sure of the gmpy version) on an Arch Linux machine. I'm not sure why mpmath is so much slower, but is there any open-source library that will approach the speeds Mathematica manages here (even half as fast would be good)?

I don't need arbitrary precision -- 128 bits would probably be good enough. I also just don't understand how mpmath can be so much slower; it must be using a very different matrix inversion algorithm. To be specific, I'm computing M**-1.

Is there a way to get it to use a faster algorithm, or otherwise to speed it up?
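
For reference, a minimal sketch of the kind of timing I mean (randmatrix just supplies a test matrix; this is illustrative, not my exact benchmark):

import timeit
from mpmath import mp, randmatrix

mp.dps = 30                # 30 decimal digits of working precision

M = randmatrix(60)         # random 60x60 test matrix
t = timeit.timeit(lambda: M**-1, number=1)
print('mpmath inversion took %.2f s' % t)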


Answer 1:


Linear algebra in mpmath is rather slow, unfortunately. There are many libraries that solve this problem much better (Sage, for example). That said, as a follow-up to Stuart's suggestion, it is fairly easy to do reasonably fast high-precision matrix multiplication in Python without installing any libraries, using fixed-point arithmetic. Here is a version using mpmath matrices for input and output:

from mpmath import mp, mpf

def fixmul(A, B, prec):
    m = A.rows
    p = B.rows
    n = B.cols
    # Convert the entries to fixed-point integers scaled by 2**prec.
    A = [[A[i,j].to_fixed(prec) for j in range(p)] for i in range(m)]
    B = [[B[i,j].to_fixed(prec) for j in range(n)] for i in range(p)]
    C = [[0] * n for r in range(m)]
    for i in range(m):
        for j in range(n):
            s = 0
            for k in range(p):
                s += A[i][k] * B[k][j]   # pure (fast) integer arithmetic
            C[i][j] = s
    # Each product carries a 2**(2*prec) scale factor; undo it here.
    return mp.matrix(C) * mpf(2)**(-2*prec)

At 256-bit precision, this multiplies two 200x200 matrices 16 times faster than mpmath for me. It is also not difficult to write a matrix inversion routine directly this way. Of course, if the matrix entries are very large or very small, you will want to rescale them first. A more solid solution would be to write your own matrix functions using the floating-point types in gmpy, which should be essentially as fast.
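
For example, a quick sanity check of fixmul against mpmath's built-in product (a sketch; randmatrix just generates test data, and mnorm measures the discrepancy):

from mpmath import mp, randmatrix, mnorm

mp.prec = 256                    # 256-bit working precision

A = randmatrix(50)
B = randmatrix(50)
C_fix = fixmul(A, B, mp.prec)    # fixed-point product
C_ref = A * B                    # mpmath's (slower) built-in product
print(mnorm(C_fix - C_ref))      # discrepancy should be tiny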




Answer 2:


I'm assuming that double precision is fine for the accuracy of the final result, but that for certain matrices it causes problems in the intermediate results of the inversion. In that case, let's treat the result of an ordinary numpy (double-precision) inverse as merely a good approximation, and then use it as the starting point for a few iterations of Newton's method to solve for the inverse.

Let A be the matrix we're trying to invert, and X be our estimate of the inverse. An iteration of Newton's method simply consists of:

X = X*(2I - AX)

For large matrices, the effort to compute a few iterations of the above is almost insignificant compared with that of finding the inverse, and it can greatly improve the accuracy of your final result. Give this a try.

BTW, I is the identity matrix in the above equation.
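
Here is a sketch of that refinement, seeding with numpy's double-precision inverse and iterating in mpmath (the function name, the 128-bit target, and the iteration count are my own illustrative choices):

import numpy as np
from mpmath import mp, matrix, eye, mnorm

mp.prec = 128    # target precision: 128 bits

def refine_inverse(A_np, iters=3):
    # Newton's method X = X*(2I - A*X). Each pass roughly doubles the
    # number of correct digits, so numpy's ~16 digits reach 128-bit
    # (~38-digit) accuracy within two or three iterations.
    n = A_np.shape[0]
    A = matrix(A_np.tolist())                 # exact copies of the doubles
    X = matrix(np.linalg.inv(A_np).tolist())  # double-precision seed
    for _ in range(iters):
        X = X * (2*eye(n) - A*X)
    return X

A_np = np.random.rand(100, 100)
X = refine_inverse(A_np)
print(mnorm(matrix(A_np.tolist())*X - eye(100)))  # residual: tiny if A is well-conditioned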

EDIT: here is some code to test the precision of a floating-point type.

import numpy as np

x = np.float128('1.0')      # the float type under test
wun = x
two = wun + wun
cnt = 1
while True:
    x = x / two             # halve x until adding it to 1 has no effect
    y = wun + x
    if y <= wun:
        break
    cnt += 1

# Prints 53 for float64; 64 for np.float128 on x86 (80-bit extended).
print('The effective number of mantissa bits is', cnt)



Answer 3:


The Multiprecision Toolbox for MATLAB gives the following timings at 128-bit precision (Core i7 930):

20x20   - 0.007 sec
30x30   - 0.019 sec
60x60   - 0.117 sec
200x200 - 3.2 sec

Note that these timings would be much lower on a modern CPU.



Source: https://stackoverflow.com/questions/15322686/alternatives-or-speedups-for-mpmath-matrix-inversion
