Can I stably invert a Vandermonde matrix with many small values in R?

Submitted by 爱⌒轻易说出口 on 2019-12-20 07:55:11

Question


Update on this question: I have closed this question and will post a new question focusing on the R package Rmpfr.

To conclude this question and to help others, I will post my code for inverting a Vandermonde matrix from its explicit inverse formula. The generating terms are the x's in [here]1.

I am not a skilled programmer, so I don't expect my code to be the most efficient; I post it here because it is better than nothing.

library(gtools)

# Input: the vector of generating terms of the Vandermonde matrix.
FMinv <- function(base) {
  n <- length(base)
  inv <- matrix(nrow = n, ncol = n)
  for (i in 1:n) {
    for (j in 1:n) {
      if (j < n) {
        # all (n - j)-subsets of the indices 1..n
        a <- as.matrix(combinations(n, n - j, repeats.allowed = FALSE))
        arow.tmp <- nrow(a)
        # locate and drop the rows of a that contain index i
        b <- which(a == i) %% nrow(a)
        nrowdel <- length(b)
        b <- replace(b, b == 0, nrow(a))
        a <- a[-b, ]
        if (arow.tmp - nrowdel > 1) {
          a <- as.matrix(a)
          nrowa <- nrow(a)
          prods <- vector()
          for (k in 1:nrowa) {
            prods[k] <- prod(base[a[k, ]])
          }
          # elementary symmetric polynomial in base[-i]
          num <- sum(prods)
        }
        if (arow.tmp - nrowdel == 1) {
          num <- prod(base[a])
        }
        den <- base[i] * prod(base[-i] - base[i])
        inv[i, j] <- (-1)^(j - 1) * num / den
      }
      if (j == n) {
        inv[i, j] <- 1 / (base[i] * prod(base[i] - base[-i]))
      }
    }
  }
  return(inv)
}
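For comparison, here is a sketch of the same explicit formula using only base R's `combn` instead of `gtools::combinations`; the helper name `vand_inv` and the index-based subset enumeration are my own choices, not from the original post. It follows the same convention as `FMinv` above, i.e. V[i, j] = x[j]^i with powers 1 through n.

```r
# Explicit inverse of the Vandermonde matrix V with V[i, j] = x[j]^i,
# built from elementary symmetric polynomials; base R only.
vand_inv <- function(x) {
  n <- length(x)
  inv <- matrix(0, n, n)
  for (i in 1:n) {
    den <- x[i] * prod(x[i] - x[-i])  # common denominator for row i
    v <- x[-i]                        # generating terms excluding x[i]
    for (j in 1:n) {
      if (j < n) {
        # elementary symmetric polynomial of degree n - j in v;
        # enumerate index subsets so a length-1 v is not misread by combn
        idx <- combn(seq_len(n - 1), n - j)
        num <- sum(apply(idx, 2, function(k) prod(v[k])))
        inv[i, j] <- (-1)^(n - j) * num / den
      } else {
        inv[i, j] <- 1 / den
      }
    }
  }
  inv
}
```

For a well-separated generator such as `base <- c(1, 2, 3)` and `V <- t(outer(base, 1:3, "^"))`, `vand_inv(base) %*% V` recovers the identity up to rounding; for generators like 1e-20 it suffers the same cancellation discussed in the answer below.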

Older version:

I found an explicit inversion formula.

For a 3 x 3 matrix, the formula works. Before I look further into my situation, can someone help me program this? Writing it out term by term is tiresome.

a <- (1 / 10) ^ c(0, 20, 30)
V <- t(outer(a, 1:length(a), "^"))
Vinv = matrix(nrow=3, ncol=3)
Vinv[1,1] = a[2]*a[3]/(a[1]*(a[2]-a[1])*(a[3]-a[1]))
Vinv[1,2] = -(a[2]+a[3])/(a[1]*(a[2]-a[1])*(a[3]-a[1]))
Vinv[1,3] = 1/(a[1]*(a[1]-a[2])*(a[1]-a[3]))
Vinv[2,1] = a[1]*a[3]/(a[2]*(a[1]-a[2])*(a[3]-a[2]))
Vinv[2,2] = -(a[1]+a[3])/(a[2]*(a[1]-a[2])*(a[3]-a[2]))
Vinv[2,3] = 1/(a[2]*(a[2]-a[1])*(a[2]-a[3]))
Vinv[3,1] = a[1]*a[2]/(a[3]*(a[1]-a[3])*(a[2]-a[3]))
Vinv[3,2] = -(a[1]+a[2])/(a[3]*(a[1]-a[3])*(a[2]-a[3]))
Vinv[3,3] = 1/(a[3]*(a[3]-a[1])*(a[3]-a[2]))
det(V %*% Vinv)
#[1] 1

Vi = solve(V)
#Error in solve.default(V) : 
#  Lapack routine dgesv: system is exactly singular: U[3,3] = 0

Old question:

I am using R for some calculations that involve very small values (like 1e-200).

Suppose I have a full rank n x n matrix A. I know I can find the inverse matrix by B <- solve(A, tol = 1e-320). Theoretically, det(A %*% B) should be 1. However, R will give me 0.

I am assuming this is caused by the low precision of %*%. Is there any way to increase its precision?

c = 1/(1:50)
A = matrix(nrow = 50, ncol = 50)
for (i in 1:50){
  for (j in 1:50){
    A[i,j] = c[j] ^ (i-1)
  }
}
B = solve(A, tol = 1e-324)
det(A %*% B)
#[1] 0
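A quick way to see why this fails is to look at the condition number: anything far above 1/.Machine$double.eps (about 4.5e15) means the computed inverse cannot be trusted in double precision. A minimal check, rebuilding the same A with outer:

```r
# Same A as the loop above: A[i, j] = c[j]^(i - 1)
c <- 1 / (1:50)
A <- t(outer(c, 0:49, "^"))

kappa(A)                 # condition number estimate: astronomically large
1 / .Machine$double.eps  # ~4.5e15, the usable limit in double precision
```

Once kappa(A) exceeds that limit, solve() and det() results are dominated by rounding error regardless of the tol setting.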

Answer 1:


Update on the explicit inverse formula of a Vandermonde matrix

The explicit inverse formula is attractive from a pure-math point of view, but it is no more stable than LU factorization. You still need to perform floating-point addition, so you still lose significant digits. With the 3 x 3 example matrix in my original answer, the formula needs to compute, in the denominators,

1 - e1,  1 - e2,  e1 - e2

And for e1 = 1e-20 and e2 = 1e-30, the numerically computed values of 1 - e1 and 1 - e2 are just 1.
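This absorption is easy to demonstrate directly; machine epsilon for doubles is about 2.22e-16, and anything far below it simply vanishes when added to 1:

```r
e1 <- 1e-20
e2 <- 1e-30

1 - e1 == 1          # TRUE: e1 is far below machine epsilon, so it is absorbed
1 - e2 == 1          # TRUE: likewise for e2
.Machine$double.eps  # ~2.220446e-16
```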

You updated your question, applying this formula to another 3 x 3 matrix, but have you realized that the numerically computed values of, say,

a[1] + a[2], a[2] - a[1], a[3] - a[1]

are all wrong? So the computed inverse Vinv is not analytically correct at all.

In fact, V %*% Vinv cannot be computed correctly, either.

round(V %*% Vinv)
#     [,1]   [,2]  [,3]
#[1,]    1 -16384 16384
#[2,]    0      1     0
#[3,]    0      0     1

Take the (1, 2) element, for example; you need to sum these values:

x <- V[1, ] * Vinv[, 2]
x
#[1] -1e-20  1e+20 -1e+20

It looks like sum(x) should be 0, but

sum(x)
#[1] -16384

sum(x * 1e-20) * 1e20
#[1] -22204.46

scal <- max(abs(x))
sum(x / scal) * scal
#[1] -11102.23

There is really no point in trying all sorts of rescaling tricks once the values are beyond machine precision; the result is essentially arbitrary.


Original answer on LU factorization of a Vandermonde matrix

You have a Vandermonde matrix, a matrix infamous for its ill-conditioning. Let me quickly generate a small one.

a <- (1 / 10) ^ c(0, 20, 30)
V <- outer(a, 0:(length(a) - 1), "^")
V
#     [,1]  [,2]  [,3]
#[1,]    1 1e+00 1e+00
#[2,]    1 1e-20 1e-40
#[3,]    1 1e-30 1e-60

solve(V)

Run this solve a few times; you may get either of two different errors:

Error in solve.default(V) : system is computationally singular: reciprocal condition number = 1.66667e-61

Error in solve.default(V) : Lapack routine dgesv: system is exactly singular: U[3,3] = 0

Now set tol = 0,

Vi <- solve(V, tol = 0)

Run this a few times; it might give you the following Vi,

#     [,1]   [,2]   [,3]
#[1,]    0  0e+00  0e+00
#[2,]    1  1e+60 -1e+60
#[3,]    0 -1e+60  1e+60

or it might give you

Error in solve.default(V) : Lapack routine dgesv: system is exactly singular: U[3,3] = 0

solve has no numerically stable behavior here. Even if you are not stopped by an error, the resulting Vi is just numerically wrong, as Vi[1, ] is zero.

You get this because you are forcing the machine to do something impossible. Inverting the above V is simply not doable in finite-precision floating-point arithmetic.
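If the actual goal is to solve a system V a = f (with V[i, j] = x[i]^(j-1)) rather than to form V^{-1} explicitly, specialized Vandermonde solvers are better behaved. Below is a sketch of a Björck-Pereyra-style primal solve (Newton divided differences followed by Newton-to-monomial conversion); the function name bp_solve is my own, and note that it helps only when the nodes x are reasonably separated; it cannot rescue nodes like 1e-20 and 1e-30 either.

```r
# Björck-Pereyra-style solve of the primal Vandermonde system V a = f,
# where V[i, j] = x[i]^(j - 1), without ever forming V or its inverse.
# Assumes distinct nodes x and length(x) >= 2.
bp_solve <- function(x, f) {
  n <- length(x)
  c <- f
  # Stage 1: Newton divided differences of the data (x, f)
  for (k in 1:(n - 1)) {
    for (i in n:(k + 1)) {
      c[i] <- (c[i] - c[i - 1]) / (x[i] - x[i - k])
    }
  }
  # Stage 2: convert the Newton form to monomial coefficients a
  a <- c
  for (k in (n - 1):1) {
    for (i in k:(n - 1)) {
      a[i] <- a[i] - x[k] * a[i + 1]
    }
  }
  a
}
```

For example, with x <- c(1, 2, 3) and f <- c(2, 3, 6), bp_solve(x, f) returns (3, -2, 1), the coefficients of the interpolating polynomial 3 - 2t + t^2, i.e. the solution of outer(x, 0:2, "^") %*% a = f.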


Appendix: Markdown (MathJax support needed) for the derivation:

Consider the LU of a Vandermonde matrix:

\begin{equation}
V = \begin{pmatrix}
1 & 1 & 1\\
1 & e_1 & e_1^2\\
1 & e_2 & e_2^2
\end{pmatrix} := LU
\end{equation}

or equivalently a Gaussian elimination $L^{-1}V = U$.

**step 0:** initialize $U = V$, $L^{-1} = I$ and $L$ as a unit lower triangular matrix.

\begin{equation}
U = \begin{pmatrix}
1 & 1 & 1\\
1 & e_1 & e_1^2\\
1 & e_2 & e_2^2
\end{pmatrix}, \quad L^{-1} = \begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}, \quad L = \begin{pmatrix}
1 & 0 & 0\\
\star & 1 & 0\\
\star & \star & 1
\end{pmatrix} 
\end{equation}

**step 1:** For $U$ and $L^{-1}$, add row 1 $\times\ (-1)$ to row 2 and row 3. For $L$, fill in $L(2,1) = L(3,1) = 1$.

\begin{equation}
U = \begin{pmatrix}
1 & 1 & 1\\
0 & e_1 - 1 & e_1^2 - 1\\
0 & e_2 - 1 & e_2^2 - 1
\end{pmatrix},\quad L^{-1} = \begin{pmatrix}
1 & 0 & 0\\
-1 & 1 & 0\\
-1 & 0 & 1
\end{pmatrix},\quad L =\begin{pmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
1 & \star & 1
\end{pmatrix}
\end{equation}

In finite-precision floating-point computations, for $0 < e_1 < 2.22\times10^{-16}$ (the machine epsilon), we simply get $e_1 - 1 = -1$. So when $e_1 = 10^{-20}$ and $e_2 = 10^{-30}$, these matrices are known to the machine as

\begin{equation}
U = \begin{pmatrix}
1 & 1 & 1\\
0 & - 1 & - 1\\
0 & - 1 & - 1
\end{pmatrix},\quad L^{-1} = \begin{pmatrix}
1 & 0 & 0\\
-1 & 1 & 0\\
-1 & 0 & 1
\end{pmatrix},\quad L =\begin{pmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
1 & \star & 1
\end{pmatrix}
\end{equation}

**step 2:** For $U$ and $L^{-1}$, add row 2 $\times\ (-1)$ to row 3. For $L$, fill in $L(3,2) = 1$.

\begin{equation}
U = \begin{pmatrix}
1 & 1 & 1\\
0 & -1 & -1\\
0 & 0 & 0
\end{pmatrix}, \quad
L^{-1} = \begin{pmatrix}
1 & 0 & 0\\
-1 & 1 & 0\\
0 & -1 & 1
\end{pmatrix},\quad L =\begin{pmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
1 & 1 & 1
\end{pmatrix}
\end{equation}

You will see that 

 - $U$ is not invertible, so $V^{-1} = U^{-1}L^{-1}$ does not exist.
 - $LU = \begin{pmatrix}1 & 1 & 1\\1 & 0 & 0\\1 & 0 & 0\end{pmatrix}\neq V$ 
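The elimination above can be replayed numerically in R to confirm that the machine really produces a singular U (variable names below are my own):

```r
e1 <- 1e-20
e2 <- 1e-30
V <- rbind(c(1, 1, 1),
           c(1, e1, e1^2),
           c(1, e2, e2^2))

U <- V
U[2, ] <- U[2, ] - U[1, ]  # step 1: both rows become (0, -1, -1),
U[3, ] <- U[3, ] - U[1, ]  #         since e - 1 and e^2 - 1 round to -1
U[3, ] <- U[3, ] - U[2, ]  # step 2: row 3 becomes exactly (0, 0, 0)
U
```

The pivot U[3, 3] is exactly 0, which is precisely the "U[3,3] = 0" reported by the Lapack dgesv error.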


Source: https://stackoverflow.com/questions/51662728/can-i-stably-invert-a-vandermonde-matrix-with-many-small-values-in-r
