I'm trying to convert this function from a jagged array to a 2D array, and I'm not able to convert everything.

Original function:
public static double[][] InvertMatrix(double[][] A)
NOTE: your jagged array should be rectangular, i.e. all sub-arrays must have the same length; otherwise you cannot convert it to a 2D array.
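If you want to do that conversion programmatically, a minimal sketch could look like this (To2D is a hypothetical helper name, not part of your code; it assumes using System; and throws when the rows are not all the same length):

static double[,] To2D(double[][] source)
{
    int rows = source.Length;
    int cols = rows > 0 ? source[0].Length : 0;
    double[,] result = new double[rows, cols];
    for (int i = 0; i < rows; i++)
    {
        //reject non-rectangular input, since a 2D array cannot represent it
        if (source[i].Length != cols)
            throw new ArgumentException("Jagged array is not rectangular.", nameof(source));
        for (int j = 0; j < cols; j++)
            result[i, j] = source[i][j];
    }
    return result;
}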
the part:
double[][] x = new double[n][];
for (int i = 0; i < n; i++)
{
    //how to convert this line?
    x[i] = new double[A[i].Length];
}
is just initializing a new jagged array, which can easily be replaced with
double[,] x = new double[A.GetLength(0), A.GetLength(1)];
and in
//This one too?!
e = new double[A[i].Length];
you are essentially creating an array with the same length as sub-array i of A, so we can replace it with
e = new double[A.GetLength(1)]; //NOTE: second dimension
As mentioned before, all sub-array lengths are equal, so we can use the length of the second dimension instead.
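One more pitfall to be aware of: on a 2D array, Length returns the total number of elements, not the number of rows, so the n variable must be initialized differently in the converted method. A quick illustration:

double[,] m = new double[3, 4];
Console.WriteLine(m.Length);        // 12 -- total number of elements
Console.WriteLine(m.GetLength(0));  // 3  -- number of rows
Console.WriteLine(m.GetLength(1));  // 4  -- number of columns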
With those changes, the whole method becomes:
public static double[,] InvertMatrix2D(double[,] A)
{
    //number of rows; note that A.Length would give rows * columns for a 2D array
    int n = A.GetLength(0);
    //e will represent each column in the identity matrix
    double[] e;
    //x will hold the inverse matrix to be returned
    double[,] x = new double[A.GetLength(0), A.GetLength(1)];
    /*
     * solve will contain the vector solution for the LUP decomposition as we solve
     * for each vector of x. We will combine the solutions into the double[,] array x.
     * */
    double[] solve;
    //Get the LU matrix and P matrix (as an array)
    Tuple<double[,], int[]> results = LUPDecomposition(A);
    double[,] LU = results.Item1;
    int[] P = results.Item2;
    /*
     * Solve AX = e for each column ei of the identity matrix using LUP decomposition
     * */
    for (int i = 0; i < n; i++)
    {
        e = new double[A.GetLength(1)]; //NOTE: second dimension
        e[i] = 1;
        solve = LUPSolve(LU, P, e);
        for (int j = 0; j < solve.Length; j++)
        {
            x[j, i] = solve[j];
        }
    }
    return x;
}
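As a quick sanity check, you can multiply an input matrix by its computed inverse and verify you get (approximately) the identity matrix. This sketch assumes your LUPDecomposition and LUPSolve helpers have also been ported to double[,]; the 3x3 input is just an arbitrary invertible matrix:

double[,] A =
{
    { 4, 3, 2 },
    { 1, 1, 1 },
    { 2, 5, 3 }
};
double[,] inv = InvertMatrix2D(A);
int n = A.GetLength(0);
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n; j++)
    {
        //compute entry (i, j) of A * inv; should be ~1 on the diagonal, ~0 elsewhere
        double sum = 0;
        for (int k = 0; k < n; k++)
            sum += A[i, k] * inv[k, j];
        Console.Write($"{sum:0.###} ");
    }
    Console.WriteLine();
}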