Is there a method in numpy for calculating the Mean Squared Error between two matrices?
I've tried searching but found none. Is it under a different name?
You can use:
mse = ((A - B)**2).mean(axis=ax)
Or
mse = (np.square(A - B)).mean(axis=ax)
- ax=0: the average is performed along the rows, for each column, returning an array
- ax=1: the average is performed along the columns, for each row, returning an array
- ax=None: the average is performed element-wise over the whole array, returning a scalar

This isn't part of numpy, but it will work with numpy.ndarray objects. A numpy.matrix can be converted to a numpy.ndarray and a numpy.ndarray can be converted to a numpy.matrix.
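For instance, a minimal sketch of the three cases (A and B here are small illustrative arrays):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 2.0]])

print(((A - B)**2).mean(axis=0))     # per-column MSE: [ 2.  5. 10.]
print(((A - B)**2).mean(axis=1))     # per-row MSE: [1.66666667 9.66666667]
print(((A - B)**2).mean(axis=None))  # overall MSE: 5.666666666666667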
from sklearn.metrics import mean_squared_error
mse = mean_squared_error(A, B)
See the scikit-learn mean_squared_error documentation for how to control the averaging; its multioutput parameter plays the role of axis.
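A minimal sketch of that control (A and B are illustrative arrays; multioutput='raw_values' returns one error per output column):

import numpy as np
from sklearn.metrics import mean_squared_error

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[1.0, 1.0],
              [1.0, 1.0]])

print(mean_squared_error(A, B))                            # scalar: 3.5
print(mean_squared_error(A, B, multioutput='raw_values'))  # per-column: [2. 5.]

Note that scikit-learn treats rows as samples and columns as outputs, so 'raw_values' corresponds to axis=0 in the numpy versions above.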
Even more numpy
np.square(np.subtract(A, B)).mean()
Just for kicks
mse = (np.linalg.norm(A - B)**2) / A.size
(Note the denominator: for a 2-D array, len(A) only counts rows, so A.size, the total number of elements, is the right divisor.)
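A quick sanity check that the norm formulation agrees with the direct computation (example arrays assumed):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.zeros((2, 2))

# For 2-D input, np.linalg.norm returns the Frobenius norm, whose square
# is the sum of squared element-wise differences.
print((np.linalg.norm(A - B)**2) / A.size)  # 7.5
print(((A - B)**2).mean())                  # 7.5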
Another alternative to the accepted answer that avoids any issues with matrix multiplication (on numpy.matrix objects, **2 is a matrix power rather than an element-wise square, as demonstrated below):
def MSE(Y, YH):
    return np.square(Y - YH).mean()
From the documentation for np.square: "Return the element-wise square of the input."
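A short demonstration of the pitfall this avoids, using the (now discouraged) numpy.matrix class; the values are illustrative:

import numpy as np

Y  = np.matrix([[1.0, 2.0],
                [3.0, 4.0]])
YH = np.matrix([[0.0, 0.0],
                [0.0, 0.0]])

# On numpy.matrix, **2 is a matrix power (a matrix product), not element-wise:
print((Y - YH)**2)               # [[ 7. 10.] [15. 22.]] -- wrong for MSE
# np.square is always element-wise, so the MSE comes out right:
print(np.square(Y - YH))         # [[ 1.  4.] [ 9. 16.]]
print(np.square(Y - YH).mean())  # 7.5, what MSE(Y, YH) returns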
The standard numpy methods for calculating the mean squared error (variance) and its square root (standard deviation) are numpy.var() and numpy.std(); see the numpy documentation for numpy.var and numpy.std. They apply to matrices and have the same syntax as numpy.mean().
I suppose that the question and the preceding answers might have been posted before these functions became available.
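For completeness, a small sketch of how these relate (arrays illustrative): np.var is the mean squared deviation from the mean, so it matches the MSE between two arrays only when their difference has zero mean.

import numpy as np

A = np.array([1.0, 2.0, 3.0, 4.0])
B = np.zeros(4)

d = A - B
print(np.mean(d**2))              # MSE between A and B: 7.5
print(np.var(d))                  # 1.25 -- equals the MSE only if d has zero mean
print(np.var(d) + np.mean(d)**2)  # variance plus squared mean recovers the MSE: 7.5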