I have a matrix of size N*M and I want to find the mean value for each row. The values are from 1 to 5, and entries that do not have any value are set to 0. However, when I take the mean of each row, the zeros are counted as well and skew the result. How can I compute the mean over only the non-zero entries of each row?
Get the count of non-zeros in each row and use it to divide the sum along each row. With NumPy imported as np, the implementation would look something like this -

np.true_divide(matrix.sum(1), (matrix != 0).sum(1))
If you are on an older version of NumPy, you can replace np.true_divide with a float conversion of the count, like so -

matrix.sum(1) / (matrix != 0).sum(1).astype(float)
Sample run -

In [160]: matrix
Out[160]:
array([[0, 0, 1, 0, 2],
       [1, 0, 0, 2, 0],
       [0, 1, 1, 0, 0],
       [0, 2, 2, 2, 2]])

In [161]: np.true_divide(matrix.sum(1), (matrix != 0).sum(1))
Out[161]: array([ 1.5,  1.5,  1. ,  2. ])
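
Continuing the same (hypothetical) session, the float-conversion variant produces the identical result -

In [162]: matrix.sum(1) / (matrix != 0).sum(1).astype(float)
Out[162]: array([ 1.5,  1.5,  1. ,  2. ])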
Another way to solve the problem would be to replace the zeros with NaNs and then use np.nanmean, which ignores the NaNs and in effect the original zeros, like so -

np.nanmean(np.where(matrix != 0, matrix, np.nan), 1)
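
Applied to the same sample matrix, this gives the same result -

In [163]: np.nanmean(np.where(matrix != 0, matrix, np.nan), 1)
Out[163]: array([ 1.5,  1.5,  1. ,  2. ])

Note that with either approach, a row containing only zeros raises a RuntimeWarning and produces nan for that row.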
From a performance point of view, I would recommend the first approach, as the np.nanmean route creates a full floating-point copy of the matrix.
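
If you want to verify this on your own data, here is a minimal benchmark sketch; the matrix size and repeat count are arbitrary assumptions -

import timeit
import numpy as np

# Hypothetical test matrix: values 0-5, with 0 meaning "no value"
big = np.random.randint(0, 6, (1000, 1000))

# Time the count-and-divide approach
t_div = timeit.timeit(
    lambda: np.true_divide(big.sum(1), (big != 0).sum(1)), number=100)

# Time the nanmean approach
t_nan = timeit.timeit(
    lambda: np.nanmean(np.where(big != 0, big, np.nan), 1), number=100)

print('count-and-divide: %.4f s' % t_div)
print('nanmean:          %.4f s' % t_nan)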