I'd like to drop all rows from a table when every value in the row is NaN or 0. I know there's a way to do this using pandas, i.e. pandas.dropna(how=...), but how can I do it directly in NumPy?
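For reference, the pandas route I had in mind is roughly the sketch below (the DataFrame name df and the how="all" choice are my assumptions); what I'm after is the NumPy equivalent.

import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 0, 0], [0, np.nan, 0], [np.nan, np.nan, np.nan], [2, 3, 4]])

# treat 0 like a missing value, then drop rows where everything is missing
cleaned = df.replace(0, np.nan).dropna(how="all")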
import numpy as np
a = np.array([
[1, 0, 0],
[0, np.nan, 0],
[0, 0, 0],
[np.nan, np.nan, np.nan],
[2, 3, 4]
])
# True for rows where every entry is NaN or 0
mask = np.all(np.isnan(a) | np.equal(a, 0), axis=1)
# keep only the remaining rows
a[~mask]
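For the example array above, this should keep only the rows that still contain a non-zero, non-NaN value:

array([[1., 0., 0.],
       [2., 3., 4.]])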
If you only need to strip NaN values from a 1-D array, a list comprehension can be used as a one-liner.
>>> a = np.array([65.36512 , 39.98848 , 28.25152 , 37.39968 , 59.32288 , 40.85184 ,
...               71.98208 , 41.7152 , 33.71776 , 38.5504 , 21.34656 , 37.97504 ,
...               57.5968 , 30.494656, 80.03776 , 33.94688 , 37.45792 , 27.617664,
...               15.59296 , 27.329984, 45.2256 , 61.27872 , 57.8848 , 87.4592 ,
...               34.29312 , 85.15776 , 46.37696 , 79.11616 , np.nan, np.nan])
>>> np.array([i for i in a if not np.isnan(i)])
array([65.36512 , 39.98848 , 28.25152 , 37.39968 , 59.32288 , 40.85184 ,
       71.98208 , 41.7152 , 33.71776 , 38.5504 , 21.34656 , 37.97504 ,
       57.5968 , 30.494656, 80.03776 , 33.94688 , 37.45792 , 27.617664,
       15.59296 , 27.329984, 45.2256 , 61.27872 , 57.8848 , 87.4592 ,
       34.29312 , 85.15776 , 46.37696 , 79.11616 ])
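For a 1-D array like this, a vectorized alternative (my addition, same result) is plain boolean indexing, which avoids the Python-level loop:

a_clean = a[~np.isnan(a)]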
I like this approach, which uses np.nan_to_num so that NaNs are treated as zeros while building the mask:
import numpy as np
arr = np.array([[ np.nan, np.nan],
[ -1., np.nan],
[ np.nan, -2.],
[ np.nan, np.nan],
[ np.nan, 0.]])
# keep rows that still contain at least one non-zero, non-NaN value
mask = (np.nan_to_num(arr) != 0).any(axis=1)
Out:

>>> arr[mask]
array([[-1., nan],
       [nan, -2.]])
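To see why this works: np.nan_to_num replaces NaN with 0, so a row survives exactly when it still has a non-zero entry after the conversion. For the arr above, the intermediate array should look like:

>>> np.nan_to_num(arr)
array([[ 0.,  0.],
       [-1.,  0.],
       [ 0., -2.],
       [ 0.,  0.],
       [ 0.,  0.]])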
This will remove all rows that are either entirely zeros or entirely NaNs:
mask = np.all(np.isnan(arr), axis=1) | np.all(arr == 0, axis=1)
arr = arr[~mask]
And this will remove all rows in which every element is either a zero or a NaN (so mixed rows are dropped too); note the parentheses around arr == 0, which are needed because | binds more tightly than ==:
mask = np.all(np.isnan(arr) | (arr == 0), axis=1)
arr = arr[~mask]
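The difference shows up on a mixed row such as [0, nan, 0]: the first mask keeps it (it is neither all-zero nor all-NaN), while the second drops it. A quick sketch, reusing the array a from the first answer:

import numpy as np

a = np.array([
    [1, 0, 0],
    [0, np.nan, 0],
    [0, 0, 0],
    [np.nan, np.nan, np.nan],
    [2, 3, 4]
])

# variant 1: drop rows that are all-zero OR all-NaN
mask1 = np.all(np.isnan(a), axis=1) | np.all(a == 0, axis=1)
# variant 2: drop rows in which every element is a zero or a NaN
mask2 = np.all(np.isnan(a) | (a == 0), axis=1)

print(a[~mask1])   # keeps [1, 0, 0], [0, nan, 0] and [2, 3, 4]
print(a[~mask2])   # keeps only [1, 0, 0] and [2, 3, 4]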
In addition: if you want to drop a row when it has a NaN or 0 in any single position, use np.any instead of np.all:
a = np.array([
[1, 0, 0],
[1, 2, np.nan],
[np.nan, np.nan, np.nan],
[2, 3, 4]
])
# True for rows that contain a NaN or 0 anywhere
mask = np.any(np.isnan(a) | np.equal(a, 0), axis=1)
a[~mask]
Output:
array([[ 2., 3., 4.]])