Question
It looks like pd.rolling_mean is becoming deprecated for ndarrays:
pd.rolling_mean(x, window=2, center=False)
FutureWarning: pd.rolling_mean is deprecated for ndarrays and will be removed in a future version
But it seems to be the fastest way of doing this, according to this SO answer.
Are there now new ways of doing this directly with SciPy or NumPy that are as fast as pd.rolling_mean?
Answer 1:
EDIT -- Unfortunately, it looks like the new way is not nearly as fast:
New version of Pandas:
In [1]: x = np.random.uniform(size=100)
In [2]: %timeit pd.rolling_mean(x, window=2)
1000 loops, best of 3: 240 µs per loop
In [3]: %timeit pd.Series(x).rolling(window=2).mean()
1000 loops, best of 3: 226 µs per loop
In [4]: pd.__version__
Out[4]: '0.18.0'
Old version:
In [1]: x = np.random.uniform(size=100)
In [2]: %timeit pd.rolling_mean(x,window=2)
100000 loops, best of 3: 12.4 µs per loop
In [3]: pd.__version__
Out[3]: u'0.17.1'
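(A side note of mine, not from the original answer: pd.Series(x) is constructed inside the timed expression above, so part of the measured gap may be wrapper overhead rather than the rolling computation itself. A sketch that hoists the Series out so only the rolling step is timed:)
import timeit
import numpy as np
import pandas as pd

x = np.random.uniform(size=100)
s = pd.Series(x)  # build the wrapper once, outside the timed statement

# Time only the rolling computation, not the Series construction.
t = timeit.timeit(lambda: s.rolling(window=2).mean(), number=1000)
print('%.1f us per loop' % (t / 1000 * 1e6))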
Answer 2:
Looks like the new way is via methods on the DataFrame.rolling class (I guess you're meant to think of it sort of like a groupby):
http://pandas.pydata.org/pandas-docs/version/0.18.0/whatsnew.html
e.g.
x.rolling(window=2).mean()
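A minimal usage sketch (my addition; assumes pandas >= 0.18, and note that a plain ndarray has no .rolling method, so it has to be wrapped in a Series or DataFrame first):
import numpy as np
import pandas as pd

x = np.random.uniform(size=10)

# .rolling() returns a Rolling object that, much like a GroupBy, defers
# computation until you call an aggregation such as .mean() or .sum().
result = pd.Series(x).rolling(window=2).mean()

print(result)         # the first value is NaN: the window is not full yet
print(result.values)  # back to a plain ndarray, like pd.rolling_mean returned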
Answer 3:
Try this:
x.rolling(window=2, center=False).mean()
Answer 4:
I suggest scipy.ndimage.filters.uniform_filter1d, as in my answer to the linked question. It is also much faster for large arrays:
import numpy as np
import pandas as pd
from scipy.ndimage.filters import uniform_filter1d
N = 1000
x = np.random.random(100000)
%timeit pd.rolling_mean(x, window=N)
__main__:257: FutureWarning: pd.rolling_mean is deprecated for ndarrays and will be removed in a future version
The slowest run took 84.55 times longer than the fastest. This could mean that an intermediate result is being cached.
1 loop, best of 3: 7.37 ms per loop
%timeit uniform_filter1d(x, size=N)
10000 loops, best of 3: 190 µs per loop
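One caveat I should add (my addition, checked against the scipy docs, not part of the original answer): uniform_filter1d centers the window and reflects at the array edges by default, while pandas uses a trailing window padded with NaN, so the two calls above are not element-for-element identical. Shifting the filter with the origin parameter lines up the interior points; a sketch:
import numpy as np
import pandas as pd
from scipy.ndimage import uniform_filter1d  # newer import path

N = 1000
x = np.random.random(100000)

# origin=(N - 1) // 2 shifts the centered window back so it ends at the
# current element, matching pandas' trailing window away from the left edge.
trailing = uniform_filter1d(x, size=N, origin=(N - 1) // 2)

expected = pd.Series(x).rolling(window=N).mean().values
assert np.allclose(trailing[N - 1:], expected[N - 1:])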
Answer 5:
If your dimensions are homogeneous, you could try to implement an n-dimensional form of the summed area table used for two-dimensional images:
A summed area table is a data structure and algorithm for quickly and efficiently generating the sum of values in a rectangular subset of a grid.
Then, in this order, you could:
- Create the summed area table ("integral") of your array;
- Iterate to get the (quite cheap) sum of an n-dimensional kernel at a given position;
- Divide by the size of the n-dimensional volume of the kernel.
Unfortunately I cannot say whether this is efficient or not, but by the given premise, it should be.
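A sketch of the idea in NumPy (my construction, not from the answer; assumes a cubic window of edge length size and 'valid' output, i.e. no padding):
import numpy as np
from itertools import product

def boxcar_mean_nd(a, size):
    # n-D moving average over a cubic window via a summed area table.
    # Output is 'valid' only: each axis has length a.shape[k] - size + 1.
    a = np.asarray(a, dtype=float)
    nd = a.ndim

    # 1) Integral array: cumulative sum along every axis, with one plane
    #    of zeros prepended so the window sums below are uniform slices.
    sat = a
    for axis in range(nd):
        sat = sat.cumsum(axis=axis)
    sat = np.pad(sat, [(1, 0)] * nd)

    # 2) Inclusion-exclusion over the 2**nd corners of each window:
    #    constant work per output element, independent of the window size.
    total = np.zeros(tuple(n - size + 1 for n in a.shape))
    for corner in product((0, 1), repeat=nd):
        sign = (-1) ** (nd - sum(corner))
        idx = tuple(np.s_[size:] if c else np.s_[:-size] for c in corner)
        total += sign * sat[idx]

    # 3) Sums -> means: divide by the window volume.
    return total / size ** nd

In one dimension this collapses to the familiar cumsum trick: boxcar_mean_nd(x, N) matches pd.Series(x).rolling(N).mean() once the leading NaN values are dropped.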
Source: https://stackoverflow.com/questions/36274447/pd-rolling-mean-becoming-deprecated-alternatives-for-ndarrays