This is most likely something very basic, but I can't figure it out. Suppose that I have a Series like this:
s1 = pd.Series([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 9, 5])
How can I sum every 3 values, so that I end up with a new Series like [3, 6, 9, 12, 14]?
Here's a NumPy approach using np.bincount to handle a generic number of elements -
pd.Series(np.bincount(np.arange(s1.size)//3, s1))
Sample run -
In [42]: s1 = pd.Series([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 9, 5])
In [43]: pd.Series(np.bincount(np.arange(s1.size)//3, s1))
Out[43]:
0     3.0
1     6.0
2     9.0
3    12.0
4    14.0
dtype: float64
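To see why this works, note that np.arange(s1.size)//3 builds a group-id array that maps every 3 consecutive positions to the same bin, and np.bincount with s1 passed as the weights argument sums the weights falling into each bin. A minimal sketch spelling out the intermediate step (variable names ids and out are mine, not from the original):

```python
import numpy as np
import pandas as pd

s1 = pd.Series([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 9, 5])

# group ids: [0 0 0 1 1 1 2 2 2 3 3 3 4 4] - one bin per chunk of 3
ids = np.arange(s1.size) // 3

# bincount sums the weights (the series values) within each bin;
# the last, shorter chunk (9 + 5) is handled naturally
out = pd.Series(np.bincount(ids, weights=s1))
print(out.tolist())  # [3.0, 6.0, 9.0, 12.0, 14.0]
```

Because bincount accumulates floats, the result dtype is float64 even for integer input.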
If we really crave performance, and the length of the series is divisible by the window length, we can get a view into the series with s1.values, then reshape, and finally use np.einsum for the summation, like so -
pd.Series(np.einsum('ij->i', s1.values.reshape(-1,3)))
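Generalized to any window length, the view + einsum idea could be wrapped up as below (chunk_sum is a hypothetical helper name, not part of the original answer; it assumes the divisibility condition holds):

```python
import numpy as np
import pandas as pd

def chunk_sum(s, w):
    # assumes len(s) is divisible by w; reshape on the underlying
    # NumPy array returns a view, so no data is copied
    return pd.Series(np.einsum('ij->i', s.values.reshape(-1, w)))

s1 = pd.Series([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4])
print(chunk_sum(s1, 3).tolist())  # [3, 6, 9, 12]
```

Unlike the bincount version, this keeps the integer dtype, but it will raise on a series whose length is not a multiple of the window.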
Timings with the same benchmark dataset as used in @Nickil Maveli's post -
In [140]: s = pd.Series(np.repeat(np.arange(10**5), 3))
# @Nickil Maveli's soln
In [141]: %timeit pd.Series(np.add.reduceat(s.values, np.arange(0, s.shape[0], 3)))
100 loops, best of 3: 2.07 ms per loop
# Using views+sum
In [142]: %timeit pd.Series(s.values.reshape(-1,3).sum(1))
100 loops, best of 3: 2.03 ms per loop
# Using views+einsum
In [143]: %timeit pd.Series(np.einsum('ij->i',s.values.reshape(-1,3)))
1000 loops, best of 3: 1.04 ms per loop