Applying a custom groupby aggregate function to output a binary outcome in pandas python

天命终不由人 2020-12-28 20:06

I have a dataset of trader transactions where the variable of interest is Buy/Sell, which is binary and takes the value 1 if the transaction was a buy and 0 if it was a sell. I would like to group by trader and output 1 if the mean of Buy/Sell is greater than 0.5, 0 if it is less than 0.5, and NaN if it is exactly 0.5, together with the sum and count for each trader.

2 Answers
  • 2020-12-28 20:23

    Pandas cut() improves on @unutbu's answer below, getting the result in about half the time.

    import numpy as np
    import pandas as pd

    def using_select(df):
        grouped = df.groupby(['Trader'])
        result = grouped['Buy/Sell'].agg(['sum', 'count'])
        means = grouped['Buy/Sell'].mean()
        # 1 if a trader mostly buys, 0 if mostly sells, NaN for an exact tie
        result['Buy/Sell'] = np.select(condlist=[means > 0.5, means < 0.5],
                                       choicelist=[1, 0], default=np.nan)
        return result
    
    
    def using_cut(df):
        grouped = df.groupby(['Trader'])
        result = grouped['Buy/Sell'].agg(['sum', 'count', 'mean'])
        # bins [0, 0.5] -> label 0 and (0.5, 1] -> label 1
        result['Buy/Sell'] = pd.cut(result['mean'], [0, 0.5, 1], labels=[0, 1],
                                    include_lowest=True)
        # an exact tie (mean == 0.5) falls into the 0 bin, so reassign it to NaN
        result['Buy/Sell'] = np.where(result['mean'] == 0.5, np.nan, result['Buy/Sell'])
        return result
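
    As a sanity check on the binning logic: pd.cut with edges [0, 0.5, 1] sends any mean in [0, 0.5] to label 0 and any mean in (0.5, 1] to label 1, so an exact tie lands in the 0 bin and the np.where step reassigns it to NaN. A minimal sketch (the means Series here is made up for illustration):

    ```python
    import numpy as np
    import pandas as pd

    # toy per-trader means, chosen to cover both bins and the tie
    means = pd.Series([0.0, 0.3, 0.5, 0.7, 1.0])

    # include_lowest=True keeps 0.0 inside the first bin [0, 0.5]
    labels = pd.cut(means, [0, 0.5, 1], labels=[0, 1], include_lowest=True)
    print(list(labels))  # [0, 0, 0, 1, 1] -- the tie at 0.5 maps to 0

    # the follow-up np.where turns the tie into NaN
    fixed = np.where(means == 0.5, np.nan, labels)
    ```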
    

    using_cut() runs in 5.21 ms on average per loop on my system, whereas using_select() runs in 10.4 ms on average per loop.

    %timeit using_select(df)
    10.4 ms ± 1.07 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
    
    %timeit using_cut(df)
    5.21 ms ± 147 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
    
  • 2020-12-28 20:34
    import numpy as np
    import pandas as pd
    
    df = pd.DataFrame({'Buy/Sell': [1, 0, 1, 1, 0, 1, 0, 0],
                       'Trader': ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'C']})
    
    grouped = df.groupby(['Trader'])
    result = grouped['Buy/Sell'].agg(['sum', 'count'])
    means = grouped['Buy/Sell'].mean()
    result['Buy/Sell'] = np.select(condlist=[means>0.5, means<0.5], choicelist=[1, 0], 
        default=np.nan)
    print(result)
    

    yields

            Buy/Sell  sum  count
    Trader                      
    A            NaN    1      2
    B              1    2      3
    C              0    1      3
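
    The np.select call is what maps each trader's mean to the binary outcome: it walks condlist in order, takes the matching entry from choicelist for the first condition that is true, and falls through to default=np.nan for the tie. A small sketch using the three group means from the table above:

    ```python
    import numpy as np
    import pandas as pd

    # per-trader means from the sample data: A ties, B leans buy, C leans sell
    means = pd.Series([0.5, 2/3, 1/3], index=['A', 'B', 'C'])

    out = np.select(condlist=[means > 0.5, means < 0.5],
                    choicelist=[1, 0], default=np.nan)
    # A -> nan, B -> 1.0, C -> 0.0
    ```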
    

    My original answer used a custom aggregator, categorize:

    def categorize(x):
        m = x.mean()
        return 1 if m > 0.5 else 0 if m < 0.5 else np.nan
    result = df.groupby(['Trader'])['Buy/Sell'].agg([categorize, 'sum', 'count'])
    result = result.rename(columns={'categorize' : 'Buy/Sell'})
    

    While calling a custom function may be convenient, it is often significantly slower than the built-in aggregators (such as groupby/agg/mean): the built-ins are Cythonized, whereas a custom function is called once per group at plain-Python for-loop speed.

    The difference in speed is particularly significant when the number of groups is large. For example, with a 10000-row DataFrame and 1000 groups:

    import numpy as np
    import pandas as pd
    np.random.seed(2017)
    N = 10000
    df = pd.DataFrame({
        'Buy/Sell': np.random.randint(2, size=N),
        'Trader': np.random.randint(1000, size=N)})
    
    def using_select(df):
        grouped = df.groupby(['Trader'])
        result = grouped['Buy/Sell'].agg(['sum', 'count'])
        means = grouped['Buy/Sell'].mean()
        result['Buy/Sell'] = np.select(condlist=[means>0.5, means<0.5], choicelist=[1, 0], 
            default=np.nan)
        return result
    
    def categorize(x):
        m = x.mean()
        return 1 if m > 0.5 else 0 if m < 0.5 else np.nan
    
    def using_custom_function(df):
        result = df.groupby(['Trader'])['Buy/Sell'].agg([categorize, 'sum', 'count'])
        result = result.rename(columns={'categorize' : 'Buy/Sell'})
        return result
    

    using_select is over 50x faster than using_custom_function:

    In [69]: %timeit using_custom_function(df)
    10 loops, best of 3: 132 ms per loop
    
    In [70]: %timeit using_select(df)
    100 loops, best of 3: 2.46 ms per loop
    
    In [71]: 132/2.46
    Out[71]: 53.65853658536585
    