Pandas: filling missing values by mean in each group

耶瑟儿~ · 2020-11-22 06:06

This should be straightforward, but the closest thing I've found is this post: pandas: Filling missing values within a group, and I still can't solve my problem....

9 Answers
  •  没有蜡笔的小新
    2020-11-22 06:40

    @DSM has, IMO, the right answer, but I'd like to share my generalization and optimization of the question: multiple group-by columns and multiple value columns:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(
        {
            'category': ['X', 'X', 'X', 'X', 'X', 'X', 'Y', 'Y', 'Y'],
            'name': ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C'],
            'other_value': [10, np.nan, np.nan, 20, 30, 10, 30, np.nan, 30],
            'value': [1, np.nan, np.nan, 2, 3, 1, 3, np.nan, 3],
        }
    )
    

    ... gives ...

      category name  other_value value
    0        X    A         10.0   1.0
    1        X    A          NaN   NaN
    2        X    B          NaN   NaN
    3        X    B         20.0   2.0
    4        X    B         30.0   3.0
    5        X    B         10.0   1.0
    6        Y    C         30.0   3.0
    7        Y    C          NaN   NaN
    8        Y    C         30.0   3.0
    

    In this generalized case we would like to group by category and name, and impute only on value.

    This can be solved as follows:

    df['value'] = df.groupby(['category', 'name'])['value']\
        .transform(lambda x: x.fillna(x.mean()))
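
    With the example frame above, the group means are 1.0 for X/A, 2.0 for X/B and 3.0 for Y/C, so a quick check of the result (a sketch, using nothing beyond the data already shown) looks like this:

    # Each NaN is replaced by its group's mean: X/A -> 1.0, X/B -> 2.0, Y/C -> 3.0
    print(df['value'].tolist())
    # [1.0, 1.0, 2.0, 2.0, 3.0, 1.0, 3.0, 3.0, 3.0]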
    

    Notice the column list in the group-by clause, and that we select the value column right after the group-by. This makes the transform run only on that particular column. You could select the column at the end instead, but then you would run the transform on all columns only to throw away all but one measure column afterwards. A standard SQL query planner might have been able to optimize this away, but pandas (0.19.2) doesn't seem to.
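
    On a newer pandas, a usually faster variant (a sketch, not timed here) is to let the built-in 'mean' aggregation compute the group means instead of calling a Python lambda per group, and fill afterwards:

    # Broadcast each group's mean with the built-in 'mean' path, then fill the gaps
    df['value'] = df['value'].fillna(
        df.groupby(['category', 'name'])['value'].transform('mean')
    )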

    A performance test, after enlarging the dataset by doing ...

    # Replicate the 9-row frame 10,000 times to build a larger test set
    df = pd.concat([df] * 10000)
    

    ... confirms that the speed-up is roughly proportional to the number of columns you don't have to impute:

    import pandas as pd
    import numpy as np
    from datetime import datetime
    
    def generate_data():
        # Rebuild the small example frame and replicate it 10,000 times
        df = pd.DataFrame(
            {
                'category': ['X', 'X', 'X', 'X', 'X', 'X', 'Y', 'Y', 'Y'],
                'name': ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C'],
                'other_value': [10, np.nan, np.nan, 20, 30, 10, 30, np.nan, 30],
                'value': [1, np.nan, np.nan, 2, 3, 1, 3, np.nan, 3],
            }
        )
        return pd.concat([df] * 10000)
    
    # Variant 1: select the value column before the transform
    t = datetime.now()
    df = generate_data()
    df['value'] = df.groupby(['category', 'name'])['value']\
        .transform(lambda x: x.fillna(x.mean()))
    print(datetime.now() - t)
    
    # 0:00:00.016012
    
    # Variant 2: transform every value column, then keep only value
    t = datetime.now()
    df = generate_data()
    df['value'] = df.groupby(['category', 'name'])\
        .transform(lambda x: x.fillna(x.mean()))['value']
    print(datetime.now() - t)
    
    # 0:00:00.030022
    

    On a final note, you can generalize even further if you want to impute more than one column, but not all:

    # Note the double brackets: select the columns with a list, since
    # tuple selection after a groupby is deprecated in newer pandas
    df[['value', 'other_value']] = df.groupby(['category', 'name'])[['value', 'other_value']]\
        .transform(lambda x: x.fillna(x.mean()))
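
    As a quick sanity check (assuming, as in the example data, that every group has at least one observed value), no NaNs should remain in the imputed columns:

    # Every group mean is defined, so no NaN should survive the fill
    assert not df[['value', 'other_value']].isna().any().any()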
    
