How to find the count of Null and NaN values for each column in a PySpark DataFrame efficiently?

广开言路 2020-11-28 21:35
import numpy as np

df = spark.createDataFrame(
    [(1, 1, None), (1, 2, float(5)), (1, 3, np.nan), (1, 4, None),
     (1, 5, float(10)), (1, 6, float('nan')), (1, 6, float('nan'))],
    ('session', 'timestamp1', 'id2'))
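
The kind of result the question asks for can be produced with one conditional count per column. A minimal sketch (it assumes every column is numeric, since F.isnan raises an error on string, date and timestamp types, and it uses the column names assumed above):

from pyspark.sql import functions as F

# One conditional count per column: count rows where the value is NULL or NaN.
df.select(
    [F.count(F.when(F.isnan(c) | F.col(c).isNull(), c)).alias(c) for c in df.columns]
).show()

# With the data above, the result is a single row of counts, e.g.:
# +-------+----------+---+
# |session|timestamp1|id2|
# +-------+----------+---+
# |      0|         0|  5|
# +-------+----------+---+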


        
5 Answers
  •  忘掉有多难
    2020-11-28 22:04

    To make sure it does not fail on string, date, and timestamp columns (F.isnan is only defined for numeric types):

    import pyspark.sql.functions as F

    def count_missings(spark_df, sort=True):
        """
        Counts the number of nulls and NaNs in each column.
        """
        # F.isnan is only defined for numeric columns, so skip string, date and timestamp.
        df = spark_df.select([
            F.count(F.when(F.isnan(c) | F.isnull(c), c)).alias(c)
            for (c, c_type) in spark_df.dtypes
            if c_type not in ('timestamp', 'string', 'date')
        ]).toPandas()

        if len(df) == 0:
            print("There are no missing values!")
            return None

        if sort:
            return df.rename(index={0: 'count'}).T.sort_values("count", ascending=False)

        return df
    

    If you want to see the columns sorted by the number of NaNs and nulls, in descending order:

    count_missings(spark_df)
    
    # | Col_A | 10 |
    # | Col_C | 2  |
    # | Col_B | 1  | 
    

    If you don't want ordering and want to see them as a single row:

    count_missings(spark_df, False)
    # | Col_A | Col_B | Col_C |
    # |  10   |   1   |   2   |
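
    count_missings skips string, date and timestamp columns entirely, because F.isnan is not defined for them. If you also want plain null counts for those columns, a possible companion sketch (the helper name count_nulls_non_numeric is just for illustration):

    import pyspark.sql.functions as F

    def count_nulls_non_numeric(spark_df):
        """Counts nulls (NaN does not apply) in string, date and timestamp columns."""
        cols = [c for (c, c_type) in spark_df.dtypes
                if c_type in ('timestamp', 'string', 'date')]
        if not cols:
            print("There are no string, date or timestamp columns!")
            return None
        return spark_df.select(
            [F.count(F.when(F.isnull(c), c)).alias(c) for c in cols]
        ).toPandas()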
    
