How to efficiently find the count of Null and NaN values for each column in a PySpark DataFrame?

柔情痞子 submitted on 2019-11-28 04:43:38

You can use the method shown here, replacing isNull with isnan:

from pyspark.sql.functions import isnan, when, count, col

df.select([count(when(isnan(c), c)).alias(c) for c in df.columns]).show()
+-------+----------+---+
|session|timestamp1|id2|
+-------+----------+---+
|      0|         0|  3|
+-------+----------+---+
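
For reference, the null-only version this answer adapts (isNull instead of isnan) would be:

# count only nulls per column, without the NaN check
df.select([count(when(col(c).isNull(), c)).alias(c) for c in df.columns]).show()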

or

df.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in df.columns]).show()
+-------+----------+---+
|session|timestamp1|id2|
+-------+----------+---+
|      0|         0|  5|
+-------+----------+---+
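
Depending on your schema, isnan may not be applicable to every column: it expects a float or double, and columns of other types (for example timestamps) can fail Spark's analysis check. A minimal dtype-aware sketch, assuming only the standard pyspark.sql API:

from pyspark.sql.functions import col, count, isnan, when
from pyspark.sql.types import DoubleType, FloatType

# apply the NaN check only to float/double columns, the null check everywhere
exprs = [
    count(when(isnan(f.name) | col(f.name).isNull(), f.name)).alias(f.name)
    if isinstance(f.dataType, (DoubleType, FloatType))
    else count(when(col(f.name).isNull(), f.name)).alias(f.name)
    for f in df.schema.fields
]
df.select(exprs).show()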

You can create a UDF to check for both null and NaN and return a boolean value to filter on.

The code below is Scala; hopefully you can convert it to Python.

import org.apache.spark.sql.functions.udf

// Use java.lang.Float rather than the primitive Float, which can never be null,
// so the UDF detects both null and NaN values
val isNaN = udf((value: java.lang.Float) => value == null || value.isNaN)

val result = data.filter(isNaN(data("column2"))).count()
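
A rough Python translation of that UDF might look like the sketch below; data and column2 are placeholder names carried over from the Scala snippet.

import math
from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

# returns True for null or NaN values
is_nan_or_null = udf(
    lambda value: value is None or (isinstance(value, float) and math.isnan(value)),
    BooleanType(),
)

result = data.filter(is_nan_or_null(data["column2"])).count()

In general, the built-in isnan/isNull expressions shown earlier should be preferred over a Python UDF, since a UDF forces each value to be serialized out of the JVM.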

Hope this helps!
