I am trying to get the rows with null values from a PySpark dataframe. In pandas, I can achieve this using isnull() on the dataframe:
df = df[df.isnull().any(axis=1)]
How can I do the same in PySpark?
You can filter the rows with where, reduce and a generator expression over the columns. For example, given the following dataframe:
df = sc.parallelize([
(0.4, 0.3),
(None, 0.11),
(9.7, None),
(None, None)
]).toDF(["A", "B"])
df.show()
+----+----+
| A| B|
+----+----+
| 0.4| 0.3|
|null|0.11|
| 9.7|null|
|null|null|
+----+----+
Filtering the rows with at least one null value can then be achieved with:
import pyspark.sql.functions as f
from functools import reduce
df.where(reduce(lambda x, y: x | y, (f.col(x).isNull() for x in df.columns))).show()
Which gives:
+----+----+
| A| B|
+----+----+
|null|0.11|
| 9.7|null|
|null|null|
+----+----+
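For this two-column example the reduce call just builds the explicit condition below, so (assuming the df defined above) the same rows can be selected without reduce:
df.where(f.col("A").isNull() | f.col("B").isNull()).show()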
In the condition you specify how the per-column checks are combined: any column null (or, |), all columns null (and, &), etc.
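As a rough sketch of the "all" case, assuming the same df as above, combining the per-column checks with & instead of | keeps only the rows where every column is null:
import pyspark.sql.functions as f
from functools import reduce

# keep rows where every column is null (combine the checks with &)
df.where(reduce(lambda x, y: x & y, (f.col(c).isNull() for c in df.columns))).show()
which for this data should give only the last row:
+----+----+
|   A|   B|
+----+----+
|null|null|
+----+----+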