I'm trying to filter a PySpark DataFrame that has None as a row value:
df.select('dt_mvmt').distinct().collect()
[Row(dt_mvmt=u'2016-03-27'), ..., Row(dt_mvmt=None), ...]
PySpark provides filtering based on arithmetic, logical, and other conditions. NULL values can break downstream processing, so you typically either remove them or statistically impute them. To remove them, the following code can be used:
# Dataset is df
# Column name is dt_mvmt

# Before filtering, record the row count of the dataset
df.count()  # some number

# Keep only the rows where dt_mvmt is not NULL
df = df.filter(df.dt_mvmt.isNotNull())

# Check the count again: it should drop if NULL values were present
# (a useful sanity check when dealing with a large dataset)
df.count()
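For completeness, here is a minimal sketch of equivalent alternatives, assuming the same df and dt_mvmt names; the isNull/isNotNull column methods, the SQL expression form, and the na functions are all standard PySpark, while the '1970-01-01' fill value is only a placeholder:

from pyspark.sql.functions import col

# Equivalent ways to drop the NULL rows:
df_not_null = df.filter(col('dt_mvmt').isNotNull())  # Column API
df_not_null = df.filter("dt_mvmt IS NOT NULL")       # SQL expression string
df_not_null = df.na.drop(subset=['dt_mvmt'])         # DataFrameNaFunctions

# To select only the NULL rows instead:
df_null = df.filter(col('dt_mvmt').isNull())

# If imputing rather than dropping, na.fill replaces NULLs with a constant
# (the fill value here is a placeholder, not a recommendation):
df_filled = df.na.fill({'dt_mvmt': '1970-01-01'})

Note that a comparison like df.dt_mvmt == None does not work here: in Spark SQL, comparing anything with NULL evaluates to NULL, so such a filter matches no rows. Use isNull()/isNotNull() (or the IS NULL / IS NOT NULL SQL forms) instead.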