My DataFrame has columns that contain both null and NaN values, for example:
df = spark.createDataFrame([(1, float('nan')), (None, 1.0)], ("a", "b"))
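For reference, df.show() prints something like the following (output from a Spark 2.x session; newer versions may render null as NULL), which makes the two distinct missing values visible:

+----+---+
|   a|  b|
+----+---+
|   1|NaN|
|null|1.0|
+----+---+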
You can deal with it in pandas: pandas.notnull flags every non-missing value, and DataFrame.where keeps the values where the condition is True and substitutes the second argument everywhere else. Note that this runs on a pandas DataFrame, so convert first (e.g. with df.toPandas()) if you are starting from a Spark DataFrame:

import pandas as pd

df = df.where(pd.notnull(df), None)

This will convert any NaN value into None (null).
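If you want to stay in Spark rather than round-trip through pandas, here is a minimal sketch of an equivalent approach (my own, not from the original answer) using pyspark.sql.functions, assuming your DataFrame is named df:

from pyspark.sql import functions as F

# Replace NaN with null in every float/double column. isnan() is only
# defined for floating-point types, so other columns are left untouched.
float_cols = [f.name for f in df.schema.fields
              if f.dataType.typeName() in ("float", "double")]
for c in float_cols:
    df = df.withColumn(
        c, F.when(F.isnan(F.col(c)), F.lit(None)).otherwise(F.col(c)))

After this, df contains null wherever a NaN used to be, and everything stays distributed in Spark.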