Spark: write a CSV with null values as empty columns

Submitted by 被刻印的时光 ゝ on 2020-01-25 09:48:11

Question


I'm using PySpark to write a dataframe to a CSV file like this:

df.write.csv(PATH, nullValue='')

There is a column in that dataframe of type string. Some of the values are null. These null values display like this:

...,"",...

I would like them to be displayed like this instead:

...,,...
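For reference, a minimal sketch that reproduces this (assuming Spark 2.4+; the path and column names below are just placeholders):

from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.getOrCreate()

# A small dataframe with a nullable string column
df = spark.createDataFrame([
    Row(name='Alice', city=None),
    Row(name='Bob', city='Paris'),
])

# The null cell shows up in the CSV as a quoted empty string ("") rather than as nothing
df.write.csv('/tmp/out', header=True, nullValue='')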

Is this possible with an option in df.write.csv()?

Thanks!


Answer 1:


This is easily done by setting the emptyValue option:

emptyValue: sets the string representation of an empty value. If None is set, it uses the default value, "".

from pyspark.sql import Row, SparkSession

# Create (or reuse) a SparkSession rather than importing the shell-provided one
spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([
    Row(col_1=None, col_2='20151231', col_3='Hello'),
    Row(col_1=2, col_2='20160101', col_3=None),
    Row(col_1=3, col_2=None, col_3='World')
])

df.write.csv(PATH, header=True, emptyValue='')

Output

col_1,col_2,col_3
,20151231,Hello
2,20160101,
3,,World
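
The same thing can also be written with DataFrameWriter.option, which is equivalent (a sketch; PATH is whatever output path you are using):

(df.write
    .option('header', 'true')
    .option('emptyValue', '')
    .csv(PATH))

With emptyValue set to the empty string, the null cells above are written as nothing between the delimiters instead of the quoted "".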


Source: https://stackoverflow.com/questions/57726576/spark-write-a-csv-with-null-values-as-empty-columns
