Question
I want to save a dataframe to S3, but when I do, it also creates an empty ${folder_name} marker object alongside the folder in which I want to save the file.
Syntax used to save the dataframe:
df.write.parquet("s3n://bucket-name/shri/test")
It saves the data in the test folder, but it also creates an empty $test marker under shri.
Is there a way I can save it without creating that extra folder?
Answer 1:
I was able to do it by using the code below.
df.write.parquet("s3a://bucket-name/shri/test.parquet", mode="overwrite")
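
For context, here is a minimal sketch of that write with its surrounding setup. The bucket name, credential values, and sample data are placeholders, and it assumes the hadoop-aws package (which provides the s3a connector) is on the classpath; the older s3n connector is what produces the empty ${folder_name} marker objects.

from pyspark.sql import SparkSession

# Placeholders: supply your own credentials, or rely on the default
# AWS credential chain and omit these two config lines.
spark = (
    SparkSession.builder
    .appName("parquet-to-s3")
    .config("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# Writing through s3a avoids the s3n marker-file behavior.
df.write.parquet("s3a://bucket-name/shri/test.parquet", mode="overwrite")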
Answer 2:
As far as I know, there is no way to control the naming of the actual parquet files. When you write a dataframe to parquet, you specify what the directory name should be, and Spark creates the appropriate part files under that directory.
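
If a single file with an exact name is required, a common workaround (not from this answer itself) is to write to a temporary directory and then rename the lone part file through the Hadoop FileSystem API exposed via py4j. A hedged sketch, where tmp_dir and final_path are hypothetical names and spark/df come from the snippet above:

tmp_dir = "s3a://bucket-name/shri/_tmp_test"
final_path = "s3a://bucket-name/shri/test.parquet"

# coalesce(1) forces a single part file, at the cost of parallelism.
df.coalesce(1).write.parquet(tmp_dir, mode="overwrite")

hadoop = spark.sparkContext._jvm.org.apache.hadoop
conf = spark.sparkContext._jsc.hadoopConfiguration()
uri = spark.sparkContext._jvm.java.net.URI(tmp_dir)
fs = hadoop.fs.FileSystem.get(uri, conf)

# Locate the part file Spark wrote, then move it to the final key
# and remove the temporary directory.
src = None
for status in fs.listStatus(hadoop.fs.Path(tmp_dir)):
    if status.getPath().getName().startswith("part-"):
        src = status.getPath()
fs.rename(src, hadoop.fs.Path(final_path))
fs.delete(hadoop.fs.Path(tmp_dir), True)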
Source: https://stackoverflow.com/questions/45869510/pyspark-save-dataframe-to-s3