For example, the result of this:

df.filter("project = 'en'").select("title", "count").groupBy("title").sum()

is still a DataFrame; it only becomes an Array (of Row objects on the driver) once you call an action such as collect() on it.
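A minimal sketch of that distinction, assuming df has project, title and a numeric count column (the names used above):

// Still a DataFrame; nothing is computed yet
val grouped = df.filter("project = 'en'")
  .select("title", "count")
  .groupBy("title")
  .sum()

grouped.printSchema()   // columns: title, sum(count)

// Only an action such as collect() materialises the rows on the driver
val rows: Array[org.apache.spark.sql.Row] = grouped.collect()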
Writing a DataFrame to disk as CSV is similar to reading one. If you want the result in a single file, you can use coalesce(1), which collapses the output to one partition and therefore one part file (fine for small results, but it forces all the data through a single task).
df.coalesce(1)              // single partition => single part file
  .write
  .option("header", "true") // write column names as the first line
  .option("sep", ",")       // field separator
  .mode("overwrite")        // replace any existing output at this path
  .csv("output/path")
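For comparison, reading that output back uses the mirror-image options on spark.read (a sketch, assuming a SparkSession named spark and the same path):

val reloaded = spark.read
  .option("header", "true") // first line holds the column names
  .option("sep", ",")
  .csv("output/path")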
If your result is an Array (for example, after calling collect()), use the language's own I/O facilities rather than the Spark DataFrame API, because collected results live on the driver machine, not in the cluster.
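For instance, a plain-Scala sketch, assuming the rows array collected in the earlier example and a hypothetical local path; ordinary java.io is enough because the data already sits on the driver:

import java.io.PrintWriter

val out = new PrintWriter("local/output.csv")   // hypothetical local path on the driver
out.println("title,sum_count")                  // hypothetical header line
rows.foreach(row => out.println(s"${row.get(0)},${row.get(1)}"))
out.close()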