This command works with HiveQL:
insert overwrite directory '/data/home.csv' select * from testtable;
But with Spark SQL I'm getting an error.
The error message suggests that INSERT OVERWRITE DIRECTORY is not a supported feature in Spark SQL's query language. However, you can save a DataFrame in any text format through the RDD interface (df.rdd.saveAsTextFile), or use the spark-csv package: https://github.com/databricks/spark-csv.
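A minimal sketch of both routes, assuming a Spark 1.4+ shell with a HiveContext and the spark-csv package on the classpath; the table name `testtable` and the output path come from the question, and the comma-joining logic is only an illustration:

```scala
import org.apache.spark.sql.hive.HiveContext

val sqlContext = new HiveContext(sc) // sc: the shell's existing SparkContext
val df = sqlContext.sql("select * from testtable")

// Route 1: drop to the RDD interface and write plain text.
// Each Row is joined into one comma-separated line by hand,
// so quoting/escaping of field values is up to you.
df.rdd
  .map(_.mkString(","))
  .saveAsTextFile("/data/home.csv")

// Route 2: the spark-csv data source (e.g. start the shell with
// --packages com.databricks:spark-csv_2.10:1.5.0).
df.write
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .save("/data/home_csv")
```

Note that both routes write a directory of part files (one per partition), not a single CSV file; call `coalesce(1)` first if you really need one output file and the data fits on one executor.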