This command works with HiveQL:
insert overwrite directory '/data/home.csv' select * from testtable;
But with Spark SQL I'm getting an error.
With the help of the spark-csv package we can write the result of a query to a CSV file:

val dfsql = sqlContext.sql("select * from tablename")
dfsql.write.format("com.databricks.spark.csv").option("header", "true").save("output.csv")
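Since Spark 2.x, CSV support is built into the DataFrameWriter, so the external spark-csv package is no longer required. A minimal sketch, assuming an existing SparkSession named `spark`, a registered table `tablename`, and a placeholder output path:

```scala
// Built-in CSV writer (Spark 2.x+); no com.databricks.spark.csv dependency needed.
// `spark`, `tablename`, and the output path are assumptions for illustration.
val df = spark.sql("select * from tablename")
df.write
  .option("header", "true")
  .csv("/data/home_csv")  // writes a directory of part files, not a single file
```

Note that `.csv(path)` produces a directory containing part files; if you need a single CSV file, call `.coalesce(1)` on the DataFrame before writing, at the cost of funnelling all data through one task.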