I have an RDD that was generated using Spark. If I want to write this RDD out as a CSV file, I am provided with methods like saveAsTextFile(), which outputs the CSV file to the HDFS. Is there a way to save it to my local file system instead?
saveAsTextFile() is able to take local file system paths (e.g. file:///tmp/magic/...). However, if you're running on a distributed cluster, each task writes its partition to the local disk of whichever node it runs on, so you most likely want to collect() the data back to the driver and then save it with standard file operations.
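A minimal sketch of both options, assuming a small RDD of (name, count) pairs standing in for your data and the /tmp/magic output paths used above (both are placeholders, not anything from your job):

```scala
import java.io.PrintWriter
import org.apache.spark.{SparkConf, SparkContext}

object RddToCsv {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("rdd-to-csv").setMaster("local[*]"))

    // Hypothetical RDD of (name, count) pairs standing in for your data.
    val rdd = sc.parallelize(Seq(("alice", 3), ("bob", 5)))

    // Format each record as a CSV line before saving.
    val csvLines = rdd.map { case (name, count) => s"$name,$count" }

    // Option 1: saveAsTextFile writes one part-* file per partition under
    // the given directory; the file:/// scheme targets the local file system
    // of whichever node each task runs on.
    csvLines.saveAsTextFile("file:///tmp/magic/rdd-output")

    // Option 2: collect() pulls all records to the driver, then write a
    // single CSV file with ordinary file I/O (only sensible for small data).
    val writer = new PrintWriter("/tmp/magic/single-file.csv")
    try csvLines.collect().foreach(writer.println)
    finally writer.close()

    sc.stop()
  }
}
```

Option 1 gives you a directory of part files rather than a single .csv; option 2 gives you one file, but everything has to fit in the driver's memory.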