I am using Spark 1.4 for my research and struggling with the memory settings. My machine has 16 GB of memory, so that should not be a problem, since my file is only 300 MB. Alth
When launching the job with spark-submit or starting the spark-shell, you can pass
--conf spark.driver.maxResultSize="0"
to remove the bottleneck. This setting caps the total size of serialized results collected back to the driver (the default is 1g), and setting it to 0 means unlimited.
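
If you would rather set it in code instead of on the command line, here is a minimal sketch, assuming PySpark; the application name is just a placeholder:

from pyspark import SparkConf, SparkContext

# Sketch assuming PySpark; "research-job" is a hypothetical app name.
conf = (SparkConf()
        .setAppName("research-job")
        .set("spark.driver.maxResultSize", "0"))  # 0 = no limit on results collected to the driver

sc = SparkContext(conf=conf)

Keep in mind this only lifts the limit on collected result size; the driver still needs enough heap (spark.driver.memory) to actually hold whatever you collect().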