I'm just getting started using Apache Spark (in Scala, but the language is irrelevant). I'm using standalone mode and I'll want to process a text file f
The proper way is to use three slashes: two for the URI syntax (just like http://) and one for the root of the Linux file system, e.g., sc.textFile("file:///home/worker/data/my_file.txt"). If you are running in local mode, a plain file path is sufficient. In the case of a standalone cluster, the file must be copied to each node at the same path. Note that the contents of the file must be exactly the same on every node, otherwise Spark returns inconsistent results.
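A minimal sketch of where the three slashes come from: two belong to the URI scheme separator and one is the leading slash of the absolute path. You can build the URI from a local path with java.nio instead of concatenating strings by hand (the sc.textFile call is the Spark usage from above and assumes a running SparkContext):

```scala
import java.nio.file.Paths

object FileUriExample {
  def main(args: Array[String]): Unit = {
    // An absolute Linux path; its leading "/" supplies the third slash.
    val path = Paths.get("/home/worker/data/my_file.txt")

    // toUri prepends the "file://" scheme, yielding file:///home/worker/data/my_file.txt
    val uri = path.toUri.toString
    println(uri)

    // In Spark (assuming an existing SparkContext named sc) you would then read it with:
    // val lines = sc.textFile(uri)
  }
}
```

Building the URI this way avoids the common mistake of writing only two slashes (file://home/...), which Spark would interpret as a path with the host "home".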