I'm just getting started using Apache Spark (in Scala, but the language is irrelevant). I'm using standalone mode and I want to process a text file from the local filesystem.
The proper way is with three slashes: two for the URI syntax (just like http://) and one for the mount point of the Linux file system, e.g., `sc.textFile("file:///home/worker/data/my_file.txt")`. If you are using local mode, then `file:` alone is sufficient. In the case of a standalone cluster, the file must be copied to each node. Note that the contents of the file must be exactly the same on every node; otherwise Spark may return inconsistent results, since different tasks read different copies.
Each node should contain the whole file. In that case the local file system is logically indistinguishable from HDFS, with respect to this file.
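A minimal, self-contained sketch of this, assuming standalone mode and that /home/worker/data/my_file.txt (the path from the example above) is present at the same path on every node; the app name is a placeholder:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object LocalFileRead {
  def main(args: Array[String]): Unit = {
    // Master URL is supplied by spark-submit; app name is a placeholder.
    val conf = new SparkConf().setAppName("LocalFileRead")
    val sc   = new SparkContext(conf)

    // Three slashes: "file://" for the URI scheme plus "/" starting the
    // absolute path. The file must exist on every worker node.
    val lines = sc.textFile("file:///home/worker/data/my_file.txt")
    println(s"line count: ${lines.count()}")

    sc.stop()
  }
}
```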
Add "file:///" uri in place of "file://". This solved the issue for me.
From Spark's FAQ page, on running without Hadoop/HDFS: "if you run on a cluster, you will need some form of shared file system (for example, NFS mounted at the same path on each node). If you have this type of filesystem, you can just deploy Spark in standalone mode."
https://spark.apache.org/faq.html
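A minimal sketch under that setup, assuming a hypothetical NFS mount at /mnt/shared present at the same path on every node:

```scala
// The NFS mount makes the same path valid on every node, so the
// standalone cluster can read it without HDFS. Path is hypothetical.
val lines = sc.textFile("file:///mnt/shared/data/my_file.txt")
println(lines.count())
```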
Prepend file:// to your local file path.
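Since an absolute path already starts with a slash, prepending file:// produces the three-slash form. A sketch with a hypothetical path:

```scala
val path = "/home/user/data/my_file.txt"  // hypothetical absolute path
val rdd  = sc.textFile("file://" + path)  // resolves to file:///home/user/...
```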
Spark 1.6.1, Java 1.7.0_99, 3 nodes in the cluster (HDP).

Case 1: running in local mode (`local[n]`):
`file:///..` and `file:/..` read the file from the local system.

Case 2: running with `--master yarn-cluster`:
"Input path does not exist" for `file:/..` and `file:///..`,
and for `file://..`:
`java.lang.IllegalArgumentException: Wrong FS: file://.. expected: file:///`
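A sketch of the Case 1 behaviour, assuming local mode and a hypothetical file /tmp/data.txt:

```scala
// In local mode, both URI forms below read from the local filesystem.
// /tmp/data.txt is a hypothetical path for illustration.
val a = sc.textFile("file:///tmp/data.txt") // scheme + empty authority + path
val b = sc.textFile("file:/tmp/data.txt")   // scheme + path, also accepted
println(a.count() == b.count())             // both point at the same file
```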