I'm using Apache Spark 1.0.1. I have many files delimited with UTF-8 \u0001 rather than the usual newline \n. How can I read such files in Spark?
If you are using the SparkContext, setting the Hadoop record delimiter helped me:

sc.hadoopConfiguration.set("textinputformat.record.delimiter", "\u0001")
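In case it helps, here's a minimal sketch of how I'd wire that into a small Scala job. The HDFS path is just a placeholder, and whether plain sc.textFile honours the property depends on the Hadoop line reader bundled with your Spark build (Hadoop 2.x generally does; older builds may need newAPIHadoopFile instead).

import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: read records separated by \u0001 instead of newlines.
// The input path below is hypothetical.
object CustomDelimiterExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("CustomDelimiterExample")
    val sc = new SparkContext(conf)

    // Tell the Hadoop TextInputFormat to split records on \u0001.
    sc.hadoopConfiguration.set("textinputformat.record.delimiter", "\u0001")

    // Each element of the RDD is now one \u0001-terminated record.
    val records = sc.textFile("hdfs:///data/input.txt")
    records.take(5).foreach(println)

    sc.stop()
  }
}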