Empty output when reading a CSV file into RStudio using SparkR

Pre-built Spark distributions are still built with Scala 2.10, not 2.11. So, if you use such a distribution (which I think you do), you also need a spark-csv build for Scala 2.10, not the Scala 2.11 build you use in your code.
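
If you are not sure which Scala version your distribution was built with, a quick sanity check (my suggestion, not part of the original answer) is the spark-shell startup banner, which prints a line like "Using Scala version 2.10.4":

 # Suggested check: spark-shell's startup banner shows the Scala version
 # the distribution was built with; exit the shell with :quit when done.
 # The path assumes the same Spark home used in the code below.
 system("/usr/local/bin/spark-1.5.1-bin-hadoop2.6/bin/spark-shell")

With the matching _2.10 package coordinate, the following code should then work fine: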

 library(rJava)
 library(SparkR)
 library(nycflights13)

 df <- flights[1:4, 1:4]
 df
   year month day dep_time
 1 2013     1   1      517
 2 2013     1   1      533
 3 2013     1   1      542
 4 2013     1   1      544

 write.csv(df, file="~/scripts/temp.csv", quote=FALSE, row.names=FALSE)  # "~" expands to /home/vagrant here

 sc <- sparkR.init(sparkHome= "/usr/local/bin/spark-1.5.1-bin-hadoop2.6/", 
                   master="local",
                   sparkPackages="com.databricks:spark-csv_2.10:1.2.0")  # 2.10 here
 sqlContext <- sparkRSQL.init(sc)
 df_spark <- read.df(sqlContext, "/home/vagrant/scripts/temp.csv", "com.databricks.spark.csv", header="true")
 head(df_spark)
   year month day dep_time
 1 2013     1   1      517
 2 2013     1   1      533
 3 2013     1   1      542
 4 2013     1   1      544
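
As a side note (my addition, not from the original answer): spark-csv reads every column as string by default. If you want typed columns, recent spark-csv releases also support an inferSchema option, which read.df passes through to the data source:

 # Optional: let spark-csv infer column types instead of reading everything
 # as strings (inferSchema is a spark-csv option, passed via read.df's ...)
 df_spark <- read.df(sqlContext, "/home/vagrant/scripts/temp.csv",
                     "com.databricks.spark.csv",
                     header="true", inferSchema="true")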