Sparklyr on RStudio EC2 with invoke error hadoopConfiguration standalone cluster


Question


So I have a standalone cluster on EC2 with one master and two slaves. I am running RStudio on EC2, and I run the following code:

library(aws.s3)
library(sparklyr)
library(tidyverse)
library(RCurl)

Sys.setenv("AWS_ACCESS_KEY_ID" =  "myaccesskeyid",
           "AWS_SECRET_ACCESS_KEY" = "myaccesskey",
           "SPARK_CONF_DIR" = "/home/rstudio/spark/spark-2.1.0-bin-hadoop2.7/bin/",
           "JAVA_HOME" = "/usr/lib/jvm/java-8-oracle" )

# sc is assumed to be an existing Spark connection created earlier with spark_connect() (not shown)
ctx <- spark_context(sc)
jsc <- invoke_static(sc, 
                    "org.apache.spark.api.java.JavaSparkContext", 
                    "fromSparkContext", ctx)

hconf <- jsc %>% invoke("hadoopConfiguration")

The last line is where I encounter an error:

Error in do.call(.f, args, envir = .env) : 
  'what' must be a function or character string

From my research, I know that invoke is how sparklyr handles Java objects. I have also checked and confirmed that Java is installed on the master and both slaves and that JAVA_HOME is set.
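One way to check which invoke() a bare call actually resolves to is to inspect the search path (a quick diagnostic sketch, independent of the cluster setup):

# List the attached packages that provide an object named "invoke",
# in search-path order; the first entry is the one %>% invoke(...) will use
find("invoke")

# Inspect the function itself to see which namespace it belongs to
environment(invoke)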


Answer 1:


When you call library(tidyverse) you get the conflicts summary, which explains what is going on:

 ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ purrr::invoke() masks sparklyr::invoke()
✖ dplyr::lag()    masks stats::lag()

As you can see, purrr::invoke, which has exactly the signature shown in the error message:

invoke(.f, .x = NULL, ..., .env = NULL)

masks sparklyr::invoke. Using the fully qualified name

jsc %>% sparklyr::invoke("hadoopConfiguration")

should resolve the problem.
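If you would rather not qualify every call, two other options are to pin the binding with the conflicted package (a sketch, assuming conflicted is installed) or simply to attach sparklyr after tidyverse, so that sparklyr::invoke() masks purrr::invoke() instead of the other way around:

# Option 1: declare a session-wide preference with the conflicted package
library(conflicted)
conflict_prefer("invoke", winner = "sparklyr")

# Option 2: attach sparklyr last, so its invoke() sits earlier on the search path
library(tidyverse)
library(sparklyr)

hconf <- jsc %>% invoke("hadoopConfiguration")  # now resolves to sparklyr::invoke()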



Source: https://stackoverflow.com/questions/51385057/sparklyr-on-rstudio-ec2-with-invoke-error-hadoopconfiguration-standalone-cluster
