Question
I have a SQL table on Databricks, created using the following code:
%sql
CREATE TABLE data
USING CSV
OPTIONS (header "true", inferSchema "true")
LOCATION "url/data.csv"
The following code converts that table into a SparkR DataFrame and an R data.frame, respectively:
%r
library(SparkR)
data_spark <- sql("SELECT * FROM data")
data_r_df <- as.data.frame(data_spark)
But I don't know how to convert any of these dataframes into a sparklyr DataFrame so that I can leverage sparklyr's parallelization.
Answer 1:
Just
sc <- spark_connect(...)
data_spark <- dplyr::tbl(sc, "data")
or
sc %>% spark_session() %>% invoke("sql", "SELECT * FROM data") %>% sdf_register()
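Putting the two approaches together, a minimal end-to-end sketch might look like the following. It assumes you are running inside a Databricks R notebook, where sparklyr can attach to the cluster's existing Spark session via `spark_connect(method = "databricks")`; on other setups you would pass a master URL instead.

```r
library(sparklyr)
library(dplyr)

# Attach to the Databricks cluster's Spark session
# (assumption: running in a Databricks notebook).
sc <- spark_connect(method = "databricks")

# Option 1: reference the existing SQL table as a lazy sparklyr tbl.
data_tbl <- tbl(sc, "data")

# Option 2: run arbitrary SQL through the Spark session and
# register the result as a sparklyr DataFrame.
data_tbl2 <- sc %>%
  spark_session() %>%
  invoke("sql", "SELECT * FROM data") %>%
  sdf_register("data_registered")

# Either tbl now works with dplyr verbs, executed inside Spark:
data_tbl %>% count()
```

Both options keep the data in Spark; nothing is collected to the R driver until you call `collect()`, which is what lets sparklyr parallelize the work.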
Source: https://stackoverflow.com/questions/51504713/sql-sparklyr-sparkr-dataframe-conversions-on-databricks