Read CSV file from Azure Blob storage in Rstudio Server with spark_read_csv()

Submitted by ☆樱花仙子☆ on 2020-04-30 12:28:48

Question


I have provisioned an Azure HDInsight cluster type ML Services (R Server), operating system Linux, version ML Services 9.3 on Spark 2.2 with Java 8 HDI 3.6.

Within Rstudio Server I am trying to read in a csv file from my blob storage.

Sys.setenv(SPARK_HOME="/usr/hdp/current/spark-client")
Sys.setenv(YARN_CONF_DIR="/etc/hadoop/conf")
Sys.setenv(HADOOP_CONF_DIR="/etc/hadoop/conf")
Sys.setenv(SPARK_CONF_DIR="/etc/spark/conf")

options(rsparkling.sparklingwater.version = "2.2.28")

library(sparklyr)
library(dplyr)
library(h2o)
library(rsparkling)


sc <- spark_connect(master = "yarn-client",
                    version = "2.2.0")

origins <- file.path("wasb://MYDefaultContainer@MyStorageAccount.blob.core.windows.net",
                 "user/RevoShare")

df2 <- spark_read_csv(sc,
                 path = origins,
                 name = 'Nov-MD-Dan',
                 memory = FALSE)

When I run this I get the following error

Error: java.lang.IllegalArgumentException: invalid method csv for object 235
    at sparklyr.Invoke$.invoke(invoke.scala:122)
    at sparklyr.StreamHandler$.handleMethodCall(stream.scala:97)
    at sparklyr.StreamHandler$.read(stream.scala:62)
    at sparklyr.BackendHandler.channelRead0(handler.scala:52)
    at sparklyr.BackendHandler.channelRead0(handler.scala:14)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:748)

Any help would be awesome!


Answer 1:


The path origins should point to a single CSV file or to a directory of CSV files. Are you sure that origins points to a file, or to a directory that contains files? There is typically at least one more directory under /user/RevoShare/ for each HDFS user, e.g. /user/RevoShare/sshuser/, so a path that stops at user/RevoShare is unlikely to contain any CSVs directly.

Here's an example that may help:

sample_file <- file.path("/example/data/", "yellowthings.txt")

library(sparklyr)
library(dplyr)
cc <- rxSparkConnect(interop = "sparklyr")
sc <- rxGetSparklyrConnection(cc)

fruits <- spark_read_csv(sc, path = sample_file, name = "fruits", header = FALSE)
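
If the read succeeds, a quick sanity check on the resulting Spark DataFrame might look like this (a minimal sketch using standard sparklyr / dplyr verbs):

# Preview the first few rows locally and count the records to confirm the CSV was parsed.
fruits %>% head() %>% collect()
sdf_nrow(fruits)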

You can use rxHadoopListFiles("/example/data/") from R, or hdfs dfs -ls /example/data from a shell, to inspect your directories on HDFS / Blob storage.
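
Applied to the question above, a sketch could look like the following; the sshuser directory and the Nov-MD-Dan.csv file name are placeholders, so substitute whatever the directory listing actually shows:

# List what actually sits under the RevoShare directory on the Blob-backed file system.
rxHadoopListFiles("/user/RevoShare/")

# Point spark_read_csv() at a concrete CSV file (or at a directory that contains only CSVs).
# "sshuser" and "Nov-MD-Dan.csv" are placeholders; replace them with what the listing shows.
origins <- file.path("wasb://MYDefaultContainer@MyStorageAccount.blob.core.windows.net",
                     "user/RevoShare/sshuser/Nov-MD-Dan.csv")

df2 <- spark_read_csv(sc,
                      path = origins,
                      name = "nov_md_dan",
                      header = TRUE,
                      memory = FALSE)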

HTH!



Source: https://stackoverflow.com/questions/53275195/read-csv-file-from-azure-blob-storage-in-rstudio-server-with-spark-read-csv
