pyspark write to wasb blob storage container


Question


I am running a calculation on Azure using an N-series Ubuntu instance. After the calculation I try to write to an Azure blob container using a wasb:// style URL:

wasb://containername/path

I am trying to use the pyspark command

df.write.save('wasb://containername/path', format='json', mode='append')

But I receive a Java IO exception from Spark saying it doesn't support the wasb file system. Does anyone know how to write to a wasb address without using an HDInsight instance?


Answer 1:


I haven't done it with PySpark, but here is how I did it using Scala and Spark.

Add the hadoop-azure dependency in sbt:

"org.apache.hadoop" % "hadoop-azure" % "2.7.3"

Then register the wasb file system implementation and your storage account key in the underlying Hadoop configuration:

import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("read azure storage").master("local[*]").getOrCreate()

spark.sparkContext.hadoopConfiguration.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
spark.sparkContext.hadoopConfiguration.set("fs.azure.account.key.yourAccount.blob.core.windows.net", "yourKey")

// Use wasbs:// instead of wasb:// if you want SSL
val baseDir = "wasb://BlobStorageContainer@yourAccount.blob.core.windows.net/"

Now write the dataframe to the blob container:

resultDF.write.mode(SaveMode.Append).json(baseDir + outputPath)
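
Since the question asked about PySpark, here is a minimal sketch of the same flow in Python, continuing from the session built in the earlier PySpark snippet. The account name, key, container, and resultDF are placeholders, and it reaches the JVM-side Hadoop configuration through the private _jsc handle, which is a common PySpark workaround rather than a public API:

# Set the same Hadoop properties from Python via the JVM gateway.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
hadoop_conf.set("fs.azure.account.key.yourAccount.blob.core.windows.net", "yourKey")

# Placeholder container and account; use wasbs:// for SSL.
base_dir = "wasb://BlobStorageContainer@yourAccount.blob.core.windows.net/"

# resultDF stands in for whatever DataFrame your calculation produced.
resultDF.write.mode("append").json(base_dir + "outputPath")

Note that the container@account.blob.core.windows.net form of the URL matters: a bare wasb://containername/path, as in the question, will typically fail even once hadoop-azure is on the classpath.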

Hope this is helpful; this was the working program.



Source: https://stackoverflow.com/questions/49410436/pyspark-write-to-wasb-blob-storage-container
