Question
I am running an Ubuntu N-series instance on Azure to run a calculation. After the calculation I try to write to an Azure Blob Storage container using a wasb-style URL:
wasb://containername/path
I am trying to use this PySpark command:
sparkSession.write.save('wasb://containername/path', format='json', mode='append')
But I receive a Java IOException from Spark saying it doesn't support a wasb file system. Does anyone know how to write to a wasb address without using an HDInsight instance?
Answer 1:
I haven't done it with PySpark, but here is how I did it using Scala and Spark.
Add the hadoop-azure dependency in your build.sbt:
libraryDependencies += "org.apache.hadoop" % "hadoop-azure" % "2.7.3"
Then define the file system to be used in the underlying Hadoop configuration:
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("read azure storage").master("local[*]").getOrCreate()

// Register the WASB file system implementation and the storage account key
spark.sparkContext.hadoopConfiguration.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
spark.sparkContext.hadoopConfiguration.set("fs.azure.account.key.yourAccount.blob.core.windows.net", "yourKey")

// wasbs:// encrypts the transfer; plain wasb:// also works.
// The account name in the URL must match the key property set above.
val baseDir = "wasbs://BlobStorageContainer@yourAccount.blob.core.windows.net/"
Now write the DataFrame to the blob container:
// resultDF and outputPath come from your own job
resultDF.write.mode(SaveMode.Append).json(baseDir + outputPath)
Hope this is helpful; this was the working program.
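Since the question asks about PySpark: the same Hadoop settings should carry over to Python. Here is an untested sketch of the equivalent (yourAccount, yourKey, BlobStorageContainer, and result_df are placeholders; the hadoop-azure JAR must still be on the classpath, e.g. via --packages as above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("write azure storage").getOrCreate()

# Same settings as the Scala version, applied to the JVM-side Hadoop configuration
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
hconf.set("fs.azure.account.key.yourAccount.blob.core.windows.net", "yourKey")

base_dir = "wasbs://BlobStorageContainer@yourAccount.blob.core.windows.net/"

# result_df is whatever DataFrame the calculation produced
result_df.write.mode("append").json(base_dir + "output/path")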
Source: https://stackoverflow.com/questions/49410436/pyspark-write-to-wasb-blob-storage-container