Reading a file in HDFS from PySpark

太阳男子 2021-02-02 01:40

I'm trying to read a file in my HDFS. Here's a listing of my Hadoop file structure.

hduser@GVM:/usr/local/spark/bin$ hadoop fs -ls -R /
drwxr-xr-x   - hduser s
4 Answers
  •  长情又很酷
    2021-02-02 02:06

    There are two general ways to read files in Spark: one for huge distributed files that you process in parallel, and one for small files such as lookup tables and configuration files on HDFS. For the latter, you may want to read the file on the driver node or on the workers as a single read (not a distributed read). In that case, use the SparkFiles module as shown below.

    # spark is an existing SparkSession instance
    import json

    from pyspark import SparkFiles

    # Copy the HDFS file to a local path on the driver and on every executor
    spark.sparkContext.addFile('hdfs:///user/bekce/myfile.json')

    # SparkFiles.get() resolves the local path of that copy
    with open(SparkFiles.get('myfile.json'), 'rb') as handle:
        j = json.load(handle)
        # ... or do whatever else you need with the handle

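    For the first case, the distributed read, you would instead go through the DataFrame reader (or an RDD) so that Spark splits the work across executors. A minimal sketch, reusing the same hypothetical path and SparkSession from above:

    # Distributed read: Spark partitions the file across executors.
    # 'hdfs:///user/bekce/myfile.json' is the same hypothetical file as
    # above; spark is an existing SparkSession instance.
    df = spark.read.json('hdfs:///user/bekce/myfile.json')
    df.show()

    # For plain text files, an RDD of lines works similarly:
    lines = spark.sparkContext.textFile('hdfs:///user/bekce/myfile.json')
    print(lines.count())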
