How to read parquet files using `ssc.fileStream()`? What are the types passed to `ssc.fileStream()`?

执念已碎 2020-12-16 05:31

My understanding of Spark's fileStream() method is that it takes three type parameters: Key, Value, and Format. In c…
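For reference, the usual instantiation of those three type parameters for plain text files pairs Hadoop's Writable key/value types with the matching (new-API) InputFormat. A minimal sketch, assuming an existing `StreamingContext` named `ssc` and a placeholder directory path:

```scala
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

// Key = byte offset within the file, Value = one line of text,
// Format = the InputFormat that produces those (key, value) pairs.
val lines = ssc.fileStream[LongWritable, Text, TextInputFormat]("/tmp/textDir")
  .map { case (_, text) => text.toString }
```

Note that `fileStream` requires the new-API `org.apache.hadoop.mapreduce` InputFormat, not the legacy `org.apache.hadoop.mapred` one.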

2 Answers
  •  太阳男子
    2020-12-16 05:55

    You can read the Parquet files by adding some Parquet-specific Hadoop settings:

    // Imports needed for this snippet (Spark 2.1.x)
    import org.apache.hadoop.fs.Path
    import org.apache.parquet.hadoop.ParquetInputFormat
    import org.apache.spark.sql.catalyst.expressions.UnsafeRow
    import org.apache.spark.sql.internal.SQLConf
    import org.apache.spark.sql.types.{StringType, StructField, StructType}
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(conf, Seconds(5))

    // Declare the schema of the Parquet files you expect to read
    val schema = StructType(Seq(
      StructField("a", StringType, nullable = false)
      ........
    ))
    val schemaJson = schema.json

    val fileDir = "/tmp/fileDir"

    // Point the Parquet input format at Spark's read support and the requested schema
    ssc.sparkContext.hadoopConfiguration.set("parquet.read.support.class",
      "org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport")
    ssc.sparkContext.hadoopConfiguration.set("org.apache.spark.sql.parquet.row.requested_schema", schemaJson)
    ssc.sparkContext.hadoopConfiguration.set(SQLConf.PARQUET_BINARY_AS_STRING.key, "false")
    ssc.sparkContext.hadoopConfiguration.set(SQLConf.PARQUET_INT96_AS_TIMESTAMP.key, "false")
    ssc.sparkContext.hadoopConfiguration.set(SQLConf.PARQUET_WRITE_LEGACY_FORMAT.key, "false")

    // Key = Void, Value = UnsafeRow, Format = ParquetInputFormat[UnsafeRow]
    val streamRdd = ssc.fileStream[Void, UnsafeRow, ParquetInputFormat[UnsafeRow]](
      fileDir, (path: Path) => true, newFilesOnly = false)

    streamRdd.count().print()

    ssc.start()
    ssc.awaitTermination()
    

    This code was prepared with Spark 2.1.0.
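    To do more than count records, the `(Void, UnsafeRow)` pairs can be mapped to plain values using `UnsafeRow`'s positional getters. A hedged sketch, assuming the single-field `StringType` schema declared above (field ordinal 0 is `"a"`):

    ```scala
    streamRdd.map { case (_, row) =>
      // UnsafeRow exposes typed getters by ordinal; getUTF8String(0)
      // reads the StringType column "a" from the schema above
      row.getUTF8String(0).toString
    }.print()
    ```

    The ordinal and getter must match the declared schema: a mismatched type or index fails at runtime, since `UnsafeRow` carries no field names of its own.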
