I am trying to read Parquet files without using Apache Spark, and I am able to do it, but I am finding it hard to read specific columns. I cannot find any good resource on Google, as almost all the posts are about reading Parquet files using Spark. Below is my code:
import org.apache.hadoop.fs.Path
import org.apache.avro.generic.GenericRecord
import org.apache.parquet.hadoop.ParquetReader
import org.apache.parquet.avro.AvroParquetReader

object parquetToJson {
  def main(args: Array[String]): Unit = {
    //case class Customer(key: Int, name: String, sellAmount: Double, profit: Double, state: String)
    val parquetFilePath = new Path("data/parquet/Customer/")
    val reader = AvroParquetReader.builder[GenericRecord](parquetFilePath).build() //.asInstanceOf[ParquetReader[GenericRecord]]
    val iter = Iterator.continually(reader.read).takeWhile(_ != null)
    val list = iter.toList
    list.foreach(record => println(record))
  }
}
The commented-out case class represents the schema of my file, and right now the above code reads all the columns from the file. I want to read specific columns.
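From what I have gathered so far, parquet-avro can apply a column projection if you register a reduced Avro schema on the Hadoop Configuration via AvroReadSupport.setRequestedProjection before building the reader. Below is a minimal sketch of what I mean, assuming the field names from my Customer case class above (I have not verified this end to end):

import org.apache.avro.SchemaBuilder
import org.apache.avro.generic.GenericRecord
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.avro.{AvroParquetReader, AvroReadSupport}

object parquetColumnsToJson {
  def main(args: Array[String]): Unit = {
    // Reduced Avro schema naming only the columns to read. Field names
    // ("key", "name") are assumed from the Customer case class; the record
    // name should match the schema name stored in the Parquet file.
    val projection = SchemaBuilder.record("Customer").fields()
      .requiredInt("key")
      .requiredString("name")
      .endRecord()

    // Registering the projection tells the Avro read support to
    // materialize only these columns.
    val conf = new Configuration()
    AvroReadSupport.setRequestedProjection(conf, projection)

    val reader = AvroParquetReader
      .builder[GenericRecord](new Path("data/parquet/Customer/"))
      .withConf(conf)
      .build()

    // Records should now contain only the projected fields.
    Iterator.continually(reader.read).takeWhile(_ != null)
      .foreach(record => println(record))
    reader.close()
  }
}

Is this the right way to do it, or is there a cleaner approach without Spark?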