I have a Spark job (on Spark 1.4.1) receiving a stream of Kafka events. I would like to save them continuously as Parquet on Tachyon.
val lines = KafkaUtils.createStream(ssc, zkQuorum, groupId, topicMap) // receiver-based stream from spark-streaming-kafka; arguments are placeholders
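Roughly what I have in mind is something like the sketch below (the Event case class, the Tachyon URL, and the append-per-micro-batch approach are just placeholders for illustration, with sqlContext an existing SQLContext):

import org.apache.spark.sql.SaveMode

case class Event(value: String)  // hypothetical payload schema

lines.map(_._2).foreachRDD { rdd =>  // keep only the message value of each (key, value) pair
  import sqlContext.implicits._
  // append every micro-batch to the same Parquet dataset on Tachyon
  rdd.map(Event(_)).toDF()
    .write
    .mode(SaveMode.Append)
    .parquet("tachyon://tachyon-master:19998/events.parquet")
}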
Spark 2.0 no longer writes Parquet metadata summary files (_metadata / _common_metadata) by default; see SPARK-15719.
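If you do rely on those summary files, they can, as far as I know, be turned back on through the underlying parquet-mr Hadoop setting that SPARK-15719 disabled by default; a sketch, assuming a SparkSession named spark:

// Re-enable Parquet _metadata / _common_metadata summary files on write.
// parquet.enable.summary-metadata is the parquet-mr flag Spark now leaves off by default.
spark.sparkContext.hadoopConfiguration
  .set("parquet.enable.summary-metadata", "true")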
If your data is hosted in S3, you may still see a Parquet performance hit caused by Parquet itself scanning the tail of every object to check its schema. That schema merging can be disabled explicitly:
sparkConf.set("spark.sql.parquet.mergeSchema", "false")