How to read partitioned Parquet files with a condition as a DataFrame?

This works fine:
val dataframe = sqlContext.read.parquet("file:///home/msoproj/dev_data/...")
You need to provide the mergeSchema = true option, as shown below (this is from Spark 1.6.0):
val dataframe = sqlContext.read.option("mergeSchema", "true").parquet("file:///your/path/data=jDD")
This will read all the Parquet files into the DataFrame and also create the partition columns year, month, and day in the DataFrame.
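Since the partition columns show up as ordinary columns, you can filter on them to read only the partitions you need; Spark prunes the non-matching year=/month=/day= directories instead of scanning everything. A minimal sketch, assuming the same directory layout as above (the path and the filter values here are illustrative):

// Read the whole partitioned dataset, merging schemas across partitions
val df = sqlContext.read
  .option("mergeSchema", "true")
  .parquet("file:///your/path/data=jDD")

// Filter on the partition columns; Spark applies partition pruning,
// so only the matching year=/month=/day= directories are scanned
val subset = df.filter(df("year") === 2015 && df("month") === 10 && df("day") === 25)
subset.show()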
Ref: https://spark.apache.org/docs/1.6.0/sql-programming-guide.html#schema-merging