We are running Spark 2.3.0 on AWS EMR. The following DataFrame "df" is non-empty and of modest size:
scala> df.co
This error usually occurs when you try to read an empty directory as Parquet. You could check: 1. whether the DataFrame is empty with df.rdd.isEmpty() before writing it; 2. whether the path you are passing is correct.
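A minimal sketch of the emptiness guard described above, assuming an existing SparkSession `spark` and the questioner's DataFrame `df`; the output path is purely illustrative:

```scala
// Guard the write with an emptiness check (works on Spark 2.3.x,
// where Dataset.isEmpty is not yet available; it was added in 2.4).
if (df.rdd.isEmpty()) {
  println("DataFrame is empty; skipping write")
} else {
  // Hypothetical output path shown only as an example.
  df.write.mode("overwrite").parquet("s3://my-bucket/output/")
}
```

Note that `df.rdd.isEmpty()` triggers a job, but it only needs to find a single row, so it is much cheaper than a full `count()` on a large DataFrame.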
Also, in what mode are you running your application? If you are running in cluster mode, try running it in client mode.