I am new to Spark Scala and I have the following situation: I have a table "TEST_TABLE" on the cluster (it can be a Hive table), which I am converting to a DataFrame as:
Plain and simple:

import org.apache.spark.sql.functions._

val df = spark.table("TEST_TABLE")
df.select(df.columns.map(c => max(length(col(c)))): _*)
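To make the output readable you can alias each aggregate after its source column. Here is a minimal, self-contained sketch of the same technique; the local SparkSession, the sample data, and the `max_len_` column-name prefix are my own assumptions for illustration, not part of the question:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object MaxColumnLengths {
  def main(args: Array[String]): Unit = {
    // Local session just for the sketch; on a cluster you would use spark.table("TEST_TABLE")
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("max-column-lengths")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical sample data standing in for TEST_TABLE
    val df = Seq(("abc", "xy"), ("a", "wxyz")).toDF("c1", "c2")

    // One max(length(...)) aggregate per column, aliased so the
    // result columns are easy to identify
    val maxLens = df.select(
      df.columns.map(c => max(length(col(c))).alias(s"max_len_$c")): _*
    )
    maxLens.show()

    spark.stop()
  }
}
```

Because `df.columns` is evaluated at runtime, this works regardless of how many columns the table has; the single `select` produces one row with one aggregate per column.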