There was a question regarding this issue here:
Explode (transpose?) multiple columns in Spark SQL table
Suppose that the table has extra columns. To extend the UDF to zip three columns, define it as follows:
import org.apache.spark.sql.functions.{explode, udf}

val zip = udf((xs: Seq[String], ys: Seq[String], zs: Seq[String]) =>
  for (((x, y), z) <- xs zip ys zip zs) yield (x, y, z))
df.withColumn("vars", explode(zip($"varA", $"varB", $"varC")))
  .select(
    $"userId", $"someString",
    $"vars._1".alias("varA"),
    $"vars._2".alias("varB"),
    $"vars._3".alias("varC"))
  .show
The same pattern extends to n columns: add one parameter to the UDF, one `zip`, and one field in the yielded tuple per extra column.
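To see what the UDF computes row by row, here is the three-way zip as plain Scala, outside Spark. The sample values are hypothetical; the zipping logic is exactly what the UDF applies to each row's arrays:

    // Illustration of the three-way zip used inside the UDF (no Spark needed).
    object ZipThree {
      // Pairwise-zips three sequences into a sequence of triples,
      // truncating to the shortest input, like Seq.zip does.
      def zip3(xs: Seq[String], ys: Seq[String], zs: Seq[String]): Seq[(String, String, String)] =
        for (((x, y), z) <- xs zip ys zip zs) yield (x, y, z)

      def main(args: Array[String]): Unit = {
        // Hypothetical per-row array values for varA, varB, varC.
        val varA = Seq("a1", "a2")
        val varB = Seq("b1", "b2")
        val varC = Seq("c1", "c2")
        println(ZipThree.zip3(varA, varB, varC))
        // List((a1,b1,c1), (a2,b2,c2))
      }
    }

After `explode`, each of these triples becomes its own row, and `vars._1`/`._2`/`._3` pull the fields back out as columns.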