You can load multiple files at once by passing multiple paths to the load method (it accepts varargs), e.g.

val df = spark.read
  .format("com.databricks.spark.avro")
  .load("/data/src/entity1/file1.avro", "/data/src/entity1/file2.avro")  // illustrative paths
You don't need to create a list of paths. You can use a wildcard instead, like this:
val df = spark.read
  .format("com.databricks.spark.avro")
  .option("header", "true")
  .load("/data/src/entity1/*")