Spark: Read file only if the path exists

Posted by 大城市里の小女人 on 2019-12-30 06:00:53

Question


I am trying to read the files present at a sequence of paths in Scala. Below is the sample (pseudo) code:

val paths: Seq[String] = ??? // sequence of input paths, supplied as a method argument
val dataframe = spark.read.parquet(paths: _*)

Now, in the above sequence, some paths exist whereas some don't. Is there any way to ignore the missing paths while reading parquet files (to avoid org.apache.spark.sql.AnalysisException: Path does not exist)?

I have tried the following, and it seems to work, but then I end up reading each path twice, which I would like to avoid:

import scala.util.Try

// Probes each path by attempting a read, so every existing path is touched twice
val filteredPaths = paths.filter(p => Try(spark.read.parquet(p)).isSuccess)

I checked the options method on DataFrameReader, but it does not seem to have anything like an ignore_if_missing option.

Also, these paths can be HDFS or S3 (the Seq is passed as a method argument), and while reading I don't know whether a given path is S3 or HDFS, so I can't use an S3- or HDFS-specific API to check for existence.


Answer 1:


You can filter out the irrelevant files as in @Psidom's answer. In Spark, the best way to do so is to use Spark's internal Hadoop configuration. Assuming the SparkSession variable is called "spark", you can do:

import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path

val hadoopfs: FileSystem = FileSystem.get(spark.sparkContext.hadoopConfiguration)

// Returns true only if the path exists and is a directory
def testDirExists(path: String): Boolean = {
  val p = new Path(path)
  hadoopfs.exists(p) && hadoopfs.getFileStatus(p).isDirectory
}
val filteredPaths = paths.filter(p => testDirExists(p))
val dataframe = spark.read.parquet(filteredPaths: _*)
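
A caveat worth noting: FileSystem.get(spark.sparkContext.hadoopConfiguration) returns the cluster's default filesystem, so when the Seq mixes hdfs:// and s3:// URIs the existence check may be issued against the wrong store. Below is a minimal, scheme-agnostic sketch that resolves the filesystem from each path itself (the helper name pathExists is illustrative, not part of any API):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

// Resolve the FileSystem from the path's own scheme (hdfs://, s3a://, file://, ...)
// rather than the cluster default, so mixed sequences are handled correctly.
def pathExists(path: String, conf: Configuration): Boolean = {
  val p = new Path(path)
  p.getFileSystem(conf).exists(p)
}

val conf = spark.sparkContext.hadoopConfiguration
val existingPaths = paths.filter(p => pathExists(p, conf))
val dataframe = spark.read.parquet(existingPaths: _*)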



Answer 2:


How about filtering the paths first:

paths.filter(f => new java.io.File(f).exists)

For instance:

Seq("/tmp", "xx").filter(f => new java.io.File(f).exists)
// res18: List[String] = List(/tmp)
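
Note that java.io.File resolves paths against the driver's local filesystem, so this check only works for local paths; for hdfs:// or s3:// URIs, use the Hadoop FileSystem approach from the first answer instead.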


Source: https://stackoverflow.com/questions/45193825/spark-read-file-only-if-the-path-exists
