How to read an HDFS file with a wildcard character in PySpark

Submitted by 这一生的挚爱 on 2021-01-29 05:18:32

Question


There are some Parquet files at the following paths:

/a/b/c='str1'/d='str'

/a/b/c='str2'/d='str'

/a/b/c='str3'/d='str'

I want to read the Parquet files like this:

df = spark.read.parquet('/a/b/c='*'/d='str')

but it doesn't work with the "*" wildcard character. How can I do that? Thank you for your help.


Answer 1:


You need to escape single quotes:

df = spark.read.parquet('/a/b/c=\'*\'/d=\'str\'')

... or just use double quotes:

df = spark.read.parquet("/a/b/c='*'/d='str'")


Source: https://stackoverflow.com/questions/50312789/how-to-read-hdfs-file-with-wildcard-character-used-by-pyspark
