Is there the equivalent for a `find` command in `hadoop`?

Submitted by 时光怂恿深爱的人放手 on 2019-12-04 04:47:54

`hadoop fs -find` was introduced in Apache Hadoop 2.7.0. Most likely you're using an older version, hence you don't have it yet. See HADOOP-8989 for more information.

In the meantime you can use

hdfs dfs -ls -R <pattern>

e.g.: hdfs dfs -ls -R /demo/order*.*

but that's not as powerful as `find`, of course, and lacks some basics. From what I understand, people have been writing scripts around it to work around this limitation.
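For instance, `find <path> -type f -name "*.csv"` can be emulated by post-filtering a recursive listing with awk. A minimal sketch: on a real cluster you would pipe `hdfs dfs -ls -R /demo` into the awk filter; here a hypothetical sample listing stands in for that output so the filtering itself can be tested locally.

```shell
#!/bin/sh
# Emulate `find /demo -type f -name "*.csv"` by filtering a recursive listing.
# sample_listing fakes `hdfs dfs -ls -R /demo` output (paths are made up).
sample_listing() {
  cat <<'EOF'
drwxr-xr-x   - hdfs hdfs          0 2019-12-01 10:00 /demo/order
-rw-r--r--   3 hdfs hdfs       1024 2019-12-01 10:01 /demo/order/part1.csv
-rw-r--r--   3 hdfs hdfs       2048 2019-12-01 10:02 /demo/order/part2.txt
EOF
}

# Keep file entries (permission string does not start with 'd') whose last
# field, the full path, ends in .csv; print only the path.
sample_listing | awk '!/^d/ && $NF ~ /\.csv$/ {print $NF}'
```

This prints `/demo/order/part1.csv`. The same pipeline works unchanged against real `hdfs dfs -ls -R` output, since the path is always the last whitespace-separated field.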

If you are using the Cloudera stack, try the find tool:

org.apache.solr.hadoop.HdfsFindTool

Set the command to a bash variable:

COMMAND='hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-job.jar org.apache.solr.hadoop.HdfsFindTool'

Usage as follows:

${COMMAND} -find . -name "something" -type d ...

If you don't have the Cloudera parcels available, you can use awk:

hdfs dfs -ls -R /some_path | awk -F / '/^d/ && (NF <= 5) && /something/' 

That's almost equivalent to the `find . -type d -name "*something*" -maxdepth 4` command: splitting each listing line on `/` means `NF` grows with path depth, so `NF <= 5` caps the depth.
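The NF-based depth check can be sanity-checked on sample listing lines without a cluster (the paths below are hypothetical):

```shell
#!/bin/sh
# Verify the awk filter on `hdfs dfs -ls -R` style lines: keep directories
# (leading 'd') whose name matches /something/ and whose path is shallow
# enough that splitting the line on "/" yields at most 5 fields.
printf '%s\n' \
  'drwxr-xr-x   - hdfs hdfs 0 2019-12-01 10:00 /a/something_x' \
  'drwxr-xr-x   - hdfs hdfs 0 2019-12-01 10:00 /a/b/c/d/something_y' \
  '-rw-r--r--   3 hdfs hdfs 9 2019-12-01 10:00 /a/something.csv' |
awk -F / '/^d/ && (NF <= 5) && /something/'
```

Only the `/a/something_x` line survives: the deeper `/a/b/c/d/something_y` entry splits into too many fields, and the `.csv` entry is a file, not a directory.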

Adding HdfsFindTool as an alias in .bash_profile makes it easy to use:

-- add the lines below to your profile:

alias hdfsfind='hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-job.jar org.apache.solr.hadoop.HdfsFindTool'
alias hdfs='hadoop fs'

-- you can use them as follows now (here I'm using the find tool to get, per HDFS source folder, the file name and record count):

$> cnt=1; for ff in `hdfsfind -find /dev/abc/*/2018/02/16/*.csv -type f`; do pp=`echo ${ff} | awk -F"/" '{print $7}'`; fn=`basename ${ff}`; fcnt=`hdfs -cat ${ff} | wc -l`; echo "${cnt}=${pp}=${fn}=${fcnt}"; cnt=`expr ${cnt} + 1`; done
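The same loop can be written with the more readable `$(...)` substitution. A sketch demonstrated on local temp files: on a real cluster you would replace the glob with the HdfsFindTool invocation above and `grep -c ''` with `hadoop fs -cat "$ff" | wc -l` (all paths here are hypothetical).

```shell
#!/bin/sh
# Per-file record-count loop, demonstrated on local sample files.
count_records() {
  dir=$(mktemp -d)
  printf 'a\nb\nc\n' > "$dir/x.csv"   # 3 records
  printf '1\n2\n'    > "$dir/y.csv"   # 2 records
  cnt=1
  for ff in "$dir"/*.csv; do
    fn=$(basename "$ff")
    fcnt=$(grep -c '' "$ff")          # line count, without wc's padding
    echo "${cnt}=${fn}=${fcnt}"
    cnt=$((cnt + 1))
  done
  rm -r "$dir"
}
count_records
```

This prints `1=x.csv=3` and `2=y.csv=2`, one numbered `counter=name=records` line per file, matching the output format of the loop above.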

-- simple ways to get folder/file details:

$> hdfsfind -find /dev/abc/ -type f -name "*.csv"
$> hdfsfind -find /dev/abc/ -type d -name "toys"
