YARN: parsing job logs stored in HDFS

Submitted by 我只是一个虾纸丫 on 2019-12-08 03:38:13

Question


Is there any parser I can use to parse the JSON in YARN job logs (.jhist files) stored in HDFS, to extract information from it?


Answer 1:


The second line of a .jhist file is the Avro schema for the JSON records that follow it, which means you can convert the .jhist file into Avro data. For this you can use avro-tools-1.7.7.jar:

# the schema is the second line
sed -n '2p;3q' file.jhist > schema.avsc

# remove the first two lines (header and schema)
sed '1,2d' file.jhist > pfile.jhist

# finally, convert the remaining JSON records to Avro data
java -jar avro-tools-1.7.7.jar fromjson --schema-file schema.avsc pfile.jhist > file.avro

You now have Avro data, which you can, for example, import into a Hive table and run queries against.
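If you only need the schema and the raw JSON events, the same line layout can also be exploited directly from Python, without the avro-tools jar. This is a minimal sketch assuming the layout described above (a header on line 1, the Avro schema on line 2, one JSON event per subsequent line); the sample file content below is fabricated for illustration, not taken from a real .jhist file:

```python
import json

def split_jhist(lines):
    """Split .jhist lines into (schema, events), assuming line 1 is a
    header, line 2 the Avro schema, and each later non-empty line is
    one JSON event record."""
    schema = json.loads(lines[1])
    events = [line for line in (json.loads(l) for l in lines[2:] if l.strip())]
    return schema, events

# Fabricated stand-in for a real .jhist file's contents.
sample = [
    'Avro-Json\n',
    '{"type": "record", "name": "Event", "fields": []}\n',
    '{"type": "JOB_SUBMITTED"}\n',
    '{"type": "JOB_FINISHED"}\n',
]

schema, events = split_jhist(sample)
print(schema["name"])   # record name taken from the schema line
print(len(events))      # number of JSON event records
```

For a real file you would read it from HDFS first (e.g. `hdfs dfs -cat ... > file.jhist`) and pass its lines in.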




Answer 2:


You can check out Rumen, a parsing tool from the Apache ecosystem. Alternatively, in the web UI, go to the job history and find the job whose .jhist file you want to read. Click the Counters link on the left; you will see an API that exposes all the parameters and their values (e.g. CPU time in milliseconds), read from the .jhist file itself.
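The counters view mentioned here is also exposed through the MapReduce History Server REST API. A hedged sketch of calling it from Python, assuming the default history server port of 19888; the host name and job id below are placeholders, not values from the original question:

```python
import json
from urllib.request import urlopen

def counters_url(host, job_id, port=19888):
    """Build the History Server REST URL for a job's counters."""
    return f"http://{host}:{port}/ws/v1/history/mapreduce/jobs/{job_id}/counters"

# Placeholder host and job id for illustration only.
url = counters_url("historyserver.example.com", "job_1430932878562_0001")
print(url)

# On a live cluster you would then fetch and decode the response, e.g.:
#   groups = json.load(urlopen(url))["jobCounters"]["counterGroup"]
```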



Source: https://stackoverflow.com/questions/30121733/yarn-parsing-job-logs-stored-in-hdfs
