How do I determine the size of my HBase tables? Is there any command to do so?

Submitted by 匆匆过客 on 2019-12-04 08:26:29

Question


I have multiple tables in my HBase shell that I would like to copy onto my file system. Some tables exceed 100 GB. However, I only have 55 GB of free space left on my local file system. Therefore, I would like to know the size of each of my HBase tables so that I can export only the smaller ones. Any suggestions are appreciated.

Thanks, gautham


Answer 1:


Try hdfs dfs -du -h /hbase/data/default/ (or /hbase/, depending on the HBase version you use).

This will show how much space is used by the files of your tables.

Hope that helps.
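Since the per-table listing can be long, it may help to total it up to see whether everything fits on disk. A minimal sketch, assuming du's plain-byte output format (size first, path last); the echo below only simulates cluster output, so the sample sizes and paths are illustrative:

```shell
# Sketch: total the sizes reported by `hdfs dfs -du`.
# The echo simulates du output (size in bytes, then path); on a real
# cluster, pipe `hdfs dfs -du /hbase/data/default/` in instead.
echo '1258291 /hbase/data/default/kylin_metadata
14336 /hbase/data/default/kylin_metadata_acl
636 /hbase/data/default/kylin_metadata_user' |
awk '{ total += $1 } END { print total }'   # prints 1273263
```

Note that the exact column layout of du output varies between Hadoop versions (newer releases add a column for space consumed with replication), so check yours before summing.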




Answer 2:


For 0.98+, try hadoop fs -du -s -h $hbase_root_dir/data/data/$schema_name/ (or /hbase/ for 0.94).

You can find hbase_root_dir in your cluster's hbase-site.xml file. The command above gives you a summary of the disk space used by each table.
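Extracting hbase_root_dir from hbase-site.xml can be scripted. A minimal sketch, assuming the usual XML property layout; the sample file, its path, and the namenode address are all hypothetical, so point the awk at your cluster's real hbase-site.xml (often under /etc/hbase/conf/):

```shell
# Sketch: pull hbase.rootdir out of hbase-site.xml with awk.
# The sample config below is an assumption for illustration only.
cat > /tmp/hbase-site.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode:8020/hbase</value>
  </property>
</configuration>
EOF
# Find the <name> line, then strip tags/whitespace from the next line.
awk '/<name>hbase.rootdir<\/name>/ { grab = 1; next }
     grab { gsub(/<\/?value>| /, ""); print; exit }' /tmp/hbase-site.xml
# prints hdfs://namenode:8020/hbase
```

If your HBase installation ships the utility class, hbase org.apache.hadoop.hbase.util.HBaseConfTool hbase.rootdir may also print the resolved value directly; availability depends on your version.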




Answer 3:


Use du.

Usage: hdfs dfs -du [-s] [-h] URI [URI …]

Displays the sizes of files and directories contained in the given directory, or the length of a file in case it's just a file.

Options:

The -s option displays an aggregate summary of file lengths rather than the individual files.

The -h option formats file sizes in a "human-readable" fashion (e.g., 64.0m instead of 67108864).

Example:

hdfs dfs -du -h /hbase/data/default

Output for me:

1.2 M    /hbase/data/default/kylin_metadata
14.0 K   /hbase/data/default/kylin_metadata_acl
636      /hbase/data/default/kylin_metadata_user
5.6 K    /hbase/data/default/test
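Since the original goal was to export only the tables that fit in a 55 GB budget, the byte-count output (du without -h) can be filtered mechanically. A minimal sketch; the echo simulates cluster output, and big_table with its size is a made-up example of a table over the budget:

```shell
# Sketch: print only tables whose on-disk size fits a 55 GB budget.
# On a real cluster, pipe `hdfs dfs -du /hbase/data/default/` in
# instead of the simulated echo below.
budget=$((55 * 1024 * 1024 * 1024))   # 55 GB in bytes
echo '1258291 /hbase/data/default/kylin_metadata
14336 /hbase/data/default/kylin_metadata_acl
636 /hbase/data/default/kylin_metadata_user
120000000000 /hbase/data/default/big_table' |
awk -v max="$budget" '$1 < max { print $NF }'
# big_table (120000000000 bytes > budget) is filtered out
```

Using $NF for the path keeps the filter working even on Hadoop versions whose du output inserts an extra replicated-size column between the size and the path.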


Source: https://stackoverflow.com/questions/28729257/how-do-i-determine-the-size-of-my-hbase-tables-is-there-any-command-to-do-so
