When I store many small files into HDFS, will they get stored in a single block?
In my opinion, each small file is stored in its own block, because a block belongs to only one file. You can verify this as follows: 1. Use the fsck command to get the block info for a file:
hadoop fsck /gavial/data/OB/AIR/PM25/201709/01/15_00.json -files -blocks
The output looks like this:
/gavial/data/OB/AIR/PM25/201709/01/15_00.json 521340 bytes, 1 block(s): OK
0. BP-1004679263-192.168.130.151-1485326068364:blk_1074920015_1179253 len=521340 repl=3
Status: HEALTHY
Total size: 521340 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 1 (avg. block size 521340 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
The block ID is:
blk_1074920015
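For scripting, the block ID can be pulled out of the fsck output with a small regex. This is only an illustrative sketch (the helper name and return format are my own), assuming the block line has the shape shown above:

```python
import re

# Matches a block entry of the form shown in the fsck output above, e.g.:
# 0. BP-1004679263-192.168.130.151-1485326068364:blk_1074920015_1179253 len=521340 repl=3
BLOCK_RE = re.compile(r"(blk_\d+)_\d+\s+len=(\d+)\s+repl=(\d+)")

def parse_block_line(line):
    """Extract (block_id, length, replication) from an fsck block line, or None."""
    m = BLOCK_RE.search(line)
    if m is None:
        return None
    return m.group(1), int(m.group(2)), int(m.group(3))

line = ("0. BP-1004679263-192.168.130.151-1485326068364:"
        "blk_1074920015_1179253 len=521340 repl=3")
print(parse_block_line(line))  # → ('blk_1074920015', 521340, 3)
```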
2. Use the fsck command to show the block's status; the output looks like this:
hdfs fsck -blockId blk_1074920015
Block Id: blk_1074920015
Block belongs to: /gavial/data/OB/AIR/PM25/201709/01/15_00.json
No. of Expected Replica: 3
No. of live Replica: 3
No. of excess Replica: 0
No. of stale Replica: 0
No. of decommission Replica: 0
No. of corrupted Replica: 0
Block replica on datanode/rack: datanode-5/default-rack is HEALTHY
Block replica on datanode/rack: datanode-1/default-rack is HEALTHY
Obviously, the block belongs to only one file. So storing many small files does not pack them into a shared block: each file occupies at least one block of its own, even when the file is far smaller than the configured block size (the unused remainder of the block does not consume disk space, but every block still costs a metadata entry on the NameNode).
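To make the arithmetic concrete: the number of blocks a file occupies is ceil(file size / block size). A minimal sketch, assuming a hypothetical 128 MB block size (the common default; your cluster's dfs.blocksize may differ):

```python
import math

BLOCK_SIZE = 128 * 1024 * 1024  # assumed dfs.blocksize of 128 MB

def blocks_for(file_size):
    """Number of HDFS blocks a file of the given size occupies (at least one)."""
    return max(1, math.ceil(file_size / BLOCK_SIZE))

# The 521340-byte JSON file above fits in a single block:
print(blocks_for(521340))  # → 1

# Ten such small files occupy ten blocks in total -- one each, never shared:
sizes = [521340] * 10
print(sum(blocks_for(s) for s in sizes))  # → 10
```

This is why the "many small files" pattern is discouraged on HDFS: the block count (and NameNode memory) grows with the number of files, not with the total data size.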