Change Block size of existing files in Hadoop

Asked 2020-12-30 11:22

Consider a Hadoop cluster where the default block size is 64 MB in hdfs-site.xml. However, later on the team decides to change this to 128 MB. Here are my questions:

1. Will this change require a restart of the cluster, or will it be taken up automatically so that all new files have the default block size of 128 MB?
2. What will happen to the existing files which have a block size of 64 MB? Will the change in configuration apply to existing files automatically?
3. If it is not done automatically, how can the block size be changed manually?

3 Answers

    Answered 2020-12-30 11:56

    Will this change require a restart of the cluster, or will it be taken up automatically so that all new files have the default block size of 128 MB?

    A restart of the cluster will be required for this property change to take effect.
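    For reference, the cluster-wide default lives in hdfs-site.xml. A minimal sketch of the entry, assuming Hadoop 2.x or later, where the property is named dfs.blocksize (the older dfs.block.size still works as a deprecated alias); the value is in bytes, so 128 MB is 134217728:

        <property>
            <name>dfs.blocksize</name>
            <value>134217728</value>
        </property>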

    What will happen to the existing files which have a block size of 64 MB? Will the change in configuration apply to existing files automatically?

    Existing files will keep their old 64 MB block size; HDFS does not rewrite existing blocks when the configuration changes.
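    You can confirm this by checking the block size of an existing file; for example (the path is a placeholder), hadoop fs -stat with the %o format option prints a file's block size in bytes:

        hadoop fs -stat %o /path/to/old/files/somefile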

    If it is not done automatically, how can the block size be changed manually?

    To change the existing files you can use distcp. It copies the files over, writing them with the new block size, but you will have to delete the files with the older block size manually afterwards. Here's a command you can use:

    hadoop distcp -Ddfs.block.size=XX /path/to/old/files /path/to/new/files/with/larger/block/sizes
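    As a concrete sketch of the whole procedure (the paths and the 128 MB value are illustrative assumptions, not from the original answer):

        # copy the files, writing them with the new 128 MB block size (value in bytes)
        hadoop distcp -Ddfs.block.size=134217728 /data/old /data/new
        # spot-check that a copied file really has the new block size
        hadoop fs -stat %o /data/new/somefile
        # once satisfied, delete the originals and move the copies into place
        hadoop fs -rm -r /data/old
        hadoop fs -mv /data/new /data/old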
    
