Change Block size of existing files in Hadoop

情话喂你 2020-12-30 11:22

Consider a Hadoop cluster where the default block size is 64MB in hdfs-site.xml. Later on, the team decides to change this to 128MB. Here are my questions:

1. Does this change require a restart of the cluster to take effect?
2. Will the change also apply to existing files, or only to newly created files?
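For reference, a minimal sketch of the change in hdfs-site.xml. This assumes the Hadoop 2.x+ property name dfs.blocksize (Hadoop 1.x used dfs.block.size instead), with the value given in bytes:

    <!-- hdfs-site.xml: raise the default block size from 64MB to 128MB -->
    <property>
      <name>dfs.blocksize</name>
      <value>134217728</value> <!-- 128MB = 128 * 1024 * 1024 bytes -->
    </property>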

3 Answers
  •  [愿得一人]
    2020-12-30 11:55

    As mentioned here, regarding your points:

    1. Whenever you change a configuration, you need to restart the NameNode and the DataNodes for the change to take effect.
    2. No, it will not. Existing files keep their old block size. For them to pick up the new block size, you need to rewrite the data: either hadoop fs -cp or distcp the data (see the sketch below). The new copy will have the new block size, and you can then delete the old data.

    Check the link for more information.
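    Here is a rough sketch of both steps, assuming a non-HA cluster managed with the bundled Hadoop 2.x+ scripts; /data/old and /data/new are hypothetical paths:

        # 1. Restart the NameNode and DataNodes so the new dfs.blocksize takes effect.
        $HADOOP_HOME/sbin/stop-dfs.sh
        $HADOOP_HOME/sbin/start-dfs.sh

        # 2. Rewrite existing data; the copy is written with the new 128MB default
        #    block size (distcp only preserves the old block size if you pass -pb).
        hadoop distcp /data/old /data/new    # or: hadoop fs -cp /data/old /data/new

        # Verify the new copy's block sizes, then delete the old data.
        hdfs fsck /data/new -files -blocks | head
        hadoop fs -rm -r /data/old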
