Why is HDFS ACL MAX_ENTRIES set to 32?

Submitted by 旧时模样 on 2019-12-24 09:09:24

Question


In Hadoop HDFS, when ACLs are enabled, I found that the maximum number of ACL entries is set to 32. The relevant source is in org/apache/hadoop/hdfs/server/namenode/AclTransformation.java:

private static final int MAX_ENTRIES = 32;

What is the basis for this value? What considerations went into it? Can we change 32 to a larger number? I want to reconfigure it.
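For context, here is a minimal, illustrative sketch of the kind of validation that consumes such a constant (paraphrased, not the exact code in AclTransformation.java). The point is that the limit is a compile-time constant checked when an ACL is modified, not a value read from a configuration key:

// Simplified sketch, assuming a hard-coded limit checked at modification time;
// not the exact HDFS implementation.
import java.util.List;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.hdfs.protocol.AclException;

class AclLimitSketch {
  private static final int MAX_ENTRIES = 32;

  // Reject the ACL outright if it would exceed the hard-coded maximum;
  // there is no configuration property to raise this number.
  static void checkMaxEntries(List<AclEntry> entries) throws AclException {
    if (entries.size() > MAX_ENTRIES) {
      throw new AclException("Invalid ACL: ACL has " + entries.size()
          + " entries, which exceeds maximum of " + MAX_ENTRIES + ".");
    }
  }
}

So "reconfiguring" the limit would mean changing the constant and rebuilding the NameNode, not editing hdfs-site.xml.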


Answer 1:


ACLs were implemented in HDFS-4685 - Implementation of ACLs in HDFS.

As far as I can tell, there was no explicitly documented design decision behind the limit of 32. However, since most Hadoop systems run on Linux, and this feature was inspired by Linux ACLs, the value was most likely borrowed from the ext3 limits mentioned in POSIX Access Control Lists on Linux by Andreas Grünbacher.

The article goes on to mention that having too many ACL entries creates problems, and it also shows the performance differences introduced when ACLs are enabled (see the section titled "EA and ACL Performance").
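To see how the limit surfaces in practice, below is a hedged client-side sketch using the public FileSystem ACL API; the path and user names are hypothetical. Pushing the entry count past 32 is expected to be rejected by the NameNode with an AclException rather than silently truncated:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class AclLimitDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path("/tmp/acl-demo");   // hypothetical path
    fs.mkdirs(path);

    // Build more named-user entries than the NameNode will accept.
    List<AclEntry> entries = new ArrayList<>();
    for (int i = 0; i < 40; i++) {           // 40 > MAX_ENTRIES (32)
      entries.add(new AclEntry.Builder()
          .setScope(AclEntryScope.ACCESS)
          .setType(AclEntryType.USER)
          .setName("user" + i)               // hypothetical user names
          .setPermission(FsAction.READ_EXECUTE)
          .build());
    }

    // Expected to fail with an AclException stating that the ACL
    // exceeds the maximum number of entries.
    fs.modifyAclEntries(path, entries);
  }
}

If you routinely need more than 32 entries per file, the usual advice is to grant access through groups instead of many named-user entries.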



Source: https://stackoverflow.com/questions/52214397/why-are-hdfs-acl-max-entries-set-to-32
