Question
In Hadoop HDFS, when ACLs are enabled, the maximum number of ACL entries is capped at 32. I found the source of this limit in org/apache/hadoop/hdfs/server/namenode/AclTransformation.java:
private static final int MAX_ENTRIES = 32;
What is the basis for this limit? What considerations went into choosing it? Can we change 32 to a larger number? I want to reconfigure it.
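For context, the limit surfaces at runtime when a client tries to attach more entries than the namenode will accept. Below is a minimal sketch using the public FileSystem ACL API; the path and user names are made up for illustration, and the exact exception message can differ between Hadoop versions:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class AclLimitDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/acl-demo"); // hypothetical test path

        // Build more named-user entries than the 32-entry cap allows.
        List<AclEntry> entries = new ArrayList<>();
        for (int i = 0; i < 40; i++) {
            entries.add(new AclEntry.Builder()
                .setScope(AclEntryScope.ACCESS)
                .setType(AclEntryType.USER)
                .setName("user" + i) // hypothetical user names
                .setPermission(FsAction.READ)
                .build());
        }

        // The namenode rejects the modification once the resulting ACL
        // would exceed MAX_ENTRIES; the call fails with an AclException
        // (a subclass of IOException).
        fs.modifyAclEntries(path, entries);
    }
}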
Answer 1:
ACLs were implemented in HDFS-4685 - Implementation of ACLs in HDFS.
As far as I can tell, there was no explicit design decision behind the limit of 32. However, since most Hadoop systems run on Linux and this feature was inspired by Linux ACLs, the value was most likely borrowed from the limit on ext3, as described in POSIX Access Control Lists on Linux by Andreas Grünbacher.
The article goes on to mention that having too many ACL entries creates problems, and it also shows the performance differences introduced by enabling ACLs (see the section titled "EA and ACL Performance").
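As for reconfiguring it: MAX_ENTRIES is a private static final field, not a value read from hdfs-site.xml, so raising it means editing the source and rebuilding the hadoop-hdfs module, then deploying the rebuilt jar to the namenode. A simplified sketch of the kind of check performed in AclTransformation (not the verbatim Hadoop code; the real method also sorts, deduplicates, and verifies required entries) looks like this:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.hdfs.protocol.AclException;

class AclLimitSketch {
    // The hard-coded cap quoted in the question.
    private static final int MAX_ENTRIES = 32;

    // Sketch of the validation step applied when an ACL is rebuilt.
    static List<AclEntry> buildAndValidateAcl(ArrayList<AclEntry> aclBuilder)
            throws AclException {
        if (aclBuilder.size() > MAX_ENTRIES) {
            // Illustrative message; exact wording varies by Hadoop version.
            throw new AclException("Invalid ACL: ACL has " + aclBuilder.size()
                + " entries, which exceeds maximum of " + MAX_ENTRIES + ".");
        }
        // ... sorting, deduplication, and required-entry checks elided ...
        return Collections.unmodifiableList(aclBuilder);
    }
}

If you do rebuild with a larger value, keep the article's performance findings in mind: every ACL entry is held in namenode memory for each inode that carries an ACL, so a larger cap trades memory and lookup cost for flexibility.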
Source: https://stackoverflow.com/questions/52214397/why-are-hdfs-acl-max-entries-set-to-32