vm.max_map_count and mmapfs


Question


What are the pros and cons of increasing vm.max_map_count from 64k to 256k?

Does vm.max_map_count = 65530 imply that 64k addresses * 64 KB page size = up to 4 GB of data can be referenced by the process?

And if I exceed 4 GB (the addressable space implied by the vm.max_map_count limit), will the OS need to page out some of the older index data?

Maybe my understanding above is not correct, as the FS cache can be pretty huge.

How does this limit result in OOM?

I posted a similar question in the Elasticsearch forum at https://discuss.elastic.co/t/mmapfs-and-impact-of-vm-max-map-count/55568


Answer 1:


Answering my own question, based on further digging and a reply from Uwe Schindler (Lucene PMC):

The page size has nothing to do with max_map_count. It is the number of mappings that are allocated. Lucene's MMapDirectory maps files in portions of up to 1 GiB. The number of mappings therefore depends on the number of segments (the number of files in the index directory) and their size. A typical index with around 40 files in the index directory, all smaller than 1 GiB, needs 40 mappings. If the index is larger, still has 40 files, but most segments are around 20 gigabytes, then it could take up to 800 mappings.
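To make that arithmetic concrete, here is a minimal Python sketch (my own addition, not from Uwe's reply) that estimates how many mappings one index directory would need under the 1 GiB chunking described above; the directory path is a placeholder.

    import math
    from pathlib import Path

    CHUNK = 1 << 30  # MMapDirectory maps files in chunks of up to 1 GiB

    def estimate_mappings(index_dir: str) -> int:
        """Rough count of mmap regions needed for the files in one index directory."""
        total = 0
        for f in Path(index_dir).iterdir():
            if f.is_file():
                # each file needs ceil(size / 1 GiB) mappings, with a minimum of 1
                total += max(1, math.ceil(f.stat().st_size / CHUNK))
        return total

    print(estimate_mappings("/path/to/index"))  # placeholder path

Summing this over all index directories on a node gives a rough idea of how close you are to the limit.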

The reason the Elasticsearch people recommend raising max_map_count is their customer structure: many Logstash users run Elasticsearch clusters with something like 10,000 indices, each possibly very large, so the number of mappings can become a limiting factor.

I'd suggest not changing the default setting unless you get IOExceptions about "map failed" (please note: with recent Lucene versions this will not result in OOMs, as it is handled internally).

OS paging has nothing to do with the mapped file count. max_map_count is just a limit on how many mappings can be used in total. Each mapping is one mmapped chunk of up to 1 GiB. Paging in the OS happens at a much lower level: it swaps parts of those chunks independently, according to the page size (chunk size != page size).
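As a quick way to see this on a running system, here is a small Linux-only Python sketch (my addition, not from the reply) that compares a process's current number of mappings, one line per region in /proc/<pid>/maps, against the vm.max_map_count limit; the PID is a placeholder.

    def mapping_count(pid: int) -> int:
        # /proc/<pid>/maps lists one line per memory mapping of the process
        with open(f"/proc/{pid}/maps") as f:
            return sum(1 for _ in f)

    def max_map_count() -> int:
        # the kernel limit this whole question is about
        with open("/proc/sys/vm/max_map_count") as f:
            return int(f.read())

    pid = 12345  # placeholder: your Elasticsearch JVM's process id
    print(f"{mapping_count(pid)} mappings in use, limit is {max_map_count()}")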

Summary (please correct me if I am wrong): unlike what the documentation suggests, I don't think it is necessary to increase max_map_count in all scenarios.

ES 2.x - In the default (hybrid niofs + mmapfs) store, only the .dvd and .tim files (maybe the point files too) are mmapped, which would allow for ~30,000 shards per node.

ES 5.x - there is segment throttling, so although the default store moves to mmapfs, the default of 64k may still work fine.

Raising the limit could be useful if you plan to use mmapfs and have > 1000 shards per node (I personally see many other issues creep in with a high shard count per node).

mmapfs store - only when the store is mmapfs and each node holds > 65,000 segment files (or 1000+ shards) does this limit come into play. I would rather add more nodes than run such a massive number of shards per node on mmapfs. (See the rough arithmetic below.)
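To illustrate the last point with back-of-the-envelope numbers (the figures below are assumptions for illustration, not measurements):

    DEFAULT_LIMIT = 65530

    def mappings(shards, files_per_shard, mappings_per_file):
        return shards * files_per_shard * mappings_per_file

    # small segments (< 1 GiB): one mapping per file
    print(mappings(1000, 40, 1), "vs", DEFAULT_LIMIT)   # 40000 -> fits under the default

    # mostly ~20 GiB segments: ~20 mappings per file
    print(mappings(1000, 40, 20), "vs", DEFAULT_LIMIT)  # 800000 -> far exceeds the default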



Source: https://stackoverflow.com/questions/38384759/vm-max-map-count-and-mmapfs
