Elasticsearch process memory locking failed

Submitted by 半城伤御伤魂 on 2019-11-27 17:01:17

Question


I have set bootstrap.memory_lock: true and updated /etc/security/limits.conf, adding memlock unlimited for the elasticsearch user.

My Elasticsearch had been running fine for many months. Then it suddenly failed a day ago. In the logs I can see the error below, and the process never starts:

ERROR: bootstrap checks failed memory locking requested for elasticsearch process but memory is not locked

I ran ulimit -as and can see max locked memory set to unlimited. What is going wrong here? I have been trying for hours, but all in vain. Please help.

OS: RHEL 7.2, Elasticsearch 5.1.2

ulimit -as output

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 83552
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Answer 1:


Here is what I have done to lock the memory on my ES nodes on RedHat/CentOS 7 (it will also work on other distributions that use systemd).

You must make the change in 4 different places:

1) /etc/sysconfig/elasticsearch

On sysconfig: /etc/sysconfig/elasticsearch you should have:

ES_JAVA_OPTS="-Xms4g -Xmx4g" 
MAX_LOCKED_MEMORY=unlimited

(replace 4g with HALF your available RAM as recommended here)

2) /etc/security/limits.conf

On security limits config: /etc/security/limits.conf you should have

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

3) /usr/lib/systemd/system/elasticsearch.service

On the service script: /usr/lib/systemd/system/elasticsearch.service you should uncomment:

LimitMEMLOCK=infinity

You should run systemctl daemon-reload after changing the service script.

4) /etc/elasticsearch/elasticsearch.yml

Finally, in the Elasticsearch config /etc/elasticsearch/elasticsearch.yml you should add:

bootstrap.memory_lock: true

That's it. Restart your node and the RAM will be locked; you should notice a major performance improvement.
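To verify that the lock actually took effect after the restart, you can check the limit systemd applied and then ask the node itself (a quick sketch, assuming the node listens on the default localhost:9200):

systemctl show elasticsearch | grep -i limitmemlock
# expected: LimitMEMLOCK=infinity

curl -s 'http://localhost:9200/_nodes?filter_path=**.mlockall&pretty'
# every node should report "mlockall" : true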




Answer 2:


Try setting MAX_LOCKED_MEMORY=unlimited in /etc/sysconfig/elasticsearch,

and LimitMEMLOCK=infinity in /usr/lib/systemd/system/elasticsearch.service.
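A minimal sketch of those two changes, assuming the RPM/systemd layout described in Answer 1, followed by the reload that makes them take effect:

# /etc/sysconfig/elasticsearch
MAX_LOCKED_MEMORY=unlimited

# /usr/lib/systemd/system/elasticsearch.service ([Service] section)
LimitMEMLOCK=infinity

sudo systemctl daemon-reload
sudo systemctl restart elasticsearch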




Answer 3:


OS = Ubuntu 16
Elasticsearch = 5.6.3

I had the same problem.

I set in elasticsearch.yml

bootstrap.memory_lock: true

and I got this in my logs:

memory locking requested for elasticsearch process but memory is not locked

I tried several things, but actually you only need to do one thing (according to https://www.elastic.co/guide/en/elasticsearch/reference/master/setting-system-settings.html ):

file:

/etc/systemd/system/elasticsearch.service.d/override.conf

add

[Service]
LimitMEMLOCK=infinity
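If you prefer not to create the drop-in by hand, systemctl edit creates the same override.conf for you (a sketch; it opens an editor in which you paste the two lines above):

sudo systemctl edit elasticsearch
# paste the [Service] / LimitMEMLOCK=infinity lines, save, then:
sudo systemctl daemon-reload
sudo systemctl restart elasticsearch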

A bit of explanation.

The really funny thing is that systemd does not care about ulimit settings at all ( https://fredrikaverpil.github.io/2016/04/27/systemd-and-resource-limits/ ). You can easily check this fact:

  1. Set in /etc/security/limits.conf

    elasticsearch - memlock unlimited

  2. check that max locked memory is unlimited for the elasticsearch user

    $ sudo su elasticsearch -s /bin/bash
    $ ulimit -l

  3. disable bootstrap.memory_lock: true in /etc/elasticsearch/elasticsearch.yml

    # bootstrap.memory_lock: true

  4. start service elasticsearch via systemd

    # service elasticsearch start

  5. check what max locked memory setting the elasticsearch service has after it is started

    # systemctl show elasticsearch | grep -i limitmemlock

OMG! Even though we set an unlimited max memlock size via ulimit, systemd completely ignores it:

LimitMEMLOCK=65536

So, we come to a conclusion. To start Elasticsearch via systemd with

bootstrap.memory_lock: true

enabled, we don't need to care about ulimit settings, but we do need to set the limit explicitly in the systemd config file.
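With the override from above in place, re-running the check from step 5 should show the new limit (a quick sketch):

sudo systemctl daemon-reload
sudo systemctl restart elasticsearch
systemctl show elasticsearch | grep -i limitmemlock
# LimitMEMLOCK=infinity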

End of story.




Answer 4:


Make sure that the process that starts Elasticsearch actually runs with the memlock limit set to unlimited. If, for example, you start Elasticsearch as a different user than the one configured in /etc/security/limits.conf, or as root while limits.conf only defines a wildcard entry (which does not apply to root), it won't work.

Test it to be sure: you could, for example, put ulimit -a ; exit just after the "#Start Daemon" line in /etc/init.d/elasticsearch and start it with bash /etc/init.d/elasticsearch start (adapt accordingly to your start mechanism).
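To see the memlock limit the elasticsearch user actually gets from limits.conf, something like this works (a sketch; adjust the user name if your installation uses a different one):

sudo su elasticsearch -s /bin/bash -c 'ulimit -l'
# should print: unlimited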




Answer 5:


Check the actual limits of the process while it is running (even if only briefly) with:

cat /proc/<pid>/limits

You will find lines similar to this:

Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        0                    unlimited            bytes 
<truncated>    

Then, depending on the runner or container (in my case it was supervisord's minfds value), you can find and lift the configuration that actually imposes the limit.

I hope this gives a little hint for more general cases.
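For Elasticsearch specifically, something like this pulls out just the memlock line of the running process (a sketch, assuming a single Elasticsearch JVM on the host):

grep -i 'locked memory' /proc/$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -n1)/limits
# "Max locked memory" should show unlimited in both the soft and hard columns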




Answer 6:


Following this post: on Ubuntu 18.04 with Elasticsearch 6.x, there was no LimitMEMLOCK=infinity entry in /usr/lib/systemd/system/elasticsearch.service.

So adding that entry to the service file and setting MAX_LOCKED_MEMORY=unlimited in /etc/default/elasticsearch did the trick.

The JVM options can be added in the /etc/elasticsearch/jvm.options file.
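Putting it together for the Debian/Ubuntu package layout (a sketch; the 4g heap is only an example, and the paths are those used by the official .deb package):

# /etc/default/elasticsearch
MAX_LOCKED_MEMORY=unlimited

# /usr/lib/systemd/system/elasticsearch.service (or a drop-in via: sudo systemctl edit elasticsearch)
LimitMEMLOCK=infinity

# /etc/elasticsearch/jvm.options
-Xms4g
-Xmx4g

sudo systemctl daemon-reload
sudo systemctl restart elasticsearch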



Source: https://stackoverflow.com/questions/45008355/elasticsearch-process-memory-locking-failed
