OpenJDK Client VM - Cannot allocate memory

Submitted by 对着背影说爱祢 on 2020-01-02 05:28:32

Question


I am running a Hadoop MapReduce job on a cluster and I am getting this error:

OpenJDK Client VM warning: INFO: os::commit_memory(0x79f20000, 104861696, 0) failed; error='Cannot allocate memory' (errno=12)

There is insufficient memory for the Java Runtime Environment to continue.

Native memory allocation (malloc) failed to allocate 104861696 bytes for committing reserved memory.

What should I do?


Answer 1:


Make sure you have swap space on your machine:

ubuntu@VM-ubuntu:~$ free -m
             total       used       free     shared    buffers     cached
Mem:           994        928         65          0          1         48
-/+ buffers/cache:        878        115
Swap:         4095       1086       3009

Notice the Swap line.

I just encountered this problem on an Elastic Computing instance; it turned out that swap space is not mounted by default.
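If swap is missing, you can add a swap file. A minimal sketch for Ubuntu, assuming a 2 GiB size is appropriate for your machine (the size and path are placeholders, not recommendations):

sudo fallocate -l 2G /swapfile    # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile          # restrict permissions as mkswap expects
sudo mkswap /swapfile             # format the file as swap
sudo swapon /swapfile             # enable it immediately
free -m                           # the Swap line should now be non-zero

To keep the swap file across reboots, add the line "/swapfile none swap sw 0 0" to /etc/fstab.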




Answer 2:


You can try increasing the JVM's memory allocation by passing these runtime parameters.

For example:

java -Xms1024M -Xmx2048M -jar application.jar
  • Xms sets the initial (minimum) heap size
  • Xmx sets the maximum heap size
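Note that a MapReduce task JVM is not launched by your own java command line, so for this error the heap usually has to be set through the job configuration instead. A minimal sketch, assuming Hadoop 2.x property names and a hypothetical driver class (com.example.MyJob) that uses ToolRunner so the -D generic options are parsed:

hadoop jar application.jar com.example.MyJob \
    -D mapreduce.map.java.opts=-Xmx1024m \
    -D mapreduce.reduce.java.opts=-Xmx1024m \
    input output

The -Xmx values here are placeholders; they must fit inside the container sizes YARN grants (see Answer 3).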



Answer 3:


The JVM parameters you are using can make the container overflow its memory allocation.

Check whether the properties:

yarn.nodemanager.resource.memory-mb
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb

in yarn-site.xml match the desired values.
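For reference, these properties are set in yarn-site.xml like so. The values below are illustrative assumptions for a node with roughly 8 GB available to containers, not recommendations; size them to your hardware:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>  <!-- total memory YARN may hand out on this node (assumed) -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>  <!-- smallest container the scheduler will grant (assumed) -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>  <!-- largest container a job may request (assumed) -->
</property>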

For more detail on memory configuration, see:

HortonWorks memory reference

Similar problem

Note: this applies to Hadoop 2.x; if you are running Hadoop 1.x, check the Task attributes instead.



Source: https://stackoverflow.com/questions/26382989/openjdk-client-vm-cannot-allocate-memory
