Question
I got this error when I tried to run topology in local mode with storm using
mvn compile exec:java -Dexec.classpathScope=compile -Dexec.mainClass=my.Topology
the error is
ERROR backtype.storm.util - Async loop died!
java.lang.OutOfMemoryError: Physical memory usage is too high: physicalBytes = 3G > maxPhysicalBytes = 3G
How can I solve it? I don't know which physical memory I should increase. And if I run the topology in production mode, will this error disappear?
UPDATE
Physical Memory Array
Location: System Board Or Motherboard
Use: System Memory
Error Correction Type: None
Maximum Capacity: 32 GB
Error Information Handle: 0x0019
Number Of Devices: 4
Answer 1:
I'm also using Apache Storm with JavaCV (OpenCV). I have two topologies; the second one has two bolts, one that splits a video into frames and another that detects faces.
I had the same issue:
2017-08-02 11:19:18.578 o.a.s.util Thread-5-OpenCVBolt-executor[3 3] [ERROR]
Async loop died!
java.lang.OutOfMemoryError: Physical memory usage is too high: physicalBytes = 1G > maxPhysicalBytes = 1G
at org.bytedeco.javacpp.Pointer.deallocator(Pointer.java:562) ~[stormjar.jar:?]
at org.bytedeco.javacpp.helper.opencv_core$AbstractCvMemStorage.create(opencv_core.java:1649) ~[stormjar.jar:?]
at org.bytedeco.javacpp.helper.opencv_core$AbstractCvMemStorage.create(opencv_core.java:1658) ~[stormjar.jar:?]
at OpenCVBolt.detect(OpenCVBolt.java:30) ~[stormjar.jar:?]
at OpenCVBolt.execute(OpenCVBolt.java:104) ~[stormjar.jar:?]
at org.apache.storm.daemon.executor$fn__4973$tuple_action_fn__4975.invoke(executor.clj:727) ~[storm-core-1.0.3.jar:1.0.3]
at org.apache.storm.daemon.executor$mk_task_receiver$fn__4894.invoke(executor.clj:459) ~[storm-core-1.0.3.jar:1.0.3]
at org.apache.storm.disruptor$clojure_handler$reify__4409.onEvent(disruptor.clj:40) ~[storm-core-1.0.3.jar:1.0.3]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:453) ~[storm-core-1.0.3.jar:1.0.3]
I was able to solve it. I don't know whether you are using JavaCV to work with video and images, but if so, and you are working with Maven, make sure you use JavaCV version 1.3.2 in your pom.xml:
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>javacv</artifactId>
<version>1.3.2</version>
</dependency>
Then add the following lines to the prepare() method of your bolt to override the maxPhysicalBytes limit (a value of "0" disables the check):
System.setProperty("org.bytedeco.javacpp.maxphysicalbytes", "0");
System.setProperty("org.bytedeco.javacpp.maxbytes", "0");
That worked for me; the error disappeared. I hope this helps you.
UPDATE
@Override
public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
System.setProperty("org.bytedeco.javacpp.maxphysicalbytes", "0");
System.setProperty("org.bytedeco.javacpp.maxbytes", "0");
_collector = collector;
}
Answer 2:
I can't find a "Physical memory usage is too high" message in the OpenJDK 8 or OpenJDK 9 codebase, so I suspect it is coming from a native-code library used by Apache Storm / Spark.
If you could provide a stack trace, that would help track down the culprit.
The following is not "evidence based" ...
I don't know which physical memory I should increase!
It will depend on what the actual cause is. The possibilities include:
- Your Java heap is too small.
- Your JVM cannot expand the heap to the configured max for architectural reasons; e.g. you are running a 32-bit JVM, which doesn't provide a large enough address space.
- The OS has refused to expand your process's memory because it doesn't have enough physical memory or swap space.
- The OS has refused to expand your process's memory because of a "ulimit" or similar resource restriction.
I would expect different diagnostics for each of the above ... except that the diagnostic (i.e. the error message) apparently isn't coming from the JVM itself.
The above problems could be caused / triggered by:
- Various configurable limits could have been set too small
- Using a 32 bit JVM
- Your machine is physically too small; i.e. get more physical memory!
- Your problem is too large.
- Your application is buggy or leaking memory.
If I run the topology in production mode, will this error disappear?
Impossible to predict.
UPDATE - Based on the stack trace, it is clear that the error message comes from the org.bytedeco.javacpp library, specifically the Pointer class (see its source code).
Looking at the source code, the problem is related to a configurable limit called maxPhysicalBytes, which is set via the "org.bytedeco.javacpp.maxphysicalbytes" system property.
Try changing that property.
You can get more info by Googling for "org.bytedeco.javacpp.maxphysicalbytes"
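For illustration, the property can be set either on the command line (e.g. -Dorg.bytedeco.javacpp.maxphysicalbytes=6G) or programmatically, as long as it is set before the first JavaCPP Pointer is allocated, since JavaCPP reads it during initialization. A minimal sketch; the "6G" value here is an arbitrary example, not a recommendation:

```java
public class JavacppLimits {
    public static void main(String[] args) {
        // Must run before any org.bytedeco.javacpp.Pointer is created,
        // because JavaCPP reads these properties when it initializes.
        // "6G" is a hypothetical value; tune it to your workload,
        // or use "0" to disable the check entirely.
        System.setProperty("org.bytedeco.javacpp.maxphysicalbytes", "6G");
        System.setProperty("org.bytedeco.javacpp.maxbytes", "6G");

        System.out.println("maxphysicalbytes = "
                + System.getProperty("org.bytedeco.javacpp.maxphysicalbytes"));
    }
}
```

The command-line form is usually safer in a Storm worker, since worker JVMs may load JavaCPP before your bolt code runs.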
Answer 3:
I had the same issue while working on a DL4J sentiment RNN. After much research, I found a resource suggesting that I add some options to the VM arguments:
-Xms1024m
-Xmx10g
-XX:MaxPermSize=2g
I am sure that if you adjust the values of -Xms, -Xmx, or -XX:MaxPermSize to your computer's spec, you'll find what works for your machine's memory.
I had to add this to my VM options:
-Xms5024m -Xmx10g -XX:MaxPermSize=6g
I use a fairly high-spec PC, and the options above work for me and make my code run faster.
I hope this helps
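As a quick sanity check that the JVM actually picked up the heap flags, you can print the heap limits at startup. A minimal sketch; the exact numbers printed depend on your -Xmx setting. (Note also that -XX:MaxPermSize applies only to Java 7 and earlier; Java 8+ ignores it, since the permanent generation was replaced by metaspace, sized with -XX:MaxMetaspaceSize.)

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxHeapMb = rt.maxMemory() / (1024 * 1024);   // roughly the -Xmx value
        long totalMb   = rt.totalMemory() / (1024 * 1024); // heap currently reserved
        System.out.println("max heap:     " + maxHeapMb + " MB");
        System.out.println("current heap: " + totalMb + " MB");
    }
}
```

Keep in mind these flags only bound the Java heap; JavaCPP's native allocations sit outside it, which is why the heap can look fine while physical memory is exhausted.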
Answer 4:
This most probably means you are running out of memory on the server. If it's a Linux box, run this to check per-process memory usage:
ps aux --sort -rss
This sorts the processes by RAM consumption (the RSS value).
RSS: resident set size, the non-swapped physical memory that a task has used (in kilobytes).
Example:
zhossain@zhossain-linux1:~$ ps aux --sort -rss
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
zhossain 31934 98.2 8.2 1941088 1328536 pts/4 Rl+ 16:13 1:48 python /local/mnt/workspace/
root 2419 0.1 0.5 241156 88100 ? Sl Apr05 136:00 splunkd -h 127.0.0.1 -p 8089 restart
root 1544 0.1 0.3 740048 60820 ? Ssl Feb15 266:43 /usr/sbin/automount
root 2486 0.0 0.1 331680 28240 ? S 11:19 0:11 smbd -F
root 867 0.0 0.1 257000 27472 ? Ssl Feb15 0:22 rsyslogd
colord 1973 0.0 0.0 304988 13900 ? Sl Feb15 1:37 /usr/lib/colord/colord
Source: https://stackoverflow.com/questions/44598965/physical-memory-usage-is-too-high