We have a web application deployed on a Tomcat server. There are certain scheduled jobs which we run, after which the heap memory peaks and then settles down.
You can use the -Xmx and -Xms settings to adjust the size of the heap. With Tomcat you can set an environment variable before starting:
export JAVA_OPTS="-Xms256m -Xmx512m"
This creates an initial heap of 256MB, with a maximum size of 512MB.
Some more details: http://confluence.atlassian.com/display/CONF25/Fix+'Out+of+Memory'+errors+by+increasing+available+memory
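If you would rather keep these settings with the Tomcat installation than in the shell environment, a common alternative (a sketch, assuming a Tomcat version whose catalina.sh sources $CATALINA_BASE/bin/setenv.sh if it exists) is:
# $CATALINA_BASE/bin/setenv.sh -- picked up automatically by catalina.sh at startup
CATALINA_OPTS="-Xms256m -Xmx512m"
export CATALINA_OPTS
CATALINA_OPTS is only applied when starting the server, so the extra heap is not also passed to the stop command.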
What is likely being observed is the virtual size, not the resident set size, of the Java process(es). If a small footprint is your goal, you may want to omit -Xms (or any minimum heap size argument) and lower -XX:MaxHeapFreeRatio from its default of 70 to allow more aggressive heap shrinkage.
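For example (a sketch only; the 512m ceiling and the ratio values are assumptions to be tuned for your workload, and how readily committed heap is returned to the OS depends on the collector in use):
export JAVA_OPTS="-Xmx512m -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30"
Dropping -Xms lets the heap start small, and the lower free-ratio bounds encourage the JVM to give committed heap back sooner after a scheduled job finishes.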
In the meantime, can you provide more detail on the observation that the Linux memory never decreased? Which metric was being watched?
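To see which number is actually growing, compare the virtual and resident columns for the Tomcat process (<pid> below is a placeholder for the actual process id):
top -p <pid>                      # VIRT = address space reserved, RES = RAM actually in use
ps -o pid,vsz,rss,cmd -p <pid>    # same figures in kB, convenient for logging over time
If only VIRT grows while RES stays flat, there is usually nothing to worry about.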
The memory allocated by the JVM process is not the same as the heap size. The used heap size can go down without an actual reduction in the space allocated by the JVM; the JVM has to receive a trigger indicating it should shrink the heap. As @Xepoch mentions, this is controlled by -XX:MaxHeapFreeRatio.
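One way to watch this from outside the application (a sketch assuming a HotSpot JDK with the standard jstat tool on the PATH; <pid> is a placeholder):
jstat -gc <pid> 5000    # heap capacity (S0C, S1C, EC, OC) and used (S0U, S1U, EU, OU) columns, sampled every 5 s
If the used columns drop after a scheduled job but the capacity columns stay flat, the heap has not shrunk, which matches the behaviour described.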
However, the system admin is complaining that memory usage ('top' on Linux) keeps increasing the more the scheduled jobs are run.
That's because you very likely have some sort of memory leak. System admins tend to complain when they see processes slowly chew up more and more space.
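If a leak is suspected, the standard HotSpot tools can help narrow it down (again, <pid> is a placeholder for the Tomcat process id):
jmap -histo:live <pid>                           # forces a full GC, then lists live object counts per class
jmap -dump:format=b,file=/tmp/heap.hprof <pid>   # full heap dump for analysis in a tool such as Eclipse MAT
Comparing two histograms taken after successive job runs shows which classes keep accumulating instances.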
Any ideas or suggestions would be of great help.
Have you looked at the number of threads? Is your application creating its own threads and sending them off to deadlock and wait idly forever? Are you integrating with any third-party APIs which may be using JNI?
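A quick way to check both suspicions (<pid> is a placeholder; the tools ship with the JDK and standard Linux):
ps -o nlwp= -p <pid>          # number of threads in the process
jstack <pid> > threads.txt    # thread dump; look for an ever-growing thread count or threads stuck in BLOCKED/WAITING
Note that memory allocated by native (JNI) code never appears in the Java heap at all, but it does show up in the process's resident size in top.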