Apache Artemis Master-Slave Configuration: RAM consumption increases continuously to 100%, after which the broker stops working


Question


I have a problem with my Artemis failover cluster (master-slave configuration) and cannot figure out what I am doing wrong.

My systems are two Ubuntu Linux 18 VMs, each with 4 cores, 16 GB RAM and a 120 GB SSD, and I use apache-artemis-2.11.0 with Java 1.8.0_111. I configured "-Xmx14G" in the artemis.profile file to give Artemis enough memory to run.

JAVA_ARGS=" -XX:+PrintClassHistogram -XX:+UseG1GC -Xms512M -Xmx14G -Dhawtio.realm=activemq  -Dhawtio.offline=true -Dhawtio.role=amq -Dhawtio.rolePrincipalClasses=org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal -Djolokia.policyLocation=${ARTEMIS_INSTANCE_ETC_URI}jolokia-access.xml"

The master-slave behaviour is as expected and works well (also with retained messages), and publish/subscribe via TLS certificates or basic authentication works as well.

The problem is that we publish a lot of messages (about 5000 per second) to roughly 700 topics/queues, and after some time (12-15 hours) the RAM usage of the Artemis master VM reaches 100% and the master refuses any new connections (the slave only uses about 800 MB of RAM). If I shut down the master manually, the slave takes over.

I tried to limit the size of each queue/topic with the max-size-bytes parameter and the global-max-size parameter, but the behaviour of the system does not improve.

This is my address setting:

          <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>           
            <max-size-bytes>204800</max-size-bytes> 
            <page-size-bytes>102400</page-size-bytes> 
            <page-max-cache-size>1</page-max-cache-size>  
            <default-last-value-queue>true</default-last-value-queue> 
            <default-last-value-key>_AMQ_LVQ_NAME</default-last-value-key>
            <auto-create-queues>true</auto-create-queues>
            <auto-delete-queues>true</auto-delete-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
            <redistribution-delay>0</redistribution-delay>            
         </address-setting>

...

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>

      </addresses>
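For reference, my understanding of the address-setting values above: max-size-bytes of 204800 means each address may hold at most 200 KiB in memory before the PAGE policy kicks in, page-size-bytes of 102400 means 100 KiB page files on disk, and page-max-cache-size of 1 means at most one page file per address is kept in memory. With roughly 700 addresses that should be about 700 × 200 KiB ≈ 137 MiB of in-memory messages plus about 700 × 100 KiB ≈ 68 MiB of cached pages, far below the 14 GB heap.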

The global-max-size is 1/5 of the Java heap size (see "Tuning the VM" at https://activemq.apache.org/components/artemis/documentation/latest/perf-tuning.html):

<global-max-size>3006477107</global-max-size>
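(For reference, that value is 14 GiB / 5: 14 × 1024³ = 15032385536 bytes, divided by 5 ≈ 3006477107 bytes.)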

Our replication configuration on the master is:

<ha-policy>
    <replication>
       <master>
          <check-for-live-server>true</check-for-live-server>
       </master>
    </replication>
</ha-policy>
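
For completeness, the slave side uses the standard replication counterpart, roughly like this (a sketch, not copied verbatim from the slave's broker.xml; allow-failback is shown as an example):

<ha-policy>
    <replication>
       <slave>
          <!-- allow-failback shown as an example; the exact slave settings may differ -->
          <allow-failback>true</allow-failback>
       </slave>
    </replication>
</ha-policy>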

Currently I do not know what the problem is. Does anyone know what I did wrong?

Source: https://stackoverflow.com/questions/63158612/apache-artemis-master-slave-configuration-ram-consumption-increases-continuousl
