Cassandra: NoSpamLogger log Maximum memory usage reached


Question


Every 15 min I see this log entry:

2018-07-30 10:29:57,529 INFO [pool-1-thread-2] NoSpamLogger.java:91 log Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB

I've been reading through this related question, but I can't see anything wrong with my tables: NoSpamLogger.java Maximum memory usage reached Cassandra

I have 4 large tables:

iot_data/derived_device_data histograms
Percentile  SSTables     Write Latency      Read Latency    Partition Size        Cell Count
                          (micros)          (micros)           (bytes)
50%             0.00              0.00              0.00               642                12
75%             0.00              0.00              0.00             17084               642
95%             0.00              0.00              0.00            263210             11864
98%             0.00              0.00              0.00           1629722             61214
99%             0.00              0.00              0.00           1955666             88148
Min             0.00              0.00              0.00               150                 0
Max             0.00              0.00              0.00           4055269            152321


iot_data/derived_device_data_by_year histograms
Percentile  SSTables     Write Latency      Read Latency    Partition Size        Cell Count
                          (micros)          (micros)           (bytes)
50%             0.00              0.00              0.00             51012              1597
75%             0.00              0.00              0.00           2346799             61214
95%             0.00              0.00              0.00          52066354           1629722
98%             0.00              0.00              0.00          52066354           1629722
99%             0.00              0.00              0.00          52066354           1629722
Min             0.00              0.00              0.00              6867               216
Max             0.00              0.00              0.00          52066354           1629722

iot_data/device_data histograms
Percentile  SSTables     Write Latency      Read Latency    Partition Size        Cell Count
                          (micros)          (micros)           (bytes)
50%             0.00             29.52              0.00              2299               149
75%             0.00             42.51              0.00            182785              9887
95%             0.00             61.21              0.00           2816159            152321
98%             0.00             61.21              0.00           4055269            219342
99%             0.00             61.21              0.00          17436917           1131752
Min             0.00             17.09              0.00                43                 0
Max             0.00             61.21              0.00          74975550           4866323

iot_data/device_data_by_week_sensor histograms
Percentile  SSTables     Write Latency      Read Latency    Partition Size        Cell Count
                          (micros)          (micros)           (bytes)
50%             0.00             35.43              0.00              8239               446
75%             0.00             51.01              0.00            152321              8239
95%             0.00             61.21              0.00           2816159            152321
98%             0.00             61.21              0.00           4055269            219342
99%             0.00             61.21              0.00          12108970            785939
Min             0.00             20.50              0.00                43                 0
Max             0.00             61.21              0.00          74975550           4866323

Although I know that the derived_device_data / derived_device_data_by_year tables need some refactoring, none of them is close to the 100MB mark. Why am I getting this log entry though?
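
(For reference, partition-size histograms like the ones above are normally produced with nodetool tablehistograms; the exact commands aren't shown in the question, but they would look roughly like this:)

nodetool tablehistograms iot_data device_data
nodetool tablehistograms iot_data derived_device_data_by_year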

EDIT: I noticed the same log entry on my test systems, which run with almost no data but with the same configuration as prod: 12GB RAM, Cassandra 3.11.2.


Answer 1:


You may need to check the value of vm.max_map_count and the settings for swap. If swap is enabled, it could affect the performance of both systems. The default value of vm.max_map_count could also be too low and affect both Cassandra and Elasticsearch (see the recommendation for ES).
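
A minimal sketch of checking both on Linux (the target values are assumptions based on commonly cited Cassandra/Elasticsearch recommendations, not figures from this answer):

# check and raise the mmap limit (Cassandra docs suggest 1048575, ES suggests at least 262144)
sysctl vm.max_map_count
sudo sysctl -w vm.max_map_count=1048575

# check whether swap is in use and disable it
swapon --show
sudo swapoff -a    # also remove/comment the swap entry in /etc/fstab so it stays off after reboot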

Also, you may need to explicitly set the heap size for Cassandra and file_cache_size_in_mb - with 12GB RAM, Cassandra will use 1/4 of it, i.e. 3GB, for the heap, and file_cache_size_in_mb will be ~750MB (1/4 of the heap) - and that could be too low.
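
The cap shown in the log message is the buffer pool limit controlled by file_cache_size_in_mb. A minimal sketch of setting it explicitly in cassandra.yaml (1024 is an illustrative value for a 12GB machine, not a figure from this answer); the heap itself is set in jvm.options, see the sketch after answer 2:

# cassandra.yaml - off-heap buffer pool / chunk cache limit; uncomment and raise as needed
file_cache_size_in_mb: 1024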

P.S. Because it's logged at INFO level, it's considered harmless. See https://issues.apache.org/jira/browse/CASSANDRA-12221 & https://issues.apache.org/jira/browse/CASSANDRA-11681




Answer 2:


Not sure about this specific problem, but maybe try checking the jvm.options file in your Cassandra config dir. You might want to increase Xmx or other settings.
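
In Cassandra 3.11 that means uncommenting and setting the Xms/Xmx lines in conf/jvm.options (4G below is an illustrative assumption, not a recommendation from this answer), then confirming the heap after a restart with nodetool info:

# conf/jvm.options
-Xms4G
-Xmx4G

# after restarting the node
nodetool info | grep "Heap Memory"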



Source: https://stackoverflow.com/questions/51591253/cassandra-nospamlogger-log-maximum-memory-usage-reached
