Why does Redis memory usage not decrease after deleting half of the keys?

生来不讨喜 2020-12-14 08:23

Redis is used to save data but it consumes a lot of memory; its memory usage has reached 52.5%. I deleted half of the keys in Redis, and the delete operations returned success, but the memory usage did not go down.

3 answers
  •  没有蜡笔的小新
    2020-12-14 09:03

    A good starting point is to use the Redis CLI command: MEMORY DOCTOR.
    It can give you very valuable information and point you to the potential issue.
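
    For instance, you can run it straight from the command line (a minimal sketch; add host/port/auth flags as your setup requires):

        # Ask Redis to diagnose its own memory situation (available since Redis 4.0)
        redis-cli MEMORY DOCTOR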

    Some useful links:
    MEMORY DOCTOR command docs
    What is defragmentation and what are the Redis defragmentation configs

    Example output from MEMORY DOCTOR:

    • Peak memory: In the past this instance used more than 150% the memory that is currently using. The allocator is normally not able to release memory after a peak, so you can expect to see a big fragmentation ratio, however this is actually harmless and is only due to the memory peak, and if the Redis instance Resident Set Size (RSS) is currently bigger than expected, the memory will be used as soon as you fill the Redis instance with more data. If the memory peak was only occasional and you want to try to reclaim memory, please try the MEMORY PURGE command, otherwise the only other option is to shutdown and restart the instance.
    • High total RSS: This instance has a memory fragmentation and RSS overhead greater than 1.4 (this means that the Resident Set Size of the Redis process is much larger than the sum of the logical allocations Redis performed). This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. If the problem is a large peak memory, then there is no issue. Otherwise, make sure you are using the Jemalloc allocator and not the default libc malloc. Note: The currently used allocator is "jemalloc-5.1.0".
    • High allocator fragmentation: This instance has an allocator external fragmentation greater than 1.1. This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. You can try enabling 'activedefrag' config option.
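
    If the report points at fragmentation, a few follow-up commands can help (a sketch, assuming Redis 4.0+ built with the jemalloc allocator; adjust connection flags for your setup):

        # Compare logical usage (used_memory) with what the OS actually sees (used_memory_rss);
        # a mem_fragmentation_ratio well above 1.0 means the allocator is holding unreleased pages
        redis-cli INFO memory | grep -E 'used_memory_human|used_memory_rss_human|mem_fragmentation_ratio|mem_allocator'

        # Ask jemalloc to release dirty pages back to the OS (safe to try)
        redis-cli MEMORY PURGE

        # Enable online active defragmentation (requires jemalloc with defrag support)
        redis-cli CONFIG SET activedefrag yes

    If none of that reclaims the memory, restarting the instance (with persistence enabled so data survives) is the last resort, as the doctor report itself notes.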
