The Redis platform offers a variety of self-service tickets for deleting data (dozens of tickets per day).
One afternoon, a business team suddenly came to me and said that after doing a prefix deletion, their cluster's latency had tripled. The ticket showed that a prefix deletion had been performed, shrinking the dataset from 290 million keys to 60 million.
Puzzled, I quickly went through the relevant metrics and came to the following conclusions:
Overall: traffic had not increased, nothing had changed on the calling side, a large amount of data had been deleted, and the CPU rise was tolerable (though the business might not tolerate it). I immediately decided to turn off defragmentation, and the service recovered instantly.
I thought I understood the fragmentation rate and the related practices fairly well, yet there was still a problem, so I decided to study it carefully and share the results with you.
To better manage and reuse memory, allocators generally hand out memory in fixed-size blocks. By default, Redis uses jemalloc, which divides memory into three size ranges: small, large, and huge, each of which is subdivided into several block-size units:
That is, if you request 20 bytes, jemalloc will actually allocate 32 bytes. This allocation strategy inevitably has a problem: some scattered small free spaces cannot be reused (memory fragmentation). For example (for demonstration purposes):
(1) Starting from address 0, request 6 bytes, then 3 bytes, 3 bytes, and 3 bytes. As the figure shows, 3 × 8 bytes were allocated, but only (6 + 3 + 3 + 3) bytes are actually used, leaving 9 bytes free (i.e., fragments; of course, if more requests come later, they may fill these gaps).
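The rounding behavior described above can be sketched in a few lines. Note that the size-class list here is an illustrative subset of jemalloc's small bins, not the real allocator's full table:

```python
# Simplified model of size-class rounding (not jemalloc itself):
# each request is rounded up to the next size class.
SIZE_CLASSES = [8, 16, 32, 48, 64, 80, 96, 112, 128]  # illustrative subset

def allocated_size(request: int) -> int:
    """Return the size class actually allocated for `request` bytes."""
    for cls in SIZE_CLASSES:
        if request <= cls:
            return cls
    raise ValueError("request larger than the modeled size classes")

# The example from the text: a 20-byte request gets a 32-byte block.
print(allocated_size(20))  # 32
# And each of the small requests above lands in an 8-byte block.
print(allocated_size(6), allocated_size(3))  # 8 8
```

The unused tail of each block (e.g., 12 bytes of a 32-byte block holding a 20-byte object) is exactly the internal waste that shows up as fragmentation.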
Typically, memory fragmentation tends to increase in the following situations:
For example, after the deletion described in the first section, the fragmentation rate rises dramatically.
This situation generally occurs when the operating system has swapped Redis memory out to disk (swap). It must be watched closely: Redis performance will drop sharply, and the instance may even hang half-dead.
The correct approach is to turn off swap (part of Redis configuration optimization on Linux); in most cases, it is far better for the process to die outright than to hang half-dead.
This situation indicates that memory is not being fully utilized: the higher the fragmentation rate, the more serious the waste (consider the difference between 1 cluster and 10,000 clusters). Suppose we have a cluster that is constantly full at 100 GB (💰 one master and one replica); the cost at different fragmentation rates is as follows:
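The arithmetic behind that cost table can be sketched as follows. This is my own back-of-the-envelope model, not Redis output; it simply scales logical memory by the fragmentation ratio:

```python
# Rough cost illustration for a 100 GB dataset with one master and one
# replica (so 200 GB of logical memory in total).
DATA_GB = 100 * 2  # master + replica

def physical_gb(frag_ratio: float) -> float:
    """Approximate RSS: logical memory times the fragmentation ratio."""
    return DATA_GB * frag_ratio

for ratio in (1.0, 1.3, 1.5, 2.0):
    waste = physical_gb(ratio) - DATA_GB
    print(f"ratio={ratio}: physical={physical_gb(ratio):.0f} GB, wasted={waste:.0f} GB")
```

At a ratio of 2.0, the cluster pays for as much wasted memory as it does for actual data, which is why scale matters so much here.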
Given Redis's memory allocation model, some fragmentation is inevitable; how high the rate must be before it needs to be dealt with depends on the specific situation. Some “best practices” are given at the end of this article.
Redis provides the info memory command to view the relevant memory statistics:
There are three indicators that describe the fragmentation rate, with the following meanings:
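The headline indicator, mem_fragmentation_ratio, is essentially RSS divided by logically used memory (the exact fields INFO reports vary slightly by version). A minimal restatement:

```python
# mem_fragmentation_ratio as reported by INFO memory is essentially
# used_memory_rss / used_memory.
def mem_fragmentation_ratio(used_memory_rss: int, used_memory: int) -> float:
    return used_memory_rss / used_memory

# ratio > 1: fragmentation (RSS exceeds logically used memory)
# ratio < 1: part of Redis memory has likely been swapped out
print(mem_fragmentation_ratio(150, 100))  # 1.5
```

This is why a ratio below 1 points at swap (the previous section) while a ratio well above 1 points at fragmentation or waste.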
Summary: the methods above are either impractical or treat the symptoms without curing the root cause.
Since the release of Redis 4.0, active defragmentation has been available.
A picture is worth a thousand words: 24 bytes become 16 bytes after defragmentation.
Redis provides an activedefrag configuration to enable defragmentation (off by default)
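Turning it on is a one-line change, either in redis.conf or at runtime. Note that activedefrag only works when Redis is built with jemalloc:

```
# In redis.conf (off by default; requires a jemalloc build):
activedefrag yes

# Or at runtime, without restarting:
redis-cli config set activedefrag yes
```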
To better control and manage defragmentation, Redis also provides several parameters:
Three conditions must all be met at the same time:
This parameter is very confusing: it looks like the upper limit of defragmentation relative to active-defrag-threshold-lower, but it is not; it is an amplification factor:
INTERPOLATE is defined as follows:
Rearranged, the formula is:
Applying the formula:
You can see that the larger active_defrag_threshold_upper is, the smaller the overall value, which is the estimated CPU percentage to spend on defragmentation.
active-defrag-cycle-min and active-defrag-cycle-max are the minimum and maximum CPU time for each defragmentation cycle, and the cpu_pct calculated by the formula above is still clamped by these two parameters:
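The interpolation and clamping can be restated in Python; the structure follows Redis's computeDefragCycles, with defaults that are approximately the Redis 6 values (check your version's redis.conf, since these defaults changed between versions):

```python
# Restatement of the INTERPOLATE/LIMIT logic used to compute the CPU
# budget for defragmentation; parameter names follow redis.conf.
def interpolate(x, x1, x2, y1, y2):
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

def limit(y, lo, hi):
    return max(lo, min(hi, y))

def defrag_cpu_pct(frag_pct,
                   threshold_lower=10, threshold_upper=100,
                   cycle_min=1, cycle_max=25):
    """Estimated CPU percentage to spend on defragmentation."""
    cpu = interpolate(frag_pct, threshold_lower, threshold_upper,
                      cycle_min, cycle_max)
    return limit(cpu, cycle_min, cycle_max)

# At 50% fragmentation with the defaults above:
print(defrag_cpu_pct(50))
# Raising threshold_upper lowers the CPU budget, as the text notes:
print(defrag_cpu_pct(50, threshold_upper=200))
```

This makes the "amplification factor" behavior visible: threshold_upper sits in the denominator of the interpolation, so increasing it reduces the CPU spent at any given fragmentation level, and cycle-min/cycle-max always clamp the final value.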
Note that the default values of these two parameters differ across Redis versions:
Defragmentation requires scanning the Redis dictionary. If, during the scan, a set/hash/zset/list/stream is found to have more than 1000 elements, that key is put into a separate queue and processed later, mainly to prevent a defragmentation cycle from timing out (if a single scan handled many large keys at once, it could exceed its internal time limit).
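The "defer big keys" idea can be sketched as follows. This is a simplified illustration, not Redis source; the threshold of 1000 elements matches the text:

```python
# Simplified sketch of the "defrag later" queue: keys whose collections
# hold more than 1000 elements are deferred and processed afterwards in
# bounded steps, so one scan never spends too long on a single huge key.
DEFRAG_LATER_THRESHOLD = 1000

def scan_for_defrag(keys: dict):
    """keys maps key name -> element count; returns (done_now, deferred)."""
    done_now, deferred = [], []
    for name, num_elements in keys.items():
        if num_elements > DEFRAG_LATER_THRESHOLD:
            deferred.append(name)   # handled later, in small increments
        else:
            done_now.append(name)   # defragmented inline during the scan
    return done_now, deferred

print(scan_for_defrag({"small:set": 10, "big:hash": 50000}))
```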
Related Codes:
defragLater:
Defragmentation (activeDefragCycle) runs inside Redis's time events, along the path serverCron -> databasesCron -> activeDefragCycle.
Related Codes:
This is tied to hz; by default it executes ten times per second.
For each db, a scan is performed with the following callbacks:
The logic of defragDictBucketCallback:
The logic of defragScanCallback:
defragKey defragments the sds key and the value of each type in the dictEntry (complex types are handled element by element in a loop).
activeDefragSds defragments SDS strings that may be fragmented.
activeDefragAlloc is the core method: it calls jemalloc-related functions to decide whether an allocation is worth moving, and if so, reallocates it (allocate new, release old).
Here `int je_get_defrag_hint(void* ptr, int *bin_util, int *run_util)` is a jemalloc library function used to decide whether an allocation is worth defragmenting.
From the perspective of each key: whether it was hit by defragmentation.
From the perspective of each value (for complex data structures, one key may be counted multiple times): whether it was hit by defragmentation.
For example, a hash key with 200 elements, some of which need defragmenting:
Best “best” practice
The focus here is on active-defrag-cycle-min and active-defrag-cycle-max, because improper settings can affect Redis performance or availability.
Again, note that the default values of these two parameters differ across Redis versions:
Best “best” practice (be sure not to use the defaults)
My production configuration on Redis 6+ is active-defrag-cycle-min: 5 and active-defrag-cycle-max: 10.
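Expressed as redis.conf directives (these can also be applied at runtime with CONFIG SET):

```
# The author's production settings for Redis 6+:
active-defrag-cycle-min 5
active-defrag-cycle-max 10
```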
Best “best” practice
(1) Redis 4: the experimental version. Aside from the big-key problem, it is fine overall; if you are already using it, there is no particular need to worry about its stability.
(2) Redis 5: the second iteration of defragmentation (it solved the big-key problem, for example),
but it is still marked as experimental:
(3) Redis 6 and later: officially released.
Best “best” practice
(1) Redis 3 has too few samples to be meaningful.
(2) Redis 4 and 6 both run defragmentation unless there is a special reason not to.
(3) Redis 6's fragmentation rate is indeed lower than Redis 4's.
(4) The bundled jemalloc version differs across Redis versions, and my guess is that it keeps getting better (I don't know this area well and am still learning; corrections from experts are welcome).
Best “best” practice
Besides the defragmentation configuration, fragmentation can also be reduced with the following command.
But note that, unlike defrag, it does not act on the same memory: memory purge mainly asks the allocator to release dirty pages back to the OS:
In principle and in real-world scenarios, the effect of memory purge is not as obvious as defrag's; the two can be combined if necessary.
This comes down to scale: if the scale is large, can you really afford to turn a blind eye?
Wrong: the literal interpretation.
Right:
This parameter is very confusing: it looks like the upper limit of defragmentation relative to active-defrag-threshold-lower, but it is actually an amplification factor. Many explanations online get this wrong.
Wrong: treating two different things as the same thing.
Right: they are not exactly equal; see the description in the source code.
Welcome to follow my official account, which focuses on Redis development and operations problems and on steering around all the pitfalls.
If you want to repost this article, please contact the author; writing it was not easy.