## Redis Configuration Explained

# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
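# Illustrative example of the difference between the two notations above
# (the maxmemory directive used here is covered later in this file):
# maxmemory 100mb   => 104857600 bytes (100*1024*1024)
# maxmemory 100m    => 100000000 bytes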

################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# Say you have a standard configuration template that applies to all of your
# Redis servers, but a few per-server settings still need to be customized;
# "include" lets you pull in those extra configuration files, which is very handy.
#
# Note, however, that the "include" option itself cannot be rewritten by the
# CONFIG REWRITE command. Since Redis always uses the last processed line as the
# value of a configuration directive, put includes at the very beginning of this
# file to avoid overwriting configuration changes made at runtime; if, on the
# contrary, you want includes to override other settings, put them at the end.
# include /path/to/local.conf
# include /path/to/other.conf
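# A minimal sketch of the layering this enables (paths below are hypothetical):
# /etc/redis/common.conf  - shared template, included first by every instance
# /etc/redis/node-a.conf  - starts with "include /etc/redis/common.conf", then
#                           overrides per-server settings such as port or pidfile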

################################ GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
# Set to yes here so Redis runs in the background.
daemonize yes

# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
# When running several Redis instances on one host, give each its own pid file
# and port.
pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
# The default port is 6379; in production it is advisable to change it for a
# little extra safety. If set to 0, Redis will not listen on a TCP socket.
port 9966
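# Quick check that the instance answers on the non-default port (illustrative):
# redis-cli -h 127.0.0.1 -p 9966 ping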

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
# This parameter sets the length of the completed-connection queue (after the
# TCP three-way handshake). Under high concurrency with slow clients, raise it
# to avoid slow-client connection issues. The Linux kernel silently truncates
# it to the value of /proc/sys/net/core/somaxconn; the default here is 511,
# while the Linux default for somaxconn is only 128, so tune both settings
# together to get the effect you expect.
#
tcp-backlog 511
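# Illustrative only: on Linux the kernel limits usually need raising as well,
# otherwise the value above is silently capped, e.g.:
# sysctl -w net.core.somaxconn=511
# sysctl -w net.ipv4.tcp_max_syn_backlog=511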

# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# For safety Redis is usually bound to 127.0.0.1 only, but sometimes hosts on
# the same network segment need to connect too. You can bind 0.0.0.0 and use
# iptables to control access, or set a Redis password to keep the data safe.

# If bind is not set, Redis handles requests from every interface; setting it
# is recommended in production. A common misconception is that bind restricts
# which remote IPs may connect: it does not. Restricting remote clients is a
# job for iptables, e.g.:
# -A INPUT -s 10.10.1.0/24 -p tcp -m state --state NEW -m tcp --dport 9966 -j ACCEPT
# What bind actually takes is an IP of a local network interface of the Redis
# host (127.0.0.1 is fine too).
# If you bind an address that does not belong to this host, Redis fails with:
# Creating Server TCP listening socket xxx.xxx.xxx.xxx:9966: bind: Cannot assign requested address

# bind 127.0.0.1
bind 127.0.0.1 10.10.1.3

# With the bind line above, netstat -anp|grep 9966 shows both addresses bound,
# 10.10.1.3 being the IP of the server's network interface:
# tcp 0 0 10.10.1.3:9966 0.0.0.0:* LISTEN 11188/redis-server
# tcp 0 0 127.0.0.1:9966 0.0.0.0:* LISTEN 11188/redis-server


# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
# Idle timeout for client connections; the default 0 means never time out.
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
# Log level; the four possible values are:
# debug (suitable for development or testing)
# verbose (lots of rarely useful info, but tidier than debug)
# notice (suitable for production)
# warning (only important messages are logged)
loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
# Path of the log file. By default Redis logs to standard output; it can also
# be pointed at /dev/null to discard logs entirely.
logfile "/data/logs/redis/redis.log"

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# Set this to yes to log to the system logger; the other syslog parameters can
# optionally be tuned as needed.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
# Number of databases, 16 by default. The default database is DB 0 and valid
# ids range from 0 to databases-1.
databases 16
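# Example (redis-cli): switch the current connection to another DB and back:
# SELECT 1
# SELECT 0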

################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
# A save is triggered when any one of the three conditions below is met: at
# least 1 key changed within 900 seconds, at least 10 keys changed within 300
# seconds, or at least 10000 keys changed within 60 seconds.

# if (at least 10000 keys changed within 60 seconds) {
#     take an RDB snapshot
# } else if (at least 10 keys changed within 300 seconds) {
#     take an RDB snapshot
# } else if (at least 1 key changed within 900 seconds) {
#     take an RDB snapshot
# }

save 900 1
save 300 10
save 60 10000
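# The save points can also be inspected or changed at runtime (illustrative):
# redis-cli -p 9966 CONFIG GET save
# redis-cli -p 9966 CONFIG SET save "900 1 300 10 60 10000"
# redis-cli -p 9966 CONFIG SET save ""      # disables RDB snapshots entirely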

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
# By default, if the latest background save failed, Redis stops accepting
# writes. This is a hard way of making the user aware that data is not being
# persisted to disk correctly; otherwise the disaster might go unnoticed.
#
# Once the background saving process works again, Redis automatically accepts
# writes again.
#
# If you have solid monitoring in place you may not want this behaviour; in
# that case change it to no.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
# Whether to compress string objects with LZF when dumping the .rdb file.
# The default is yes. Set it to no if you want to save some CPU in the saving
# child process, at the cost of a likely larger dump file.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
# Whether to write and verify the CRC64 checksum when saving and loading RDB
# files. Enabled by default.
rdbchecksum yes

# The filename where to dump the DB
# Name of the dump file.
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
# Directory where the dump file is stored. The path is configured separately
# from the file name because during a save Redis first writes the current
# dataset state to a temporary file and, once the save completes, renames that
# temporary file to the file configured above; both the temporary file and the
# final dump live in this directory. The default is ./
dir /data/data/redis/

################################# REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition slaves automatically try to reconnect to masters
# and resynchronize with them.
#
# Makes this instance a slave of another Redis server: when this node acts as
# a slave, set the IP and port of its master here.
# slaveof <masterip> <masterport>
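# A minimal replica sketch (addresses are hypothetical): on the slave, either put
# the directive in its config file or issue it at runtime, then check the link:
# slaveof 10.10.1.3 9966
# redis-cli -p 6380 SLAVEOF 10.10.1.3 9966
# redis-cli -p 6380 INFO replication      # role:slave, master_link_status:up once synced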

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# Password used when connecting to the master: when this node acts as a slave,
# set the master's password here.
# masterauth <master-password>

# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
# When the slave loses its connection with the master, or while replication is
# still in progress: if this is set to yes the slave keeps serving client
# requests (possibly with stale data); if set to no it answers every command
# except INFO and SLAVEOF with the error "SYNC with master in progress".
slave-serve-stale-data yes

# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
# Whether slave nodes serve reads only.
slave-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# Slaves ping the server at a predefined interval, which you can change here.
# The default is 10 seconds.
# repl-ping-slave-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# Replication timeout. This value must be larger than repl-ping-slave-period.
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
# Whether to disable TCP_NODELAY on the socket used to sync data to slaves.
# With "yes" (TCP_NODELAY disabled) the TCP stack merges small packets before
# sending, which lowers the packet count and bandwidth between master and
# slaves but increases the time it takes for data to appear on the slave.
# With "no" (TCP_NODELAY enabled) small packets are not delayed, so replication
# latency drops at the cost of more bandwidth. Normally "no" is the right
# choice to keep sync latency low, but "yes" can help when the network between
# master and slaves is already heavily loaded.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# Size of the replication backlog. The backlog is a buffer that accumulates
# writes while slaves are disconnected, so that when a slave reconnects a full
# resync is usually not needed: a partial resync that passes only the data the
# slave missed while disconnected is enough.
#
# The bigger the replication backlog, the longer a slave can stay disconnected
# and still be able to perform a partial resynchronization later.

# repl-backlog-size 1mb

# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# After the master has had no connected slaves for the configured number of
# seconds, the backlog buffer is freed. A value of 0 means it is never released.
# repl-backlog-ttl 3600

# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
# Priority of this slave. In a deployment with more than one slave, when the
# master goes down Redis Sentinel promotes the slave with the lowest priority
# value. The lower the value, the more likely the slave is to be chosen; note
# that a slave with priority 0 will never be promoted to master automatically.
slave-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
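# To see how many slaves the master currently counts as connected (illustrative):
# redis-cli -p 9966 INFO replication     # look at the connected_slaves field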

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# Password required to connect to Redis. Since Redis is very fast, an outside
# attacker can try on the order of 150k passwords per second, so pick a strong
# password to resist brute forcing.
requirepass set_enough_strong_passwd
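# Clients must then authenticate before issuing other commands (the password
# below is just the placeholder used above):
# redis-cli -p 9966 -a set_enough_strong_passwd ping
# or, on an already open connection:
# AUTH set_enough_strong_passwd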

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
# Rename a few dangerous commands to effectively disable them for regular clients.
rename-command FLUSHALL ZYzv6FOBdwflW2nX
rename-command CONFIG aI7zwm1GDzMMrEi
rename-command EVAL S9UHPKEpSvUJMM
rename-command FLUSHDB D60FPVDJuip7gy6l
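# After these renames the original names stop working; for example (illustrative),
# flushing the current DB now requires the obfuscated name:
# redis-cli -p 9966 -a set_enough_strong_passwd D60FPVDJuip7gy6l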

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# Maximum number of simultaneous client connections, 10000 by default.
# Once the limit is reached Redis stops accepting new connections and the
# connecting client receives an error.
# maxclients 10000

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# Maximum amount of memory Redis may use. When the limit is reached, Redis
# evicts keys according to the maxmemory-policy selected below (for example,
# volatile-ttl removes the keys whose expire time is closest). If no key can
# be removed under the chosen policy (or the policy is noeviction), Redis
# returns errors to write commands such as SET while still serving reads such
# as GET. Setting maxmemory is mostly useful when Redis is used as a
# memcached-like cache.
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
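# A typical cache-style sketch (values are illustrative, not recommendations):
# maxmemory 2gb
# maxmemory-policy allkeys-lru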

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

# By default Redis dumps the dataset to disk asynchronously according to the
# save points above. That dump is expensive and should not run too often, so a
# power cut or pulled plug can lose a relatively large window of writes.
# Redis therefore offers a more durable persistence and recovery mode: with
# append only mode enabled, every write request is appended to appendonly.aof,
# and on restart Redis replays that file to rebuild its previous state.
# The appendonly.aof file can grow large, so Redis also supports the
# BGREWRITEAOF command to compact it. Append only mode is disabled by default.

appendonly no

# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# How often appendonly.aof is fsynced; the options are always, everysec and no,
# with everysec (sync once per second) being the default. "always" syncs after
# every write, "everysec" accumulates writes and syncs once per second, and
# "no" leaves it to the operating system to flush the data to disk.
# appendfsync always
# appendfsync everysec
# appendfsync no
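# For example, to switch this instance to AOF with per-second fsync (a common
# compromise between speed and safety) you would set:
# appendonly yes
# appendfsync everysec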

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
# Whether to skip fsync while a background AOF rewrite (or RDB save) is in
# progress. The default is no, meaning fsync is still called even while a child
# process is flushing to disk. While Redis writes the RDB file or rewrites the
# AOF file in the background there is a lot of disk I/O, and on some Linux
# systems calling fsync during that time may block for a long time.
no-appendfsync-on-rewrite yes

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
# Condition for automatically rewriting the AOF file. The default of 100 means
# that when the current AOF file has grown by more than 100% compared with its
# size after the last rewrite, a background rewrite is triggered. A value of 0
# disables automatic rewrites.
auto-aof-rewrite-percentage 100

# Minimum AOF file size required to trigger a rewrite. If the AOF file is
# smaller than this value, no automatic rewrite happens even when the growth
# percentage above is reached; both conditions must hold for a rewrite to fire.
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

################################ LUA SCRIPTING ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
# Maximum execution time of a Lua script in milliseconds; 0 or a negative value
# means unlimited execution time. The default is 5000.
lua-time-limit 5000

################################ REDIS CLUSTER ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000

# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
# in order to try to give an advantage to the slave with the best
# replication offset (more data from the master processed).
# Slaves will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the slave will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10

# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
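# Typical inspection commands (redis-cli, illustrative):
# SLOWLOG GET 10     # show the ten most recent slow entries
# SLOWLOG LEN        # number of entries currently stored
# SLOWLOG RESET      # clear the log and reclaim its memory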

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
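# If you do need it, the monitor can be enabled and queried at runtime (illustrative):
# CONFIG SET latency-monitor-threshold 100
# LATENCY LATEST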

############################# EVENT NOTIFICATION ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
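# To try Example 2 end to end (illustrative): enable the classes at runtime and
# subscribe to the expired-key channel of DB 0:
# redis-cli -p 9966 CONFIG SET notify-keyspace-events Ex
# redis-cli -p 9966 SUBSCRIBE __keyevent@0__:expired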

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
# A hash is stored in the special memory-saving encoding as long as it has no
# more than the configured number of entries and no entry exceeds the
# configured size; the two thresholds are set here.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
# Up to how many nodes a list may have to be stored in the compact,
# pointer-free encoding, and how small each node's value must be (in bytes)
# for that encoding to be used.
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
# A set whose members are all integers uses the compact encoding as long as it
# has no more than this many entries.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:

# Up to how many entries a sorted set may have to be stored in the compact,
# pointer-free encoding, and how small each element must be (in bytes) for
# that encoding to be used.
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
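# You can observe which encoding a value is currently using (illustrative):
# redis-cli -p 9966 RPUSH mylist a b c
# redis-cli -p 9966 OBJECT ENCODING mylist     # "ziplist" while the list stays small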

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.

# Redis spends 1 millisecond out of every 100 milliseconds of CPU time actively
# rehashing its main hash table, which helps keep memory usage down. If your
# workload has hard real-time requirements and an occasional 2 millisecond
# delay on replies is unacceptable, set this to no; otherwise leave it at yes
# so memory is freed as quickly as possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
# Whether the AOF rewrite child syncs the file incrementally, the default being
# yes: during a rewrite the file is fsynced every 32 MB of data, which spreads
# the disk writes of a large AOF file out and avoids big latency spikes.
aof-rewrite-incremental-fsync yes


# Redis data storage
Redis storage has three parts: in-memory data, on-disk dumps, and the log (AOF) file; three parameters in this file control them.
save <seconds> <changes> - each save directive states that after the given number of seconds with at least the given number of updates, the data is synced to the dump file. Several conditions can be combined; three are configured by default.
appendonly yes/no - whether to log every update operation. If it is not enabled, a power failure can lose the data written since the last dump, because Redis only syncs the dump file according to the save conditions above, so some data may exist only in memory for a while.
appendfsync no/always/everysec - no leaves flushing the cached data to disk to the operating system, always calls fsync after every update, and everysec syncs once per second.
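A minimal sketch combining the three (illustrative values only):
save 900 1
appendonly yes
appendfsync everysec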
##redis配置详解
# Redis configuration file example.## Note that in order to read the configuration file, Redis must be# started with the file path as first argument:## ./redis-server /path/to/redis.conf
# Note on units: when memory size is needed, it is possible to specify# it in the usual form of 1k 5GB 4M and so forth:## 1k => 1000 bytes# 1kb => 1024 bytes# 1m => 1000000 bytes# 1mb => 1024*1024 bytes# 1g => 1000000000 bytes# 1gb => 1024*1024*1024 bytes## units are case insensitive so 1GB 1Gb 1gB are all the same.
################################## INCLUDES ##################################################################### 包含 ###################################
# Include one or more other config files here. This is useful if you# have a standard template that goes to all Redis servers but also need# to customize a few per-server settings. Include files can include# other files, so use this wisely.## Notice option "include" won't be rewritten by command "CONFIG REWRITE"# from admin or Redis Sentinel. Since Redis always uses the last processed# line as value of a configuration directive, you'd better put includes# at the beginning of this file to avoid overwriting config change at runtime.## If instead you are interested in using includes to override configuration# options, it is better to use include as the last line.## 假如说你有一个可用于所有的 redis server 的标准配置模板,# 但针对某些 server 又需要一些个性化的设置,# 你可以使用 include 来包含一些其他的配置文件,这对你来说是非常有用的。## 但是要注意哦,include 是不能被 config rewrite 命令改写的# 由于 redis 总是以最后的加工线作为一个配置指令值,所以你最好是把 include 放在这个文件的最前面,# 以避免在运行时覆盖配置的改变,相反,你就把它放在后面# include /path/to/local.conf# include /path/to/other.conf
################################ GENERAL ##################################################################### 常用 #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.# 默认情况下 redis 不是作为守护进程运行的,如果你想让它在后台运行,你就把它改成 yes。# 当redis作为守护进程运行的时候,它会写一个 pid 到 /var/run/redis.pid 文件里面。daemonize yes
# When running daemonized, Redis writes a pid file in /var/run/redis.pid by# default. You can specify a custom pid file location here.# 当 Redis 以守护进程的方式运行的时候,Redis 默认会把 pid 文件放在/var/run/redis.pid# 可配置到其他地址,当运行多个 redis 服务时,需要指定不同的 pid 文件和端口# 指定存储Redis进程号的文件路径pidfile /var/run/redis.pid
# Accept connections on the specified port, default is 6379.# If port 0 is specified Redis will not listen on a TCP socket.# 端口,默认端口是6379,生产环境中建议更改端口号,安全性更高# 如果你设为 0 ,redis 将不在 socket 上监听任何客户端连接。port 9966
# TCP listen() backlog.## In high requests-per-second environments you need an high backlog in order# to avoid slow clients connections issues. Note that the Linux kernel# will silently truncate it to the value of /proc/sys/net/core/somaxconn so# make sure to raise both the value of somaxconn and tcp_max_syn_backlog# in order to get the desired effect.# TCP 监听的最大容纳数量# 此参数确定了TCP连接中已完成队列(完成三次握手之后)的长度,# 当系统并发量大并且客户端速度缓慢的时候,你需要把这个值调高以避免客户端连接缓慢的问题。# Linux 内核会一声不响的把这个值缩小成 /proc/sys/net/core/somaxconn 对应的值,默认是511,而Linux的默认参数值是128。# 所以可以将这二个参数一起参考设定,你以便达到你的预期。# tcp-backlog 511
# By default Redis listens for connections from all the network interfaces# available on the server. It is possible to listen to just one or multiple# interfaces using the "bind" configuration directive, followed by one or# more IP addresses.## Examples:## bind 192.168.1.100 10.0.0.1# 有时候为了安全起见,redis一般都是监听127.0.0.1 但是有时候又有同网段能连接的需求,当然可以绑定0.0.0.0 用iptables来控制访问权限,或者设置redis访问密码来保证数据安全
# 不设置将处理所有请求,建议生产环境中设置,有个误区:bind是用来限制外网IP访问的,其实不是,限制外网ip访问可以通过iptables;如:-A INPUT -s 10.10.1.0/24 -p tcp -m state --state NEW -m tcp --dport 9966 -j ACCEPT ;# 实际上,bind ip 绑定的是redis所在服务器网卡的ip,当然127.0.0.1也是可以的#如果绑定一个外网ip,就会报错:Creating Server TCP listening socket xxx.xxx.xxx.xxx:9966: bind: Cannot assign requested address
# bind 127.0.0.1bind 127.0.0.1 10.10.1.3
# 假设绑定是以上ip,使用 netstat -anp|grep 9966 会发现,这两个ip被bind,其中10.10.1.3是服务器网卡的ip# tcp 0 0 10.10.1.3:9966 0.0.0.0:* LISTEN 11188/redis-server # tcp 0 0 127.0.0.1:9966 0.0.0.0:* LISTEN 11188/redis-server
# Specify the path for the Unix socket that will be used to listen for# incoming connections. There is no default, so Redis will not listen# on a unix socket when not specified.## unixsocket /tmp/redis.sock# unixsocketperm 700
# Close the connection after a client is idle for N seconds (0 to disable)# 客户端和Redis服务端的连接超时时间,默认是0,表示永不超时。timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
# Here keepalive is disabled (0); set it to e.g. 60 to enable the probes.
tcp-keepalive 0
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
# Log level; the four options are debug (development/testing), verbose,
# notice (recommended for production) and warning (only important messages).
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
# Path of the log file. By default Redis logs to standard output; it can also
# be pointed at /dev/null to discard logs entirely.
logfile "/data/logs/redis/redis.log"
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# Set this to yes to also write logs to the system logger; the other syslog
# parameters below can be adjusted as needed.
# syslog-enabled no
# Specify the syslog identity.
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
# Number of logical databases, 16 by default. The default database is DB 0 and
# valid indexes range from 0 to databases-1.
databases 16
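# As an illustration (not part of the original file): each connection can switch
# databases with SELECT, and redis-cli can pick one with -n; "foo" is just a
# hypothetical key:
#
#   redis-cli -p 9966 -n 1 SET foo bar
#   redis-cli -p 9966 -n 1 GET foo
#   redis-cli -p 9966 -n 0 GET foo      (returns nil: DB 0 and DB 1 are separate)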
################################ SNAPSHOTTING  ################################

# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""
#
# A snapshot is taken if at least 1 key changed within 900 seconds, or at least
# 10 keys changed within 300 seconds, or at least 10000 keys changed within
# 60 seconds; meeting any one of the three conditions triggers a save.
# In other words:
# if (10000 keys changed within the last 60 seconds)      -> take a snapshot
# else if (10 keys changed within the last 300 seconds)   -> take a snapshot
# else if (1 key changed within the last 900 seconds)     -> take a snapshot
save 900 1
save 300 10
save 60 10000
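# For illustration only: a snapshot can also be forced at any time from
# redis-cli, and RDB saving can be disabled with an empty save directive:
#
#   redis-cli -p 9966 BGSAVE      (start a background dump right away)
#   save ""                       (in this file: remove all save points)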
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
# In short: if the last background save failed, Redis refuses writes so that
# the persistence problem cannot go unnoticed; once background saving works
# again, writes are accepted automatically. If you have reliable monitoring in
# place and prefer Redis to keep serving writes despite disk or permission
# problems, change this to no.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
# Whether to compress string objects with LZF when dumping the .rdb file.
# The default is yes; set it to no to save some CPU in the saving child, at the
# cost of a larger dump file when keys or values are compressible.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
# Whether to verify the CRC64 checksum when reading and writing RDB files;
# enabled by default.
rdbchecksum yes
# The filename where to dump the DB
# Name of the RDB dump file.
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
# Directory where the dump file is written. The path and the file name are
# configured separately because during a save Redis first writes the current
# dataset to a temporary file and only renames it to the file configured above
# once the save completes; both the temporary file and the final dump live in
# this directory. The default is ./
dir /data/data/redis/
################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition slaves automatically try to reconnect to masters
#    and resynchronize with them.
#
# Makes this instance a slave of another Redis server; when this node is a
# slave, set the master's IP and port here.
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# Password used when connecting to the master; required when this node is a
# slave and the master is protected with requirepass. A minimal slave setup is
# sketched below.
# masterauth <master-password>
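# A minimal slave-side sketch (illustrative values only: 10.10.1.2:9966 is a
# hypothetical master address, and the password matches the requirepass used
# later in this file):
#
#   slaveof 10.10.1.2 9966
#   masterauth set_enough_strong_passwd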
# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO and SLAVEOF.
#
# When the link to the master is lost or replication is still in progress, a
# value of yes lets the slave keep answering client requests (possibly with
# stale data); with no it answers every command except INFO and SLAVEOF with
# the error "SYNC with master in progress".
slave-serve-stale-data yes
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
# Whether slave nodes only serve reads (no writes from clients).
slave-read-only yes
# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#    file on disk. Later the file is transferred by the parent
#    process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#    RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# Interval at which slaves ping the master; the default is 10 seconds.
# repl-ping-slave-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# Replication timeout; it must be larger than repl-ping-slave-period.
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
# With yes, TCP_NODELAY is disabled and the TCP stack merges small packets
# before sending them, which reduces the packet count and bandwidth between
# master and slave but delays data on the slave side. With no, TCP_NODELAY
# stays enabled and small packets go out immediately, lowering replication
# latency at the cost of more bandwidth. Normally keep this at no; consider
# yes when the network between master and slaves is already heavily loaded.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# Size of the replication backlog, the buffer that holds the writes a slave
# misses while disconnected; the larger it is, the longer a slave can stay
# disconnected and still catch up with a partial resynchronization instead of
# a full one.
# repl-backlog-size 1mb
# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# Seconds to wait, after the last slave disconnects, before freeing the
# backlog; 0 means the backlog is never released.
# repl-backlog-ttl 3600
# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
# Slave priority. When more than one slave exists and the master fails, Redis
# Sentinel promotes the slave with the lowest priority value; a priority of 0
# means the slave will never be promoted automatically.
slave-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# Password clients must supply with AUTH before running other commands. Redis
# is fast enough that an attacker can try around 150k passwords per second
# against a good box, so pick a strong password to resist brute forcing.
requirepass set_enough_strong_passwd
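# Once requirepass is set, clients authenticate before issuing commands; a
# quick check from the shell might look like this (password as configured above):
#
#   redis-cli -p 9966 -a set_enough_strong_passwd PING
#   (or interactively)  AUTH set_enough_strong_passwd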
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
# Rename (or disable) dangerous commands so that they cannot be abused.
rename-command FLUSHALL ZYzv6FOBdwflW2nX
rename-command CONFIG aI7zwm1GDzMMrEi
rename-command EVAL S9UHPKEpSvUJMM
rename-command FLUSHDB D60FPVDJuip7gy6l
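# With the renaming above, the original names stop working and the new names
# must be used instead; for example (illustrative):
#
#   redis-cli -p 9966 -a set_enough_strong_passwd CONFIG GET maxmemory
#       (fails with an "unknown command" error)
#   redis-cli -p 9966 -a set_enough_strong_passwd aI7zwm1GDzMMrEi GET maxmemory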
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# Maximum number of simultaneous client connections, 10000 by default. Once the
# limit is reached, new connections are refused with the error
# 'max number of clients reached'.
# maxclients 10000
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# Maximum amount of memory Redis may use. When the limit is reached, keys are
# evicted according to maxmemory-policy (volatile-ttl, for instance, removes
# the keys whose expire time is closest). If no key can be evicted, or the
# policy is noeviction, write commands such as SET return an error while read
# commands like GET keep working. Setting maxmemory is most useful when Redis
# is used as a memcached-style cache.
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among six behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
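# A minimal cache-style sketch (illustrative values, not from the original
# file): cap memory at 2 GB and evict the least recently used key, whether or
# not it has a TTL:
#
#   maxmemory 2gb
#   maxmemory-policy allkeys-lru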
# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
# By default Redis dumps the dataset to disk asynchronously according to the
# save points above; the dump is expensive and should not run too often, so an
# event such as a power cut or a pulled plug can lose a fairly large window of
# writes. Append-only mode is the more durable alternative: with it enabled,
# every write request is appended to appendonly.aof, and on restart Redis
# replays that file to rebuild its previous state. The file can grow large, so
# Redis also provides the BGREWRITEAOF command to compact it. Append-only mode
# is disabled by default.
appendonly no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# How often appendonly.aof is synced to disk: always, everysec or no. The
# default everysec accumulates writes and fsyncs once per second; always fsyncs
# after every write; no leaves flushing entirely to the operating system.
# appendfsync always
# appendfsync everysec
# appendfsync no
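# A minimal sketch of turning AOF on with the usual compromise policy
# (illustrative; this file currently keeps appendonly off):
#
#   appendonly yes
#   appendfilename "appendonly.aof"
#   appendfsync everysec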
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
# Controls whether fsync is skipped in the main process while a background save
# or AOF rewrite is running. The default no keeps calling fsync even while a
# child is writing to disk; because RDB saves and AOF rewrites generate heavy
# disk I/O, fsync can block on some Linux systems, so yes trades a little
# durability for lower latency.
no-appendfsync-on-rewrite yes
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
# Growth percentage that triggers an automatic rewrite: with the default 100, a
# background rewrite starts once the AOF has grown by 100% compared with its
# size after the previous rewrite. A value of 0 disables automatic rewrites.
auto-aof-rewrite-percentage 100
# Minimum AOF size before an automatic rewrite is considered. If the file is
# smaller than this value, no rewrite is triggered even when the growth exceeds
# auto-aof-rewrite-percentage; both conditions must hold, as in the worked
# example below.
auto-aof-rewrite-min-size 64mb
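# Worked example with the values above: if the AOF was 64mb after the last
# rewrite, a growth of 100% means the next automatic BGREWRITEAOF fires when
# the file reaches about 128mb; a file still below the 64mb minimum never
# triggers one, however fast it grows. A rewrite can also be forced by hand:
#
#   redis-cli -p 9966 BGREWRITEAOF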
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
# Maximum execution time of a Lua script in milliseconds; 0 or a negative value
# means no limit. The default is 5000.
lua-time-limit 5000
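# For illustration, a long-running script can be interrupted from another
# connection as long as it has not performed a write yet ("mykey" is a
# hypothetical key):
#
#   redis-cli -p 9966 EVAL "return redis.call('GET', KEYS[1])" 1 mykey
#   redis-cli -p 9966 SCRIPT KILL        (only if the script has not written)
#   redis-cli -p 9966 SHUTDOWN NOSAVE    (last resort once a write was issued)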
################################ REDIS CLUSTER  ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000
# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
#    in order to try to give an advantage to the slave with the best
#    replication offset (more data from the master processed).
#    Slaves will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the slave will not try to failover
#    at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10
# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes
# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.
################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
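# The slow log is inspected from redis-cli; for example (illustrative):
#
#   redis-cli -p 9966 SLOWLOG GET 10     (the 10 most recent slow entries)
#   redis-cli -p 9966 SLOWLOG LEN        (number of entries currently stored)
#   redis-cli -p 9966 SLOWLOG RESET      (discard them and reclaim the memory)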
################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
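# A possible way to try it out at runtime (illustrative; note that CONFIG is
# renamed in the SECURITY section of this particular file):
#
#   CONFIG SET latency-monitor-threshold 100
#   LATENCY LATEST       (latest and maximum latency per sampled event)
#   LATENCY RESET        (clear the collected samples)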
############################# EVENT NOTIFICATION ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
#  notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
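# For example (illustrative), to watch expired-key events: either set
# notify-keyspace-events Ex in this file or enable it at runtime, then
# subscribe from redis-cli:
#
#   CONFIG SET notify-keyspace-events Ex
#   redis-cli -p 9966 PSUBSCRIBE '__keyevent@*__:expired'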
############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
# When a hash has no more than the configured number of entries and its largest
# entry does not exceed the size threshold, it is stored in a special compact
# encoding that greatly reduces memory usage; the two thresholds are set here.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
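# The effect can be observed with OBJECT ENCODING (illustrative; "user:1" is a
# hypothetical key, and the encoding name may differ on newer Redis versions):
#
#   redis-cli -p 9966 HSET user:1 name Alice
#   redis-cli -p 9966 OBJECT ENCODING user:1     (reports "ziplist" while small;
#   once the hash exceeds the limits above it is converted to "hashtable")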
# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
# Lists with at most this many elements, whose values are each at most this
# many bytes, use the compact pointer-free encoding.
list-max-ziplist-entries 512
list-max-ziplist-value 64
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
# A set whose members are all integers uses the compact intset encoding as long
# as it has no more than this many elements.
set-max-intset-entries 512
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
# Sorted sets with at most this many entries, whose member values are each at
# most this many bytes, use the compact pointer-free encoding.
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
# Every 100 milliseconds Redis spends 1 millisecond of CPU time actively
# rehashing its main hash table, which helps release memory sooner. If your
# workload has strict latency requirements and an occasional 2 millisecond
# delay on replies is unacceptable, set this to no; otherwise leave it at yes
# so memory is freed as quickly as possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave  -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
# Whether the AOF rewrite child syncs the file incrementally; with the default
# yes it fsyncs every 32 MB generated, which spreads disk writes out and avoids
# the large latency spikes a single huge flush would cause.
aof-rewrite-incremental-fsync yes
# Redis data storage in summary: storage is split between memory, the data file
# on disk and the log file, controlled by three settings in this configuration.
# save <seconds> <changes>: how much time and how many updates must pass before
# the dataset is synced to the data file; several conditions can be combined,
# and three are configured by default. appendonly yes/no: whether every update
# is also appended to a log file; if it is off, writes made since the last save
# exist only in memory for a while (the data file is synced only according to
# the save conditions), so a power failure can lose that window of data.
# appendfsync no/always/everysec: no lets the operating system flush the cache
# to disk, always calls fsync() after every update, and everysec syncs once per
# second.
来源:https://www.cnblogs.com/tianshug/p/10924159.html