Redis as a message broker


Question

I want to pass data between applications in a publish-subscribe manner. Data may be produced at a much higher rate than it is consumed, and messages may get lost, which is not a problem. Imagine a fast sensor and a slow sensor-data processor. To handle this, I use Redis pub/sub and wrote a class that acts as a subscriber, receives every message, and puts it into a buffer. The buffer is overwritten when a new message comes in and nullified when the message is requested by the "real" function. So when I ask this class, I either get a response immediately (a hint that my function is slower than the incoming data) or I have to wait (a hint that my function is faster than the data).
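
In code, that buffering subscriber might look roughly like this (a minimal Perl sketch using the Redis CPAN client; the channel name and the stand-in processing step are made up):

use Redis;

my $buffer;    # holds the most recent message; undef means "consumed"
my $sub = Redis->new(server => "127.0.0.1:6379");

# Every incoming message simply overwrites the buffer.
$sub->subscribe("sensor:data", sub {
    my ($message, $channel) = @_;
    $buffer = $message;
});

while (1) {
    $sub->wait_for_messages(0.1);        # pump pending messages briefly
    if (defined $buffer) {
        print "processing: $buffer\n";   # stand-in for the "real" function
        $buffer = undef;                 # nullify after consumption
    }
}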

This works pretty well when data comes in fast. But for data that arrives relatively seldom, say every five seconds, it does not: if my consumer is launched slightly after the producer, the first message is lost and my consumer has to wait nearly five seconds before it can start working.

I think I have to solve this with Redis tools. Instead of a pub/sub, I could simply use the get/set methods, thus putting the cache functionality into Redis directly. But then, my consumer would have to poll the database instead of the event magic I have at the moment. Keys could look like "key:timestamp", and my consumer now has to get key:* and compare the timestamps permamently, which I think would cause a lot of load. There is no natural possibility to sleep, since although I don't care about dropped messages (there is nothing I can do about), I do care about delay.
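
For illustration, the polling variant just described might look roughly like this (a sketch of the approach I would rather avoid; the key layout is hypothetical):

use Redis;

my $r = Redis->new(server => "127.0.0.1:6379");
my $last_seen = 0;

while (1) {
    # KEYS walks the whole keyspace on every call -- this is the load problem.
    for my $key ($r->keys("key:*")) {
        my ($ts) = $key =~ /^key:(\d+)$/;
        next unless defined $ts && $ts > $last_seen;
        $last_seen = $ts;
        my $value = $r->get($key);
        # ... process $value ...
    }
    # No natural place to sleep here without adding delay.
}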

Does anyone use Redis for something similar and could give me a hint about the clever use of Redis tools and data structures?

Edit

Ideally, my program flow would look like this:

  • start the program
  • retrieve the key from Redis
  • tell Redis, "hey, notify me about changes to key"
  • launch something asynchronously, with a callback for new messages
While writing this, an idea came up: the publisher not only publishes the message on the topic key, but also does a SET key message. This way, an application could initially GET the value and then subscribe.

Good idea or not really?
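
A minimal sketch of that idea with the Perl Redis client (all key and channel names are made up):

use Redis;

# Publisher side: keep the latest value readable AND push it to subscribers.
my $pub   = Redis->new(server => "127.0.0.1:6379");
my $value = 42;    # whatever the sensor produced
$pub->set("sensor:latest", $value);          # slow joiners can GET this
$pub->publish("sensor:updates", $value);     # fast path for live consumers

# Consumer side: read the current value first, then subscribe for changes.
my $r   = Redis->new(server => "127.0.0.1:6379");
my $sub = Redis->new(server => "127.0.0.1:6379");   # pub/sub needs its own connection

my $current = $r->get("sensor:latest");      # nothing is lost at startup
$sub->subscribe("sensor:updates", sub { $current = $_[0] });
$sub->wait_for_messages(1) while 1;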

What I did after I got the answer below (the accepted one)

Keyspace notifications are really what I need here. Redis acts as the primary source of information, and my client subscribes to keyspace notifications, which alert subscribers to events affecting specific keys. In the asynchronous part of my client, I subscribe to notifications about my key of interest. Those notifications set a key_has_updates flag. When I need the value, I GET it from Redis and unset the flag. With the flag unset, I know there is no new value for that key on the server. Without keyspace notifications, this is where I would have needed to poll the server. The advantage is that I can use all sorts of data structures, not only the pub/sub mechanism, and a slow joiner that misses the first event can still get the initial value, which with pub/sub would have been lost.

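A sketch of that flow with the Perl Redis client (keyspace notifications must be enabled on the server first; the key name and flag handling are illustrative):

use Redis;

# Requires notifications to be switched on, e.g.:
#   redis-cli config set notify-keyspace-events "KEA"

my $r   = Redis->new(server => "127.0.0.1:6379");   # plain connection for GET
my $sub = Redis->new(server => "127.0.0.1:6379");   # dedicated pub/sub connection

my $key_has_updates = 0;

# Events for a key arrive on the channel __keyspace@<db>__:<key>.
$sub->psubscribe('__keyspace@0__:sensor:latest', sub {
    $key_has_updates = 1;    # e.g. fired for "set" on that key
});

# A slow joiner can still read the initial value -- no event required.
my $value = $r->get("sensor:latest");

while (1) {
    $sub->wait_for_messages(0.1);
    if ($key_has_updates) {
        $value = $r->get("sensor:latest");   # fetch the new value
        $key_has_updates = 0;                # unset: nothing newer on the server
    }
    # ... use $value ...
}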


Answer 1:


One idea is to push the data to a list (LPUSH) and trim it (LTRIM) so that it doesn't grow forever if there are no consumers. On the other end, the consumer grabs items from that list and processes them. You can also use keyspace notifications to be alerted each time an item is added to that queue.
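
A minimal sketch of that capped-list idea in Perl (the list name and cap are arbitrary choices):

use Redis;
use JSON;

my $r = Redis->new(server => "127.0.0.1:6379");

# Producer: push a reading, then cap the list so it cannot grow unbounded.
$r->lpush("sensor:readings", encode_json({ temp => 23.5, ts => time }));
$r->ltrim("sensor:readings", 0, 99);    # keep only the 100 newest items

# Consumer: pop items off the other end until the list is empty.
while (my $json = $r->rpop("sensor:readings")) {
    my $reading = decode_json($json);
    # ... process $reading ...
}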




Answer 2:


I pass data between applications using two native Redis commands: RPUSH and BLPOP. "BLPOP blocks the connection when there are no elements to pop from any of the given lists."

  • Data is passed in JSON format between applications, using a list as a queue.
  • The application that wants to send data (acting as publisher) does an RPUSH on a list.
  • The application that wants to receive data (acting as subscriber) does a BLPOP on the same list.

The code would look like this (in Perl):


Sender (we assume a hash is passed by reference)

use Redis;
use JSON;

# Encode the hash (referenced by $hash_ref) in JSON format
my $json_text = encode_json($hash_ref);

# Connect to Redis and push onto the shared list
my $r = Redis->new(server => "127.0.0.1:6379");
$r->rpush("shared_queue", $json_text);
$r->quit;

Receiver (in an infinite loop)

use Redis;
use JSON;

# Connect once, outside the loop
my $r = Redis->new(server => "127.0.0.1:6379");

while (1) {
    # blpop returns (list_name, element); timeout 0 blocks indefinitely
    my @elem = $r->blpop("shared_queue", 0);

    # Decode the hash element
    my $hash_ref = decode_json($elem[1]);

    # ... do some work with $hash_ref ...
}

I find this approach very useful for several reasons:

  • Elements are stored in a list, so temporarily disabling the receiver causes no information loss. When the receiver restarts, it can process all the items in the list.
  • A high sender rate can be handled with multiple instances of the receiver.
  • Multiple senders can push data onto a single list. In that case, a data collector can easily be implemented.
  • Receiver processes that act as daemons can be monitored with dedicated tools (e.g. pm2).



Answer 3:


Since Redis 5, there is a new data type called "Streams", which is an append-only data structure. Redis Streams can be used as a reliable message queue, supporting both point-to-point and multicast communication via the consumer-group concept: Redis_Streams_MQ
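
A rough sketch with the Perl Redis client, which passes Stream commands through generically (the stream name and the reply unpacking are assumptions, not a tested recipe):

use Redis;

my $r = Redis->new(server => "127.0.0.1:6379");

# Producer: append an entry; '*' asks Redis to assign the ID.
$r->xadd("sensor:stream", "*", temp => 23.5);

# Consumer: block until entries newer than $last_id arrive.
my $last_id = '$';    # '$' = only entries added after we start reading
while (1) {
    my @reply = $r->xread("BLOCK", 5000, "STREAMS", "sensor:stream", $last_id);
    next unless @reply;    # timed out with nothing new
    # Each reply element nests as: [ stream_name, [ [ id, [ field, value, ... ] ], ... ] ]
    for my $entry (@{ $reply[0][1] }) {
        my ($id, $fields) = @$entry;
        $last_id = $id;    # resume from the last ID we saw
        # ... process @$fields ...
    }
}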



Source: https://stackoverflow.com/questions/25133260/redis-as-a-message-broker
