How do you process messages in parallel while ensuring FIFO per entity?

Asked 2020-12-11 05:11 by 悲哀的现实 · 3 answers · 1822 views

Let's say you have an entity, say, "Person", in your system, and you want to process events that modify various Person entities. It is important that:

  • Events for the same Person are processed in FIFO order (the order in which they arrive), and
  • Events for different Persons can be processed in parallel.
3 Answers
  • 2020-12-11 05:49

    One general way to solve this problem (if I understood it correctly) is to introduce some unique property on Person (say, its database-level id) and use a hash of that property as the index of the FIFO queue into which that Person's events are put.
    Since the hash can be unwieldy large (you can't afford 2^32 queues/threads), use only the N least significant bits of it. Each FIFO queue should have a dedicated worker that processes it -- voilà, your requirements are satisfied!

    This approach has one drawback: your Persons must have well-distributed ids so that all queues carry a more-or-less equal load. If you can't guarantee that, consider using a round-robin set of queues and tracking which Persons are currently being processed, to ensure sequential processing for the same Person.
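A minimal Python sketch of the partitioning step described above (the names `NUM_QUEUES` and `queue_index` are illustrative, not from the original answer). A stable hash such as SHA-256 is used rather than Python's built-in `hash()`, since the mapping must be consistent across processes and restarts:

```python
import hashlib

NUM_QUEUES = 8  # must be a power of two for the bit-mask trick below

def queue_index(person_id: str) -> int:
    """Map a Person id to a queue index using the N least
    significant bits of a stable hash of that id."""
    digest = hashlib.sha256(person_id.encode("utf-8")).digest()
    h = int.from_bytes(digest[:8], "big")
    # Keep only the low bits: equivalent to h % NUM_QUEUES
    # when NUM_QUEUES is a power of two.
    return h & (NUM_QUEUES - 1)

# Events for the same Person always land in the same queue,
# so the dedicated worker on that queue sees them in arrival order.
```

Because the mapping is deterministic, FIFO per Person follows directly: all of a Person's events go through one queue, drained by one worker.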

  • 2020-12-11 05:59

    If you already have a system that allows shared locks, why not have a lock for every queue, which consumers must acquire before they read from the queue?
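A small sketch of this lock-per-queue idea, assuming in-process queues and locks (all names here are illustrative): a consumer must hold the queue's lock while reading, so at most one consumer drains a given queue at a time and per-queue order is preserved.

```python
import queue
import threading

NUM_QUEUES = 4
queues = [queue.Queue() for _ in range(NUM_QUEUES)]
locks = [threading.Lock() for _ in range(NUM_QUEUES)]
results = []  # (queue index, event) pairs, in processing order

def consume(i: int) -> None:
    # Acquire the queue's lock before reading; other consumers
    # attempting the same queue block until it is released.
    with locks[i]:
        while True:
            try:
                event = queues[i].get_nowait()
            except queue.Empty:
                break
            results.append((i, event))
```

In a real system the lock would live in the shared locking facility the answer mentions, not in process memory, but the shape is the same.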

  • 2020-12-11 06:06

    It looks like JMSXGroupID is what I'm looking for. From the ActiveMQ docs:

    http://activemq.apache.org/message-groups.html

    Their example use case with stock prices is exactly what I'm after. My only concern is what happens if the single consumer dies. Hopefully the broker will detect that and pick another consumer to associate with that group id.
