Kafka work queue with a dynamic number of parallel consumers

Submitted by 强颜欢笑 on 2019-12-22 17:46:42

Question


I want to use Kafka to "divide the work". I want to publish instances of work to a topic, and run a cloud of identical consumers to process them. As each consumer finishes its work, it will pluck the next work from the topic. Each work should only be processed once by one consumer. Processing work is expensive, so I will need many consumers running on many machines to keep up. I want the number of consumers to grow and shrink as needed (I plan to use Kubernetes for this).

I found a pattern where a unique partition is created for each consumer. This "divides the work", but the number of partitions is set when the topic is created. Furthermore, the topic must be created on the command line e.g.

bin/kafka-topics.sh --zookeeper localhost:2181 --partitions 3 --topic divide-topic --create --replication-factor 1

...

from kafka import KafkaConsumer, TopicPartition

for n in range(0, 3):
    consumer = KafkaConsumer(bootstrap_servers=['localhost:9092'])
    # Manually assign exactly one partition to this consumer
    partition = TopicPartition('divide-topic', n)
    consumer.assign([partition])
    ...

I could create a unique topic for each consumer, and write my own code to assign work to those topics. That seems gross, and I would still have to create topics via the command line.

A work queue with a dynamic number of parallel consumers is a common architecture. I can't be the first to need this. What is the right way to do it with Kafka?


Answer 1:


The pattern you found is accurate. Note that topics can also be created using the Kafka Admin API and partitions can also be added once a topic has been created (with some gotchas).
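For example, a topic can be created (and later expanded) programmatically with the kafka-python admin client. This is a minimal sketch to illustrate the Admin API route; the topic name and partition counts mirror the examples in this thread:

from kafka.admin import KafkaAdminClient, NewTopic, NewPartitions

admin = KafkaAdminClient(bootstrap_servers=['localhost:9092'])

# Create the topic with 50 partitions instead of using kafka-topics.sh
admin.create_topics([NewTopic(name='divide-topic',
                              num_partitions=50,
                              replication_factor=1)])

# Later, grow the partition count (one of the gotchas: partitions can be
# added but never removed, and key-to-partition mappings change)
admin.create_partitions({'divide-topic': NewPartitions(total_count=100)})

admin.close()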

In Kafka, the way to divide work and allow scaling is to use partitions. This is because in a consumer group, each partition is consumed by a single consumer at any time.

For example, you can have a topic with 50 partitions and a consumer group subscribed to this topic:

  • When the throughput is low, you can have only a few consumers in the group and they should be able to handle the traffic.

  • When the throughput increases, you can add consumers, up to the number of partitions (50 in this example), to pick up some of the work.

In this scenario, 50 consumers is the limit in terms of scaling. Consumers expose a number of metrics (like lag) that allow you to decide whether you have enough of them at any time.
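For instance, lag can be estimated with kafka-python by comparing each partition's latest offset against the group's committed offset. This is a rough sketch, not production monitoring code (the kafka-consumer-groups.sh CLI tool reports the same numbers); the topic and group names here are assumptions for illustration:

from kafka import KafkaConsumer, TopicPartition

# Connect with the same group id so its committed offsets are visible
consumer = KafkaConsumer(group_id='some-constant-group',
                         bootstrap_servers=['localhost:9092'])

partitions = [TopicPartition('divide-topic', p)
              for p in consumer.partitions_for_topic('divide-topic')]
end_offsets = consumer.end_offsets(partitions)

# Lag = latest offset minus last committed offset, summed over partitions
total_lag = sum(end_offsets[tp] - (consumer.committed(tp) or 0)
                for tp in partitions)
print('total lag:', total_lag)
consumer.close()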




Answer 2:


Thank you Mickael for pointing me in the correct direction.

https://www.safaribooksonline.com/library/view/kafka-the-definitive/9781491936153/ch04.html

Kafka consumers are typically part of a consumer group. When multiple
consumers are subscribed to a topic and belong to the same consumer group,
each consumer in the group will receive messages from a different subset of
the partitions in the topic.

https://dzone.com/articles/dont-use-apache-kafka-consumer-groups-the-wrong-wa

Having consumers as part of the same consumer group means providing the
“competing consumers” pattern with whom the messages from topic partitions
are spread across the members of the group. Each consumer receives messages 
from one or more partitions (“automatically” assigned to it) and the same
messages won’t be received by the other consumers (assigned to different 
partitions). In this way, we can scale the number of the consumers up to the
number of the partitions (having one consumer reading only one partition); in
this case, a new consumer joining the group will be in an idle state without 
being assigned to any partition.

Example code for dividing the work among 3 consumers, scalable up to a maximum of 100 (one consumer per partition):

bin/kafka-topics.sh --partitions 100 --topic divide-topic --create --replication-factor 1 --zookeeper localhost:2181

...

from kafka import KafkaConsumer

for n in range(0, 3):
    # Sharing a group id lets Kafka assign partitions automatically
    consumer = KafkaConsumer('divide-topic',
                             group_id='some-constant-group',
                             bootstrap_servers=['localhost:9092'])
    ...



Answer 3:


I think you are on the right path.

Here are the steps involved:

  1. Create a Kafka topic with the required number of partitions. The number of partitions is the unit of parallelism; in other words, you can run up to that many consumers to process the work concurrently.
  2. You can increase the number of partitions if the scaling requirements grow, BUT this comes with caveats, such as changed key-to-partition mappings. Please read the Kafka documentation about adding partitions.
  3. Define a consumer group for your consumers. Kafka assigns partitions to the available consumers in the group and rebalances automatically whenever a consumer is added or removed.
  4. If the consumers are packaged as Docker containers, Kubernetes helps manage them, especially in a multi-node environment. Other orchestration tools include Docker Swarm, OpenShift, Mesos, etc.
  5. Kafka guarantees message ordering within each partition, not across partitions.
  6. Check out the delivery guarantees (at-least-once, exactly-once) based on your use case; see the sketch after this list for an at-least-once example.
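For step 6, here is a minimal at-least-once sketch with kafka-python: auto-commit is disabled and offsets are committed only after the work completes, so a consumer that crashes mid-task causes redelivery rather than data loss (process_work is a hypothetical placeholder for the expensive processing):

from kafka import KafkaConsumer

consumer = KafkaConsumer('divide-topic',
                         group_id='some-constant-group',
                         bootstrap_servers=['localhost:9092'],
                         enable_auto_commit=False)

for message in consumer:
    process_work(message.value)  # hypothetical: your expensive work
    consumer.commit()            # commit only after successful processing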

Alternatively, you can use the Kafka Streams API. Kafka Streams is a client library for processing and analyzing data stored in Kafka. It builds upon important stream-processing concepts such as properly distinguishing between event time and processing time, windowing support, and simple yet efficient management and real-time querying of application state.



Source: https://stackoverflow.com/questions/51372210/kafka-work-queue-with-a-dynamic-number-of-parallel-consumers
