Spring-Kafka Concurrency Property

忘掉有多难 2021-01-02 12:23

I am writing my first Kafka consumer using Spring-Kafka. I have had a look at the different options provided by the framework and have a few doubts about them. Can so…

2 Answers
  •  死守一世寂寞
    2021-01-02 13:04

    Q1:

    From the documentation,

    The @KafkaListener annotation is used to designate a bean method as a listener for a listener container. The bean is wrapped in a MessagingMessageListenerAdapter configured with various features, such as converters to convert the data, if necessary, to match the method parameters.

    You can configure most attributes on the annotation with SpEL by using "#{…}" or property placeholders ("${…}"). See the Javadoc for more information.

    This approach is useful for simple POJO listeners because you do not need to implement any interface. It also lets you listen to any topics and partitions declaratively through the annotation. You can even return the value you received, whereas with a MessageListener you are bound to the signature of the interface; see the sketch below.
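    To make the contrast concrete, here is a minimal sketch of the two styles. The topic name "orders", the class and bean names, and the String payloads are assumptions for illustration, not part of the question.

    ```java
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.listener.MessageListener;
    import org.springframework.stereotype.Component;

    // Annotation-driven POJO listener: no interface to implement, the topic is
    // declared on the annotation, and the payload is converted to match the
    // method parameter by the MessagingMessageListenerAdapter wrapper.
    @Component
    class AnnotatedOrderListener {

        @KafkaListener(id = "orderListener", topics = "orders")
        public void onOrder(String order) {
            System.out.println("received: " + order);
        }
    }

    // Interface-based listener: the method signature is fixed by the
    // MessageListener contract (one ConsumerRecord per call, void return).
    class RawOrderListener implements MessageListener<String, String> {

        @Override
        public void onMessage(ConsumerRecord<String, String> record) {
            System.out.println("received: " + record.value());
        }
    }
    ```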

    Q2:

    Ideally, yes: set the concurrency to the number of partitions of the topic. It gets more complicated if you consume from multiple topics, though. Kafka uses the RangeAssignor by default, which works on a per-topic basis and has its own behaviour (you can change this; see the sketch below).
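    A minimal configuration sketch, assuming a local broker, a group id of "order-group", a topic with 3 partitions, and a switch from the default RangeAssignor to the CooperativeStickyAssignor purely as an example of overriding the assignment strategy:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

    @Configuration
    public class KafkaConsumerConfig {

        @Bean
        public ConsumerFactory<String, String> consumerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            // Override the default RangeAssignor if its per-topic range behaviour
            // is not what you want when consuming from several topics.
            props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                    CooperativeStickyAssignor.class.getName());
            return new DefaultKafkaConsumerFactory<>(props);
        }

        @Bean
        public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<String, String> factory =
                    new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory());
            // One consumer thread per partition of the subscribed topic
            // (3 partitions assumed here).
            factory.setConcurrency(3);
            return factory;
        }
    }
    ```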

    Q3:

    If your consumer dies, a rebalance takes place and its partitions are reassigned to the remaining consumers in the group. If you acknowledge manually and the consumer dies before committing offsets, you do not need to do anything; Kafka handles it. But the reassigned consumer may re-process some messages, so you can end up with duplicates (at-least-once semantics).
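    A sketch of manual acknowledgement with at-least-once behaviour, assuming a String consumer factory bean is already defined and reusing the hypothetical "orders" topic:

    ```java
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.listener.ContainerProperties.AckMode;
    import org.springframework.kafka.support.Acknowledgment;
    import org.springframework.stereotype.Component;

    @Configuration
    class ManualAckConfig {

        @Bean
        public ConcurrentKafkaListenerContainerFactory<String, String> manualAckFactory(
                ConsumerFactory<String, String> consumerFactory) {
            ConcurrentKafkaListenerContainerFactory<String, String> factory =
                    new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory);
            // Offsets are committed only when the listener calls acknowledge().
            factory.getContainerProperties().setAckMode(AckMode.MANUAL);
            return factory;
        }
    }

    @Component
    class ManualAckListener {

        @KafkaListener(topics = "orders", containerFactory = "manualAckFactory")
        public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
            process(record.value());
            // If the consumer dies before this line, the record is redelivered
            // to another consumer after the rebalance (at-least-once).
            ack.acknowledge();
        }

        private void process(String value) {
            // business logic goes here
        }
    }
    ```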

    Q4:

    It depends on what you mean by "performance". If you mean latency, then consuming each record as soon as it arrives is the way to go. If you want high throughput, then batch consumption is more efficient.
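    A sketch of a batch listener, again assuming an existing String consumer factory and the same illustrative "orders" topic:

    ```java
    import java.util.List;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.stereotype.Component;

    @Configuration
    class BatchConfig {

        @Bean
        public ConcurrentKafkaListenerContainerFactory<String, String> batchFactory(
                ConsumerFactory<String, String> consumerFactory) {
            ConcurrentKafkaListenerContainerFactory<String, String> factory =
                    new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory);
            // Deliver whole polls to the listener instead of one record at a time.
            factory.setBatchListener(true);
            return factory;
        }
    }

    @Component
    class BatchOrderListener {

        // The payload is the entire batch of records returned by a single poll().
        @KafkaListener(topics = "orders", containerFactory = "batchFactory")
        public void listen(List<String> orders) {
            // e.g. one bulk insert per batch instead of one write per record
            System.out.println("batch size: " + orders.size());
        }
    }
    ```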

    I have written some samples using Spring Kafka and various listeners; check out this repo.
