spring-kafka

Multiple consumers using spring kafka

浪子不回头ぞ submitted on 2019-12-07 11:45:15
Question: I am looking to set up multiple listeners on a Kafka topic inside my application. Below is my setup. The topic is supposed to be consumed by both groups, but it is consumed by only one listener. What am I missing here? @Bean public Map<String, Object> consumerConfigs() { Map<String, Object> props = new HashMap<String, Object>(); props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers); props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); props.put
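A common cause is that both listeners share the same group.id from consumerConfigs(); Kafka then treats them as one consumer group and delivers each record to only one member. A minimal sketch of giving each listener its own group (topic and group names are illustrative):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class TwoGroupListeners {

    // Distinct groupId values make these independent consumer groups,
    // so every record on the topic is delivered to both listeners.
    @KafkaListener(topics = "my-topic", groupId = "group-a")
    public void listenA(String message) {
        System.out.println("group-a received: " + message);
    }

    @KafkaListener(topics = "my-topic", groupId = "group-b")
    public void listenB(String message) {
        System.out.println("group-b received: " + message);
    }
}
```

The groupId on the annotation overrides any group.id set in the shared consumer configuration.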

Spring Kafka listener using DeadLetterPublishingRecoverer and manual ack mode

╄→尐↘猪︶ㄣ submitted on 2019-12-07 11:17:22
Question: I'm having a hard time understanding how this could be solved, so I'm asking it here in the hope that someone else has already faced the same problems. We are running our @KafkaListener with manual ack mode and a dead letter recoverer with a retry limit of 3. Manual ack mode is needed because our business logic does not ack a message and pauses consuming for 5 minutes when certain circumstances arise (external dependencies). We also need the dead letter queue for messages we cannot process for
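For context, a container factory combining manual ack mode with a dead-letter recoverer typically looks roughly like this, assuming spring-kafka 2.3+ (the SeekToCurrentErrorHandler constructor taking a recoverer and a BackOff); bean wiring is illustrative:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory,
        KafkaTemplate<String, String> template) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // MANUAL ack mode: the listener must call Acknowledgment.acknowledge() itself.
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    // 1 initial attempt + 2 retries = 3 deliveries, then publish to <topic>.DLT.
    factory.setErrorHandler(new SeekToCurrentErrorHandler(
            new DeadLetterPublishingRecoverer(template), new FixedBackOff(0L, 2L)));
    return factory;
}
```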

How to set acknowledgement in case when retry gets exhausted in Kafka consumer

余生颓废 submitted on 2019-12-06 16:01:46
I have a Kafka consumer that retries 5 times, and I am using Spring Kafka with a RetryTemplate. If all retries fail, how does acknowledgment work in that case? Also, if I have set the acknowledgment mode to manual, how do I acknowledge those messages? Consumer: @Bean("kafkaListenerContainerFactory") public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(RetryTemplate retryTemplate) { ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>(); factory.setConsumerFactory(consumerFactory()); factory
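With the RetryTemplate approach, a recovery callback runs after the last failed attempt; the record and the Acknowledgment are exposed as retry-context attributes, so the callback can log the failure and commit the offset. A hedged sketch (the attribute keys "record" and "acknowledgment" are the ones spring-kafka's retrying listener adapter sets):

```java
factory.setRetryTemplate(retryTemplate);
factory.setRecoveryCallback(context -> {
    // Invoked once all retry attempts are exhausted.
    ConsumerRecord<?, ?> record =
            (ConsumerRecord<?, ?>) context.getAttribute("record");
    Acknowledgment ack = (Acknowledgment) context.getAttribute("acknowledgment");
    System.err.println("Retries exhausted for: " + record);
    if (ack != null) {
        ack.acknowledge(); // commit the offset so the consumer can move past it
    }
    return null;
});
```

Without acknowledging (or otherwise handling) the failed record in the recovery callback, the offset is never committed in manual ack mode and the record would be redelivered after a rebalance or restart.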

DefaultKafkaProducerFactory with transactionIdPrefix endless waits when bootstrap server is down

寵の児 submitted on 2019-12-06 13:15:51
Hi, I'm using spring-kafka 1.3.0.RELEASE to create a transactional producer. When the bootstrap server is down, the DefaultKafkaProducerFactory waits endlessly until the bootstrap server comes back up. What am I doing wrong? Can I set a timeout and/or other similar properties? This is an example of my code to reproduce the scenario: public static void main(String[] args) { final DefaultKafkaProducerFactory<Object, Object> producerFactory = new DefaultKafkaProducerFactory<>(producerConfigs()); producerFactory.setTransactionIdPrefix("transactionIdPrefix"); final Producer<Object, Object> producer =
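With the kafka-clients version used by spring-kafka 1.3.x, initTransactions() blocked indefinitely when brokers were unreachable; newer kafka-clients (2.x+, see KAFKA-6446) honor max.block.ms there and throw a TimeoutException instead. If an upgrade is possible, bounding the blocking time looks roughly like this (the 5-second value is illustrative):

```java
private static Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    // Bound how long send()/initTransactions() may block when brokers are
    // unreachable; newer clients apply this to initTransactions() as well.
    props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);
    return props;
}
```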

how to get KafkaListener Consumer

谁说胖子不能爱 submitted on 2019-12-06 09:59:42
I use spring-kafka with Spring Boot 2.0.4.RELEASE and a @KafkaListener to receive messages. Now I want to reset the offset for my group, but I do not know how to get the consumer for the group. @KafkaListener(id="test",topics={"test"},groupId="group",containerFactory="batchContainerFactory") public String listenTopic33(List<ConsumerRecord<Integer, String>> record, Acknowledgment ack){ // do something } @Autowired KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry; public void test() { MessageListenerContainer test3 = kafkaListenerEndpointRegistry.getListenerContainer("test"); } If you want to
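The registry exposes only the MessageListenerContainer, not the underlying Consumer; the usual route for resetting offsets is to implement ConsumerSeekAware in the listener class. A sketch assuming the spring-kafka 2.1.x interface (all three methods must be implemented in that version; the seek-to-zero behavior is illustrative):

```java
import java.util.Map;
import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.ConsumerSeekAware;
import org.springframework.stereotype.Component;

@Component
public class ResettingListener implements ConsumerSeekAware {

    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback) {
        // Store the callback if you need to seek from the listener thread later.
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
                                     ConsumerSeekCallback callback) {
        // Rewind every assigned partition to the beginning on assignment.
        assignments.forEach((tp, offset) -> callback.seek(tp.topic(), tp.partition(), 0));
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> assignments,
                                ConsumerSeekCallback callback) {
    }

    @KafkaListener(id = "test", topics = "test", groupId = "group")
    public void listen(String message) {
        System.out.println(message);
    }
}
```

Note that getListenerContainer() takes the listener's id attribute, so the id passed to the registry must match the one on the annotation.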

Can a single Spring's KafkaConsumer listener listens to multiple topic?

会有一股神秘感。 submitted on 2019-12-06 03:36:19
Question: Does anyone know if a single listener can listen to multiple topics, like below? I know just "topic1" works; what if I want to add additional topics? Can you please show an example for both cases below? Thanks for the help! @KafkaListener(topics = "topic1,topic2") public void listen(ConsumerRecord<?, ?> record, Acknowledgment ack) { System.out.println(record); } or ContainerProperties containerProps = new ContainerProperties(new TopicPartitionInitialOffset("topic1, topic2", 0)); Answer 1: Yes, just follow the
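The topics attribute accepts an array, whereas a comma inside a single string is treated as part of one topic name. A minimal sketch of both forms (note that TopicPartitionInitialOffset names one topic per instance):

```java
// Annotation form: topics is an array, so list each topic separately.
@KafkaListener(topics = {"topic1", "topic2"})
public void listen(ConsumerRecord<?, ?> record, Acknowledgment ack) {
    System.out.println(record);
    ack.acknowledge();
}

// Explicit-assignment form: one TopicPartitionInitialOffset per topic/partition.
ContainerProperties containerProps = new ContainerProperties(
        new TopicPartitionInitialOffset("topic1", 0),
        new TopicPartitionInitialOffset("topic2", 0));
```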

How do I configure spring-kafka to ignore messages in the wrong format?

南楼画角 submitted on 2019-12-06 00:24:08
We have an issue with one of our Kafka topics, which is consumed by the DefaultKafkaConsumerFactory & ConcurrentMessageListenerContainer combination described here, with a JsonDeserializer used by the factory. Unfortunately, someone got a little enthusiastic and published some invalid messages onto the topic. It appears that spring-kafka silently fails to process past the first of these messages. Is it possible to have spring-kafka log an error and continue? Looking at the error messages that are logged, it seems that perhaps the Apache kafka-clients library should deal with the case that when
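Later spring-kafka versions (2.2+) address exactly this with ErrorHandlingDeserializer2, which catches deserialization failures and hands them to the container's error handler instead of looping on the same bad record. A hedged configuration sketch:

```java
Map<String, Object> props = new HashMap<>();
// The error-handling wrapper is configured as the actual deserializer...
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
// ...and delegates to the real deserializer for well-formed records.
props.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class);
```

Paired with a SeekToCurrentErrorHandler on the container factory, a poison record is logged and skipped (or dead-lettered) rather than blocking the partition.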

Spring Kafka The class is not in the trusted packages

六眼飞鱼酱① submitted on 2019-12-05 23:31:11
Question: In my Spring Boot/Kafka application, before the library update I used the class org.telegram.telegrambots.api.objects.Update in order to post messages to the Kafka topic. Right now I use org.telegram.telegrambots.meta.api.objects.Update . As you may see, they have different packages. After an application restart I ran into the following issue: [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] o.s.kafka.listener.LoggingErrorHandler : Error while
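That error is the JsonDeserializer refusing to instantiate a class outside its trusted packages; after the library's package rename, the new package must be added to the trust list. One way, assuming the deserializer is configured via consumer properties:

```java
// Trust the renamed package ("*" trusts everything, at some security cost).
props.put(JsonDeserializer.TRUSTED_PACKAGES, "org.telegram.telegrambots.meta.api.objects");
```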

How to handle SerializationException after deserialization

独自空忆成欢 submitted on 2019-12-05 21:09:20
Question: I am using Avro and Schema Registry with my Spring Kafka setup. I would like to somehow handle the SerializationException that might be thrown during deserialization. I found the following two resources: https://github.com/spring-projects/spring-kafka/issues/164 How do I configure spring-kafka to ignore messages in the wrong format? These resources suggest that I return null instead of throwing a SerializationException when deserializing, and listen for KafkaNull . This solution works just
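An alternative to returning null is ErrorHandlingDeserializer2's failed-deserialization function (spring-kafka 2.2+), which substitutes a marker object the listener can recognize instead of KafkaNull. A sketch; BadMessage and avroDeserializer are hypothetical placeholders:

```java
ErrorHandlingDeserializer2<Object> valueDeserializer =
        new ErrorHandlingDeserializer2<>(avroDeserializer);
valueDeserializer.setFailedDeserializationFunction(info -> {
    // Called instead of propagating the exception; info carries the topic,
    // the raw bytes, and the cause.
    System.err.println("Bad record on " + info.getTopic() + ": " + info.getException());
    return new BadMessage(info.getData()); // marker type the listener checks for
});
```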
