spring-kafka

How to use Spring Kafka's Acknowledgment.acknowledge() method for manual commit

瘦欲 · submitted on 2019-12-05 03:24:19
Question: I am using Spring Kafka for the first time and I am not able to use the Acknowledgment.acknowledge() method for manual commit in my consumer code, as described here: https://docs.spring.io/spring-kafka/reference/html/_reference.html#committing-offsets. Mine is a Spring Boot application. If I do not use the manual commit process, my code works fine. But when I use Acknowledgment.acknowledge() for manual commit, it shows an error related to a bean. Also, if I am not using manual commit properly, please…
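A bean-related startup error here usually means the listener declares an Acknowledgment parameter while the container is still in an automatic ack mode. A minimal sketch of the required wiring, assuming spring-kafka 2.3+ (the AckMode enum moved between versions) and with placeholder names (KafkaConfig, MyListener, "my-topic"):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Configuration
class KafkaConfig {

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Without MANUAL (or MANUAL_IMMEDIATE) ack mode, injecting an Acknowledgment
        // parameter into the listener fails at container startup.
        factory.getContainerProperties()
                .setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        return factory;
    }
}

@Component
class MyListener {

    @KafkaListener(topics = "my-topic")
    public void listen(String message, Acknowledgment ack) {
        // process the record, then commit its offset explicitly
        ack.acknowledge();
    }
}
```

In a Spring Boot application the same effect can be had without a custom factory via the property `spring.kafka.listener.ack-mode=manual_immediate`.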

Spring Kafka Template implementation example for seek offset, acknowledgement

那年仲夏 · submitted on 2019-12-04 21:17:31
I am new to spring-kafka-template. I tried some basic things with it and they work fine, but now I am trying to implement some concepts mentioned in the Spring docs, such as offset seeking and acknowledging listeners. I tried to find examples of these on the net but was unsuccessful; the only thing I found is the source code. We have the same issue as mentioned in this post: Spring kafka consumer, seek offset at runtime. But there is no example available of how to implement it. Can someone give an example of how to implement them? Thanks in advance. You should use ConsumerSeekAware for that purpose to…

How to write Unit test for @KafkaListener?

安稳与你 · submitted on 2019-12-04 19:37:38
Trying to figure out if I can write a unit test for @KafkaListener using spring-kafka and spring-kafka-test. My listener class: public class MyKafkaListener { @Autowired private MyMessageProcessor myMessageProcessor; @KafkaListener(topics = "${kafka.topic.01}", groupId = "SF.CLIENT", clientIdPrefix = "SF.01", containerFactory = "myMessageListenerContainerFactory") public void myMessageListener(MyMessage message) { myMessageProcessor.process(message); log.info("MyMessage processed"); } } My test class: @RunWith(SpringRunner.class) @DirtiesContext @EmbeddedKafka(partitions = 1, topics = {"I1.Topic…
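One possible shape for such a test, assuming JUnit 4, Mockito, and an embedded broker from spring-kafka-test; the topic name, the KafkaTemplate bean, and the use of @MockBean to stub the processor are assumptions, not the question author's setup:

```java
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
@DirtiesContext
@EmbeddedKafka(partitions = 1, topics = "test-topic")
public class MyKafkaListenerTest {

    @Autowired
    private KafkaTemplate<String, MyMessage> kafkaTemplate;

    // Mock the collaborator so the test verifies only the listener wiring.
    @MockBean
    private MyMessageProcessor myMessageProcessor;

    @Test
    public void listenerDelegatesToProcessor() {
        kafkaTemplate.send("test-topic", new MyMessage());
        // The container consumes asynchronously, so verify with a timeout.
        verify(myMessageProcessor, timeout(5000)).process(any(MyMessage.class));
    }
}
```

The bootstrap servers of the embedded broker are exposed via the `spring.embedded.kafka.brokers` property, which the test's consumer/producer configuration must point at.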

Spring Kafka Producer not sending to Kafka 1.0.0 (Magic v1 does not support record headers)

时光总嘲笑我的痴心妄想 · submitted on 2019-12-04 18:01:39
Question: I am using this docker-compose setup for setting up Kafka locally: https://github.com/wurstmeister/kafka-docker/. docker-compose up works fine, and creating topics via the shell works fine. Now I try to connect to Kafka via spring-kafka:2.1.0.RELEASE. When starting up, the Spring application prints the correct version of Kafka: o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.0 / o.a.kafka.common.utils.AppInfoParser : Kafka commitId : aaa7af6d4a11b29d. I try to send a message like this…
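"Magic v1 does not support record headers" means the broker is still speaking the pre-0.11 message format, which cannot carry the headers that spring-kafka's JsonSerializer attaches by default. The clean fix is to run (or upgrade to) a broker with the 0.11+ message format; if that is not possible, a commonly used workaround is to stop the serializer from adding its type-info headers. A sketch, assuming spring-kafka's JsonSerializer and placeholder bootstrap servers:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.support.serializer.JsonSerializer;

class ProducerProps {
    static Map<String, Object> build() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        // Workaround for old-format brokers: do not attach type-info headers.
        props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
        return props;
    }
}
```

Note that without the type headers, the consuming side's JsonDeserializer must be told the target type explicitly.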

Can a single Spring KafkaConsumer listener listen to multiple topics?

谁说胖子不能爱 · submitted on 2019-12-04 07:35:59
Does anyone know if a single listener can listen to multiple topics, like below? I know just "topic1" works; what if I want to add additional topics? Can you please show an example for both forms below? Thanks for the help! @KafkaListener(topics = "topic1,topic2") public void listen(ConsumerRecord<?, ?> record, Acknowledgment ack) { System.out.println(record); } or ContainerProperties containerProps = new ContainerProperties(new TopicPartitionInitialOffset("topic1, topic2", 0)); Yes, just follow the @KafkaListener JavaDocs: /** * The topics for this listener. * The entries can be 'topic name', 'property…
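The topics attribute is an array, so the annotation form looks like this (topic names are the question's placeholders):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;

class MultiTopicListener {

    // One listener subscribed to several topics via the array form of the attribute.
    @KafkaListener(topics = {"topic1", "topic2"})
    public void listen(ConsumerRecord<?, ?> record, Acknowledgment ack) {
        System.out.println(record);
        ack.acknowledge();
    }
}
```

The second form in the question would not work as written: TopicPartitionInitialOffset takes a single topic per instance, so "topic1, topic2" would be treated as one (nonexistent) topic name; pass one instance per topic/partition instead.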

Reactor Kafka: Exactly Once Processing Sample

人走茶凉 · submitted on 2019-12-04 06:46:18
Question: I've read many articles describing many different configurations for achieving exactly-once processing. Here is my producer config: final Map<String, Object> props = Maps.newConcurrentMap(); props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers); props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class); props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class); props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); props.put…
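Idempotence alone only deduplicates retries from a single producer session; exactly-once processing additionally needs a transactional producer paired with a read_committed consumer. A sketch of the commonly cited property set (the transactional id is a placeholder and must be stable per producer instance):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;

class ExactlyOnceProps {

    static Map<String, Object> producer() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // A transactional id enables transactions and fencing of zombie producers.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "tx-0");
        return props;
    }

    static Map<String, Object> consumer() {
        Map<String, Object> props = new HashMap<>();
        // Only read records from committed transactions.
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        return props;
    }
}
```

With Reactor Kafka, maps like these can be handed to `SenderOptions.create(...)` and `ReceiverOptions.create(...)`; the transactional send/commit flow itself is then driven through the sender's transaction manager.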

Spring Kafka The class is not in the trusted packages

早过忘川 · submitted on 2019-12-04 04:21:15
In my Spring Boot/Kafka application, before the library update I used the class org.telegram.telegrambots.api.objects.Update to post messages to the Kafka topic. Right now I use org.telegram.telegrambots.meta.api.objects.Update. As you can see, they have different packages. After restarting the application I ran into the following issue: [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] o.s.kafka.listener.LoggingErrorHandler : Error while processing: null org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for…
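The JsonDeserializer refuses to instantiate classes outside its trusted packages, so after the package rename the new location must be trusted explicitly. A sketch of the consumer-side setting (the package is the one from the question):

```java
import java.util.HashMap;
import java.util.Map;

import org.springframework.kafka.support.serializer.JsonDeserializer;

class ConsumerProps {
    static Map<String, Object> build() {
        Map<String, Object> props = new HashMap<>();
        // Trust the post-update package so deserialization is allowed.
        props.put(JsonDeserializer.TRUSTED_PACKAGES,
                "org.telegram.telegrambots.meta.api.objects");
        // Trusting everything is possible but only advisable in controlled setups:
        // props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
        return props;
    }
}
```

In a Spring Boot application the equivalent property is `spring.kafka.consumer.properties.spring.json.trusted.packages`. If old records with the old package name in their type headers are still on the topic, a type mapping from the old to the new class is also needed.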

Spring Kafka: Multiple Listeners for different objects within an ApplicationContext

安稳与你 · submitted on 2019-12-04 03:41:04
Can I please check with the community what is the best way to listen to multiple topics, with each topic containing a message of a different class? I've been playing around with Spring Kafka for the past couple of days. My thought process so far: because you need to pass your deserializer into DefaultKafkaConsumerFactory when initializing a KafkaListenerContainerFactory, it seems that if I need multiple containers, each deserializing a message of a different type, I will not be able to use the @EnableKafka and @KafkaListener annotations. This leads me to think that the only way to…
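The annotations do still work here: @KafkaListener has a containerFactory attribute, so each listener can pick a factory wired with its own target type. A sketch with hypothetical Foo/Bar payload classes and bean names:

```java
import java.util.Map;

import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.serializer.JsonDeserializer;

@Configuration
class MultiTypeConfig {

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, Foo> fooFactory(
            Map<String, Object> consumerProps) {
        ConcurrentKafkaListenerContainerFactory<String, Foo> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        // Each factory carries its own deserializer target type.
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(
                consumerProps, new StringDeserializer(), new JsonDeserializer<>(Foo.class)));
        return factory;
    }

    // ...a second bean "barFactory" built the same way for Bar.
}

class Listeners {

    @KafkaListener(topics = "foo-topic", containerFactory = "fooFactory")
    public void onFoo(Foo foo) { /* handle Foo */ }

    @KafkaListener(topics = "bar-topic", containerFactory = "barFactory")
    public void onBar(Bar bar) { /* handle Bar */ }
}
```

An alternative in newer spring-kafka versions is a single factory whose JsonDeserializer resolves the type from the record's type-info headers, avoiding one factory per class.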

Spring kafka consumer, seek offset at runtime?

我与影子孤独终老i · submitted on 2019-12-04 01:28:38
Question: I am using the KafkaMessageListenerContainer to consume from a Kafka topic, and I have application logic to process each record which depends on other microservices as well. I now commit the offset manually after each record is processed. But if the application logic fails, I need to seek back to the failed offset and keep processing it until it succeeds. For that I need to do a runtime manual seek to the last offset. Is this possible with the KafkaMessageListenerContainer yet…
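A runtime seek is possible through ConsumerSeekAware: the container hands the listener a per-thread seek callback, which can later re-seek a failed offset so the record is redelivered on the next poll. A simplified sketch with a hypothetical process() method standing in for the application logic:

```java
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.listener.ConsumerSeekAware;

class RetryingListener implements ConsumerSeekAware {

    // The callback is thread-bound; the container calls registerSeekCallback
    // once per consumer thread.
    private final ThreadLocal<ConsumerSeekCallback> seekCallback = new ThreadLocal<>();

    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback) {
        this.seekCallback.set(callback);
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
                                     ConsumerSeekCallback callback) { }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> assignments,
                                ConsumerSeekCallback callback) { }

    public void onMessage(ConsumerRecord<String, String> record) {
        try {
            process(record);
        } catch (RuntimeException e) {
            // Re-seek the failed offset so the same record is redelivered.
            seekCallback.get().seek(record.topic(), record.partition(), record.offset());
        }
    }

    private void process(ConsumerRecord<String, String> record) { /* app logic */ }
}
```

In later spring-kafka versions (2.0.1+) a SeekToCurrentErrorHandler configured on the container achieves the same redelivery behavior without hand-written seek code.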

Kafka in Kubernetes - Marking the coordinator dead for group

情到浓时终转凉″ · submitted on 2019-12-03 17:36:51
Question: I am pretty new to Kubernetes and wanted to set up Kafka and ZooKeeper with it. I was able to set up Apache Kafka and ZooKeeper in Kubernetes using StatefulSets. I followed this and this to build my manifest file. I made one replica each of Kafka and ZooKeeper, and also used persistent volumes. All pods are running and ready. I tried to expose Kafka using a Service, specifying a nodePort (30010). Seemingly this would expose Kafka to the outside world, where they can send messages to the…
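"Marking the coordinator dead" in such setups is often a listener-address problem rather than a broker failure (an assumption here, since the excerpt is cut off): the broker advertises an address that clients outside the cluster cannot reach, so the client finds the coordinator and then immediately loses it. A sketch of split internal/external listeners in the env-var style used by common Kafka images; the headless-service hostname and `<node-ip>` are placeholders:

```properties
# Bind both listeners on all interfaces.
KAFKA_LISTENERS=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:30010
# Advertise addresses reachable from each network: the in-cluster DNS name for
# pods, the NodePort address for clients outside the cluster.
KAFKA_ADVERTISED_LISTENERS=INTERNAL://kafka-0.kafka-headless:9092,EXTERNAL://<node-ip>:30010
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL
```

If the advertised external address does not resolve from the client's network, every coordinator lookup returns an unreachable endpoint and the group repeatedly marks the coordinator dead.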