spring-kafka

How to detect when a spring kafka consumer stops getting messages from 1 partition?

Posted by 魔方 西西 on 2019-12-13 02:58:06
Question: I have 3 Spring Kafka consumers (same group) getting messages from 3 partitions. I want to detect when one of these consumers stops reading from its partition (while the other 2 consumers continue reading from the other 2 partitions). This has happened twice so far, and when detected it is easy to fix by restarting all consumers, which causes a rebalance. The problem is that on both occasions it would have been good to know earlier. So I tried using ListenerContainerIdleEvent like so - @EventListener
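The excerpt cuts off at the @EventListener, but the approach it names can look like the minimal sketch below, assuming an idleEventInterval has been set on the container factory, e.g. factory.getContainerProperties().setIdleEventInterval(60_000L). The container that owns the stalled partition keeps emitting idle events while the others stay quiet; note the event also fires when a partition legitimately has no traffic. Bean and listener names here are illustrative.

```java
import org.springframework.context.event.EventListener;
import org.springframework.kafka.event.ListenerContainerIdleEvent;
import org.springframework.stereotype.Component;

@Component
public class IdleContainerMonitor {

    // Fires once per idleEventInterval while a container receives no records.
    @EventListener
    public void onIdle(ListenerContainerIdleEvent event) {
        System.out.println("Listener " + event.getListenerId()
                + " idle for " + event.getIdleTime() + " ms on partitions "
                + event.getTopicPartitions());
    }
}
```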

Kafka Spring: How to create Listeners dynamically or in a loop?

Posted by ℡╲_俬逩灬. on 2019-12-12 13:17:21
Question: I have 4 ConsumerFactory listeners that are reading from 4 different topics like this:
@KafkaListener(id = "test1", topicPattern = "test.topic1", groupId = "pp-test1")
public void listenTopic1(ConsumerRecord<String, String> record) {
    System.out.println("Topic is: " + record.topic());
}
But we'll have 50 topics and I want to set up at least 25 listeners for better performance. How can I do this dynamically instead of manually writing 25 listeners?
Answer 1: You cannot create @KafkaListeners
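The answer is truncated, but a common alternative consistent with its opening is to build listener containers in a loop from the injected container factory. A minimal sketch, where the topic list, group-id prefix, and handler are placeholders:

```java
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.MessageListener;

public class DynamicListeners {

    // factory is the auto-configured ConcurrentKafkaListenerContainerFactory
    public void startContainers(
            ConcurrentKafkaListenerContainerFactory<String, String> factory,
            List<String> topics) {
        for (String topic : topics) {
            ConcurrentMessageListenerContainer<String, String> container =
                    factory.createContainer(topic);
            container.getContainerProperties().setGroupId("pp-" + topic);
            container.getContainerProperties().setMessageListener(
                    (MessageListener<String, String>) record ->
                            System.out.println("Topic is: " + record.topic()));
            container.start();
        }
    }
}
```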

Spring @KafkaListener execute and poll records after certain interval

Posted by 时光总嘲笑我的痴心妄想 on 2019-12-12 07:20:38
Question: We want to consume records only after a certain interval (e.g. every 5 minutes). The consumer properties are standard:
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(1);
    factory.setBatchListener(true);
    factory
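The snippet is cut short. If spring-kafka 2.3 or later is available, one option (an assumption on my part, not necessarily what the original answer proposed) is the idleBetweenPolls container property. Here is the question's factory restated with that one extra line:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
        kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory()); // consumerFactory() as in the question
    factory.setConcurrency(1);
    factory.setBatchListener(true);
    // Sleep 5 minutes between polls (spring-kafka 2.3+). The consumer's
    // max.poll.interval.ms must be larger than this value, or the broker
    // evicts the consumer and triggers a rebalance.
    factory.getContainerProperties().setIdleBetweenPolls(300_000L);
    return factory;
}
```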

Publish null/tombstone message with raw headers

Posted by 为君一笑 on 2019-12-12 05:12:38
Question: I am building a Spring Cloud Stream Kafka processor app that consumes raw data with a String key and sometimes a null payload from a Kafka topic. I want to produce to another topic a String key and the null payload (known as a tombstone within Kafka). In order to use raw headers on the message, I need to output a byte[], but if I encode KafkaNull.INSTANCE into a byte[] it will literally output a String of the object's hashcode. If I try to send anything other than a byte[], I can't use
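For the tombstone itself, the usual approach (a sketch; the output channel and key are placeholders) is to make KafkaNull.INSTANCE the message payload rather than serializing it, since the Kafka binder translates that payload into a true null record value:

```java
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.KafkaNull;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.MessageBuilder;

public class TombstonePublisher {

    // KafkaNull.INSTANCE must be the payload itself, never encoded to byte[];
    // the Kafka binder turns it into a null record value on the wire.
    public void sendTombstone(MessageChannel output, String key) {
        Message<KafkaNull> tombstone = MessageBuilder
                .withPayload(KafkaNull.INSTANCE)
                .setHeader(KafkaHeaders.MESSAGE_KEY, key.getBytes())
                .build();
        output.send(tombstone);
    }
}
```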

Spring Kafka Consumer Client-Id configuration

Posted by ☆樱花仙子☆ on 2019-12-12 04:18:23
Question: I have two Kafka listener components, each listening to a different topic and expecting a different payload. My question is: can I use the same client-id for both, or does it have to be different? If the client-id has to be different, I would like to understand a use case where the client-id can be used effectively.
Answer 1: According to the docs: An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port
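In spring-kafka, distinct client ids are most easily produced with the clientIdPrefix attribute on @KafkaListener (the framework appends -0, -1, ... per container thread). Topic names, ids, and prefixes below are illustrative:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class TwoListeners {

    // Distinct prefixes make each consumer identifiable in broker logs,
    // quotas, and client metrics; spring-kafka appends "-n" per thread.
    @KafkaListener(id = "ordersListener", topics = "orders", clientIdPrefix = "orders-client")
    public void onOrder(ConsumerRecord<String, String> record) {
        System.out.println("order: " + record.value());
    }

    @KafkaListener(id = "paymentsListener", topics = "payments", clientIdPrefix = "payments-client")
    public void onPayment(ConsumerRecord<String, String> record) {
        System.out.println("payment: " + record.value());
    }
}
```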

Spring Cloud Stream default custom message headers

Posted by ε祈祈猫儿з on 2019-12-12 03:54:44
Question: Is there a way to configure the default Message<T> headers when the message is generated from the method return value:
@Publisher(channel = "theChannelname")
public MyObject someMethod(Object param) { ... return myObject; }
or
@SendTo("theChannelname")
public MyObject someMethod(Object param) { ... return myObject; }
In the examples above the Message<MyObject> will be generated automatically. So how can I control the default message generation?
Answer 1: You can do that via the @Header annotation for
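Another way to control the generated message, shown here as a sketch rather than the accepted answer's approach, is to return a Message<T> built with MessageBuilder so the headers travel with the payload. The header name and compute() helper are hypothetical:

```java
import org.springframework.messaging.Message;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.support.MessageBuilder;

public class Publishing {

    // Returning Message<T> instead of the bare payload lets the method set
    // headers explicitly; compute() stands in for the real business logic.
    @SendTo("theChannelname")
    public Message<MyObject> someMethod(Object param) {
        MyObject myObject = compute(param);
        return MessageBuilder.withPayload(myObject)
                .setHeader("myCustomHeader", "myValue")
                .build();
    }

    private MyObject compute(Object param) {
        return new MyObject(); // placeholder
    }

    static class MyObject {
    }
}
```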

Invalid Keystore Format, BootStrap Broker Disconnected

Posted by 送分小仙女□ on 2019-12-11 23:42:02
Question: I am trying to develop a Kafka consumer in Spring Boot. I am able to set up the Kafka cluster in Kafka Tool and read the messages from it manually. I am using the same configs in Spring Boot as well, but ended up with the errors below and this warning:
2019-06-10 13:45:40.036 WARN 8364 --- [ id3-0-C-1] org.apache.kafka.clients.NetworkClient : Bootstrap broker XXXXXX.DEVHADOOP.XXXX.COM:6768 disconnected
2019-06-10 13:45:40.038 WARN 8364 --- [ id1-0-C-1] org.apache.kafka.clients
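Since the title mentions "Invalid Keystore Format", the usual suspects are a truststore/keystore that is not actually a JKS file (or was corrupted, e.g. by Maven resource filtering) and a missing SSL security protocol. A hedged sketch of the consumer-side SSL properties, with placeholder paths and passwords:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SslConfigs;

public class SslProps {

    // Merged into the consumer config map; paths and passwords are placeholders.
    public static Map<String, Object> sslProperties() {
        Map<String, Object> props = new HashMap<>();
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        // "Invalid keystore format" usually means this file is not a real JKS
        // (wrong type, a PEM, or mangled by build-time resource filtering).
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/path/to/keystore.jks");
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "changeit");
        return props;
    }
}
```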

Spring-Cloud-Stream-Kafka Custom Health check not working

Posted by 一笑奈何 on 2019-12-11 21:48:36
Question: I am using spring-cloud-stream-kafka in my Spring Boot (consumer) application. The health of the app is inaccurate: it reports 'UP' even when the app can't connect to Kafka (the Kafka broker is down). I have read articles on Kafka health checks. It looks like the Kafka health check is disabled in the Spring Actuator health check, so I managed to write the following code to enable a Kafka health check for my app. I think I am missing some connection between the app config and my code, and I don't see the Kafka health
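For reference, a minimal custom HealthIndicator along these lines. This is a sketch: it assumes an AdminClient bean is configured elsewhere, and relies on the Actuator convention that a bean named kafkaHealthIndicator surfaces as the "kafka" entry in /actuator/health:

```java
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component("kafkaHealthIndicator")
public class KafkaHealthIndicator implements HealthIndicator {

    private final AdminClient adminClient; // assumed to be defined as a bean

    public KafkaHealthIndicator(AdminClient adminClient) {
        this.adminClient = adminClient;
    }

    @Override
    public Health health() {
        try {
            // A cheap metadata call; it fails fast when no broker is reachable.
            String clusterId = adminClient.describeCluster()
                    .clusterId().get(3, TimeUnit.SECONDS);
            return Health.up().withDetail("clusterId", clusterId).build();
        } catch (Exception e) {
            return Health.down(e).build();
        }
    }
}
```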

Mirror kafka topics with custom configuration as a standalone application

Posted by ↘锁芯ラ on 2019-12-11 19:14:52
Question: I want to mirror some topics from one broker to another, just a subset of all topics. There's an existing MirrorMaker tool for this, but I also want to change the destination topic names, and a custom message handler already does that. Nevertheless, it doesn't fit my needs. My requirements are:
- mirror topics with the possibility to provide overrides for each destination topic name
- detect new source topics on the fly
- run it as a standalone application (e.g. Java, Spring Boot) instead of a CLI
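A bare-bones sketch of such a mirror loop with the plain Kafka clients; the subscription pattern, override map, and byte[] serde choice are assumptions, and offset management, error handling, and producer flushing are omitted:

```java
import java.time.Duration;
import java.util.Map;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TopicMirror {

    // overrides maps source topic names to destination names; topics without
    // an entry keep their name. Subscribing with a Pattern picks up new
    // source topics automatically (metadata.max.age.ms controls how fast).
    public void mirror(KafkaConsumer<byte[], byte[]> source,
                       KafkaProducer<byte[], byte[]> target,
                       Map<String, String> overrides) {
        source.subscribe(Pattern.compile("orders\\..*")); // example pattern
        while (true) {
            for (ConsumerRecord<byte[], byte[]> rec : source.poll(Duration.ofSeconds(1))) {
                String dest = overrides.getOrDefault(rec.topic(), rec.topic());
                target.send(new ProducerRecord<>(dest, rec.key(), rec.value()));
            }
        }
    }
}
```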

What is the proper way of doing @DirtiesContext when using @EmbeddedKafka

Posted by 扶醉桌前 on 2019-12-11 19:08:58
Question: We have a "little" problem in our project: "Connection to node 0 could not be established. Broker may not be available." Tests run for a very, very long time, and this message is logged at least once every second. But I found out how to get rid of it; read on. If there is something incorrect in the configurations/annotations, please let me know. Versions first:
<springframework.boot.version>2.1.8.RELEASE</springframework.boot.version>
which automatically brings
<spring-kafka.version>2.2.8.RELEASE
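For context, a combination that typically stops stale contexts from hammering an embedded broker that has already shut down looks roughly like this. A sketch for the Boot 2.1 / JUnit 4 setup implied by those versions; the topic name is a placeholder:

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;

// Pointing the app at the embedded broker via the placeholder property keeps
// the consumers from probing localhost:9092 or a broker from another test.
@RunWith(SpringRunner.class)
@SpringBootTest(properties = "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}")
@EmbeddedKafka(partitions = 1, topics = "test.topic")
@DirtiesContext // close this context (and its consumers) instead of caching it
public class EmbeddedKafkaIT {

    @Test
    public void contextLoads() {
        // test body omitted; the point is the annotation combination above
    }
}
```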