spring-kafka

How to use Micrometer Timer to record duration of async method (returns Mono or Flux)

做~自己de王妃 · Submitted on 2020-01-14 10:11:18
Question: I'd like to use Micrometer to record the execution time of an async method when it eventually completes. Is there a recommended way to do this? Example: a Kafka ReplyingKafkaTemplate. I want to record the time it takes to actually execute the sendAndReceive call (it sends a message on a request topic and receives a response on a reply topic). public Mono<String> sendRequest(Mono<String> request) { return request .map(r -> new ProducerRecord<String, String>(requestsTopic, r)) .map(pr -> { pr.headers()
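One possible approach (a sketch only, not taken from the post) is to start a Micrometer Timer.Sample when the inner Mono is subscribed and stop it when the reply arrives or fails; meterRegistry, replyingKafkaTemplate and the metric name are assumed names, not part of the question.

    import io.micrometer.core.instrument.MeterRegistry;
    import io.micrometer.core.instrument.Timer;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import reactor.core.publisher.Mono;

    public Mono<String> sendRequest(Mono<String> request) {
        return request
            .map(r -> new ProducerRecord<String, String>(requestsTopic, r))
            .flatMap(pr -> Mono.defer(() -> {
                // timing starts only when this inner Mono is subscribed
                Timer.Sample sample = Timer.start(meterRegistry);
                return Mono.fromFuture(replyingKafkaTemplate.sendAndReceive(pr).completable())
                    .map(ConsumerRecord::value)
                    // stop on complete, error or cancel and record under a single timer
                    .doFinally(signal -> sample.stop(meterRegistry.timer("kafka.send.receive")));
            }));
    }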

How to create separate Kafka listener for each topic dynamically in springboot?

百般思念 · Submitted on 2020-01-12 07:39:29
Question: I am new to Spring and Kafka. I am working on a use case (using Spring Boot with spring-kafka) where users are allowed to create Kafka topics at runtime, and the Spring application is expected to subscribe to these topics programmatically at runtime. What I know so far is that Kafka listeners are defined at design time, so topics need to be specified before startup. Is there a way to subscribe to Kafka topics dynamically with Spring Boot and spring-kafka? I referred to this: https://github.com/spring-projects/spring
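One possible direction (a sketch, not from the post) is to build a listener container programmatically whenever a new topic shows up; the ConsumerFactory bean, the listener body and the package locations (which vary slightly across spring-kafka versions) are assumptions.

    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
    import org.springframework.kafka.listener.ContainerProperties;
    import org.springframework.kafka.listener.MessageListener;

    public ConcurrentMessageListenerContainer<String, String> subscribeAtRuntime(
            String topic, ConsumerFactory<String, String> consumerFactory) {
        ContainerProperties containerProps = new ContainerProperties(topic);
        // placeholder listener; plug in real processing here
        containerProps.setMessageListener((MessageListener<String, String>) record ->
                System.out.println("Received from " + topic + ": " + record.value()));
        ConcurrentMessageListenerContainer<String, String> container =
                new ConcurrentMessageListenerContainer<>(consumerFactory, containerProps);
        container.start();
        return container;
    }

Keeping a reference to the returned container also makes it possible to stop() the subscription later if the topic goes away.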

Spring Cloud Kafka Stream Unable to create Producer Config Error

人盡茶涼 · Submitted on 2020-01-12 05:24:49
Question: I have two Spring Boot projects with Kafka Streams dependencies. They have exactly the same dependencies in Gradle and exactly the same configuration, yet when started, one of the projects logs an error, beginning with: 11:35:37.974 [restartedMain] INFO o.a.k.c.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [192.169.0.109:6667] client.id = client connections.max.idle.ms = 300000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO

Spring-boot and Spring-Kafka compatibility matrix

末鹿安然 · Submitted on 2020-01-11 10:21:51
Question: I am looking for a compatibility matrix of the different parts of the Spring framework. More specifically, I am looking for the newest Spring Kafka version that is compatible with Spring Boot 1.5.2. I found an old compatibility matrix for Spring, but it was from 2014 and is therefore outdated. I am not concerned about Spring Kafka / Apache Kafka client compatibility, nor about Apache Kafka Java client / Kafka broker compatibility; these compatibility matrices are

Update KTable based on partial data attributes

依然范特西╮ · Submitted on 2020-01-05 17:57:51
Question: I am trying to update a KTable with partial data of an object. E.g. the User object is {"id":1, "name":"Joe", "age":28}. The object is being streamed into a topic and grouped by key into a KTable. Now the user object is partially updated as follows: {"id":1, "age":33} and streamed into the table, but the updated table looks like {"id":1, "name":null, "age":28}. The expected output is {"id":1, "name":"Joe", "age":33}. How can I use Kafka Streams and Spring Cloud Stream to achieve the expected
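A direction that is often suggested for this (a sketch, not from the post) is to read the topic as a change stream and aggregate the partial updates yourself, copying only the non-null fields onto the stored value; builder is a StreamsBuilder, and the User class, its accessors (with a nullable Integer age) and userSerde are assumptions.

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;

    KTable<Integer, User> users = builder
        .stream("user-updates", Consumed.with(Serdes.Integer(), userSerde))
        .groupByKey(Grouped.with(Serdes.Integer(), userSerde))
        .aggregate(
            User::new,                       // empty aggregate for a previously unseen key
            (id, update, current) -> {
                // merge: overwrite only the fields the partial update actually carries
                if (update.getName() != null) current.setName(update.getName());
                if (update.getAge() != null) current.setAge(update.getAge());
                return current;
            },
            Materialized.with(Serdes.Integer(), userSerde));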

Kafka Docker and port forwarding from 9092 to 9093

别等时光非礼了梦想. · Submitted on 2020-01-05 07:38:08
Question: I have configured Kafka + ZooKeeper via the fabric8 docker-maven-plugin (https://dmp.fabric8.io/): <image> <name>wurstmeister/zookeeper:latest</name> <alias>zookeeper</alias> <run> <ports> <port>2181:2181</port> </ports> </run> </image> <image> <name>wurstmeister/kafka:1.0.0</name> <alias>kafka</alias> <run> <ports> <port>9092:9092</port> </ports> <links> <link>zookeeper:zookeeper</link> </links> <env> <KAFKA_ADVERTISED_HOST_NAME>127.0.0.1</KAFKA_ADVERTISED_HOST_NAME> <KAFKA_ZOOKEEPER_CONNECT>zookeeper
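For reference, a commonly cited way to remap the port (a sketch, not from the post) is to publish host port 9093 against the container's 9092 and advertise 9093 to clients; the exact environment variable names depend on the wurstmeister image version and are an assumption here.

    <ports>
        <port>9093:9092</port>
    </ports>
    <env>
        <KAFKA_ADVERTISED_HOST_NAME>127.0.0.1</KAFKA_ADVERTISED_HOST_NAME>
        <KAFKA_ADVERTISED_PORT>9093</KAFKA_ADVERTISED_PORT>
        <KAFKA_ZOOKEEPER_CONNECT>zookeeper:2181</KAFKA_ZOOKEEPER_CONNECT>
    </env>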

How does Consumer.endOffsets work in Kafka?

无人久伴 · Submitted on 2020-01-03 17:17:36
Question: Assume I have a timer task running indefinitely that iterates over all the consumer groups in the Kafka cluster and outputs the lag, committed offset, and end offset for every partition of each group, similar to what the Kafka console consumer-groups script does, except for all groups. Something like: Single Consumer (not working; it doesn't return offsets for some of the provided topic partitions, e.g. 10 provided, 5 offsets returned): Consumer consumer; static { consumer = createConsumer(); }
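For reference, a minimal sketch (not the poster's code) of how endOffsets is typically combined with the committed offsets of a group to compute lag; the AdminClient, the consumer from createConsumer() and the group id are assumed to be configured elsewhere.

    import java.util.Map;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    static void printLag(AdminClient adminClient, Consumer<String, String> consumer, String groupId)
            throws Exception {
        // committed offsets for every partition the group has committed to
        Map<TopicPartition, OffsetAndMetadata> committed = adminClient
                .listConsumerGroupOffsets(groupId)
                .partitionsToOffsetAndMetadata()
                .get();
        // end offsets for exactly those partitions, fetched in one call
        Map<TopicPartition, Long> endOffsets = consumer.endOffsets(committed.keySet());
        committed.forEach((tp, offset) -> {
            long end = endOffsets.getOrDefault(tp, 0L);
            System.out.printf("%s committed=%d end=%d lag=%d%n",
                    tp, offset.offset(), end, end - offset.offset());
        });
    }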

Spring-Boot and Kafka : How to handle broker not available?

和自甴很熟 · Submitted on 2020-01-03 14:53:27
Question: While the Spring Boot app is running, if I shut down the broker completely (both Kafka and ZooKeeper), I see this warning repeated in the console indefinitely: [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] WARN o.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=ResponseReceiveConsumerGroup] Connection to node 2147483647 could not be established. Broker may not be available. Is there a way in Spring Boot to handle this gracefully instead
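One mitigation that is sometimes suggested (a sketch, not an official recipe) is to widen the client's reconnect backoff so the warning is logged far less often while the broker is down; the bean below goes in a @Configuration class, and the chosen values are illustrative.

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

    @Bean
    public ConsumerFactory<String, String> consumerFactory(KafkaProperties kafkaProperties) {
        Map<String, Object> props = new HashMap<>(kafkaProperties.buildConsumerProperties());
        // back off from 1s up to 30s between reconnect attempts instead of the defaults
        props.put(CommonClientConfigs.RECONNECT_BACKOFF_MS_CONFIG, 1000);
        props.put(CommonClientConfigs.RECONNECT_BACKOFF_MAX_MS_CONFIG, 30000);
        return new DefaultKafkaConsumerFactory<>(props);
    }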

How to set acknowledgement when retries are exhausted in a Kafka consumer

寵の児 · Submitted on 2020-01-02 21:53:11
Question: I have a Kafka consumer that retries 5 times, and I am using Spring Kafka with a RetryTemplate. If all retries fail, how does acknowledgement work in that case? Also, if I have set the ack mode to manual, how do I acknowledge those messages? Consumer: @Bean("kafkaListenerContainerFactory") public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(RetryTemplate retryTemplate) { ConcurrentKafkaListenerContainerFactory<String, String> factory = new
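With the retry-template approach (spring-kafka versions that still support setRetryTemplate), a RecoveryCallback runs once the retries are exhausted and can acknowledge, or dead-letter, the record; a sketch under that assumption, with bean names and the dead-letter comment purely illustrative.

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.listener.ContainerProperties;
    import org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter;
    import org.springframework.kafka.support.Acknowledgment;
    import org.springframework.retry.support.RetryTemplate;

    @Bean("kafkaListenerContainerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory, RetryTemplate retryTemplate) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        factory.setRetryTemplate(retryTemplate);
        factory.setRecoveryCallback(context -> {
            // invoked only after the RetryTemplate has given up on the record
            ConsumerRecord<?, ?> record = (ConsumerRecord<?, ?>)
                    context.getAttribute(RetryingMessageListenerAdapter.CONTEXT_RECORD);
            Acknowledgment ack = (Acknowledgment)
                    context.getAttribute(RetryingMessageListenerAdapter.CONTEXT_ACKNOWLEDGMENT);
            // e.g. publish "record" to a dead-letter topic here, then commit below
            if (ack != null) {
                ack.acknowledge();
            }
            return null;
        });
        return factory;
    }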

Spring Kafka Template implementation example for seek offset, acknowledgement

假装没事ソ · Submitted on 2020-01-01 20:49:32
Question: I am new to spring-kafka-template. I tried some basic things with it and they work fine, but now I am trying to implement some of the concepts mentioned in the Spring docs, such as offset seeking and acknowledging listeners. I tried to find examples for these on the net but was unsuccessful; the only thing I found was the source code. We have the same issue as mentioned in this post: Spring kafka consumer, seek offset at runtime. But there is no example available that implements it. Can someone give any example
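A small sketch (not from the post, assuming MANUAL ack mode on the container factory and a reasonably recent spring-kafka that provides AbstractConsumerSeekAware) showing the two pieces together: a listener that acknowledges manually and seeks each newly assigned partition to a chosen offset; the topic name and the offset 0 are illustrative.

    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.TopicPartition;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.listener.AbstractConsumerSeekAware;
    import org.springframework.kafka.support.Acknowledgment;
    import org.springframework.stereotype.Component;

    @Component
    public class SeekingAckListener extends AbstractConsumerSeekAware {

        @KafkaListener(topics = "my-topic")
        public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
            // ... process the record ...
            ack.acknowledge();   // commit the offset manually
        }

        // called whenever partitions are assigned; rewind each one to offset 0 as a demo
        @Override
        public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
                                         ConsumerSeekCallback callback) {
            super.onPartitionsAssigned(assignments, callback);
            assignments.keySet().forEach(tp -> callback.seek(tp.topic(), tp.partition(), 0L));
        }
    }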