spring-kafka

Kafka streams exactly once delivery

家住魔仙堡 submitted on 2019-12-31 04:47:35
Question: My goal is to consume from topic A, do some processing and produce to topic B, as a single atomic action. To achieve this I see two options: (1) use a spring-kafka @KafkaListener and a KafkaTemplate as described here, or (2) use Kafka Streams eos (exactly-once) functionality. I have successfully verified option #1. By successfully, I mean that if my processing fails (an IllegalArgumentException is thrown) the consumed message from topic A keeps being consumed by the KafkaListener. This is what I expect, as the
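
For orientation, here is a minimal sketch of what option #1 typically looks like. This is a hedged illustration, not the poster's code: the topic names, group id and process() method are placeholders, and it assumes spring.kafka.producer.transaction-id-prefix is set so that Boot configures a KafkaTransactionManager for the listener container.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class ForwardingListener {

    private final KafkaTemplate<String, String> template;

    public ForwardingListener(KafkaTemplate<String, String> template) {
        this.template = template;
    }

    // With a transactional producer factory and a KafkaTransactionManager on the
    // container, the listener runs inside a Kafka transaction; if process() throws,
    // the send is rolled back and the record from topic A is redelivered.
    @KafkaListener(topics = "topic-A", groupId = "atomic-forwarder")
    public void onMessage(String value) {
        String result = process(value);          // may throw IllegalArgumentException
        template.send("topic-B", result);        // participates in the same transaction
    }

    private String process(String value) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException("empty payload");
        }
        return value.toUpperCase();
    }
}

With that setup, an exception thrown inside onMessage() rolls back the Kafka transaction, so the record from topic A is redelivered rather than lost.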

Health for Kafka Binder is always UNKNOWN

妖精的绣舞 submitted on 2019-12-31 03:51:17
Question: When I try to activate the health indicator for the Kafka binder as explained in the Spring Cloud Stream Reference Documentation, the health endpoint returns: {"binders":{"status":"UNKNOWN","kafka":{"status":"UNKNOWN"}}}. My configuration contains, as documented: management.health.binders.enabled=true. I have already debugged BindersHealthIndicatorAutoConfiguration and noticed that no HealthIndicator is registered in the binderContext. Do I have to register a custom HealthIndicator as a bean, or what steps
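
While the binder indicator stays UNKNOWN, one possible workaround (an assumption, not the documented fix) is to register a plain broker-reachability HealthIndicator yourself. The bootstrap address below is a placeholder, and creating an AdminClient per check is only sketch-grade.

import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaHealthConfig {

    // Hypothetical fallback: report UP if the cluster answers a describeCluster call.
    @Bean
    public HealthIndicator kafkaBrokerHealthIndicator() {
        return () -> {
            Map<String, Object> props = Map.of(
                    AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (AdminClient client = AdminClient.create(props)) {
                String clusterId = client.describeCluster()
                        .clusterId().get(5, TimeUnit.SECONDS);
                return Health.up().withDetail("clusterId", clusterId).build();
            } catch (Exception e) {
                return Health.down(e).build();
            }
        };
    }
}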

Tombstone messages not removing record from KTable state store?

会有一股神秘感。 submitted on 2019-12-30 10:33:49
Question: I am creating a KTable from data processed out of a KStream. But when I trigger a tombstone message with a key and a null payload, it is not removing the record from the KTable. Sample:

public KStream<String, GenericRecord> processRecord(@Input(Channel.TEST) KStream<GenericRecord, GenericRecord> testStream,
    KTable<String, GenericRecord> table = testStream
        .map((genericRecord, genericRecord2) -> KeyValue.pair(genericRecord.get("field1") + "", genericRecord2))
        .groupByKey()
        .reduce((genericRecord, v1) -> v1,
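
A commonly cited explanation is that groupByKey().reduce() silently drops records with null values, so the tombstone never reaches the state store. Below is a hedged sketch of one alternative: building the table directly from the re-keyed stream, where null values are treated as deletes. It assumes Kafka Streams 2.5+ for KStream.toTable(); the field and store names are placeholders.

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public class TombstoneExample {

    public KTable<String, GenericRecord> toTable(KStream<GenericRecord, GenericRecord> testStream) {
        return testStream
                // re-key on field1; a null value (tombstone) is passed through unchanged
                .map((key, value) -> KeyValue.pair(String.valueOf(key.get("field1")), value))
                // when converting a stream to a table, null values become deletes in the store
                .toTable(Materialized.as("test-store"));
    }
}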

How to write Kafka consumers - single threaded vs multi threaded

耗尽温柔 submitted on 2019-12-30 05:59:17
Question: I have written a single Kafka consumer (using Spring Kafka) that reads from a single topic and is part of a consumer group. Once a message is consumed, it performs all downstream operations and moves on to the next message offset. I have packaged this as a WAR file and my deployment pipeline pushes it out to a single instance. Using my deployment pipeline, I could potentially deploy this artifact to multiple instances in my deployment pool. However, I am not able to understand the
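
As a point of reference for the threading question, here is a sketch of scaling consumption inside one instance by raising container concurrency. The bootstrap address, group id and concurrency value are placeholders.

import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
@EnableKafka
public class ConsumerConcurrencyConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(Map.of(
                ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                ConsumerConfig.GROUP_ID_CONFIG, "my-group",
                ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class,
                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // 3 consumer threads in this JVM; deploying more WAR instances with the same
        // group id spreads partitions across instances in the same way.
        factory.setConcurrency(3);
        return factory;
    }
}

Either way, total parallelism is capped by the topic's partition count: threads or instances beyond the number of partitions simply sit idle.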

Transaction Synchronization in Spring Kafka

醉酒当歌 submitted on 2019-12-29 03:23:06
Question: I want to synchronize a Kafka transaction with a repository transaction:

@Transactional
public void syncTransaction() {
    myRepository.save(someObject);
    kafkaTemplate.send(someEvent);
}

Since the merge of https://github.com/spring-projects/spring-kafka/issues/373, and according to the docs, this is possible. Nevertheless I have problems understanding and implementing that feature. Looking at the example in https://docs.spring.io/spring-kafka/reference/htmlsingle/#_transaction_synchronization I have to
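
Below is a hedged sketch of the best-effort one-phase-commit arrangement the reference guide describes: a DB @Transactional boundary drives the method, and a KafkaTemplate backed by a transactional producer factory synchronizes its local Kafka transaction with it. MyRepository, SomeObject and the topic name are placeholder stand-ins for the poster's types.

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SyncedPublisher {

    // Placeholder stand-ins for the poster's repository and entity.
    public interface MyRepository { void save(Object entity); }
    public static class SomeObject { }

    private final MyRepository myRepository;
    private final KafkaTemplate<String, String> kafkaTemplate;

    public SyncedPublisher(MyRepository myRepository, KafkaTemplate<String, String> kafkaTemplate) {
        this.myRepository = myRepository;
        this.kafkaTemplate = kafkaTemplate;
    }

    // @Transactional refers to the JPA/DataSource transaction manager; with a transactional
    // producer factory, the template's local Kafka transaction is synchronized with it, so the
    // send is committed only after the database commit succeeds (best-effort 1PC, not XA).
    @Transactional
    public void syncTransaction(SomeObject someObject, String someEvent) {
        myRepository.save(someObject);
        kafkaTemplate.send("events", someEvent);
    }
}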

Is it possible to get exactly once processing with Spring Cloud Stream?

自作多情 submitted on 2019-12-25 01:50:26
Question: Currently I'm using SCS with an almost default configuration for sending and receiving messages between microservices. I came across https://www.confluent.io/blog/enabling-exactly-kafka-streams and wonder whether it would work if we just set the property "processing.guarantee" to "exactly-once" through properties in a Spring Boot application. Answer 1: In the context of your question you should look at Spring Cloud Stream as just a delegate between the target system (e
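
For orientation, processing.guarantee is a Kafka Streams setting. The sketch below shows how it is expressed as plain Streams configuration; with the Spring Cloud Stream Kafka Streams binder the same key is normally passed through the binder's streams configuration properties, the exact property prefix depending on the binder version (an assumption, not taken from the answer above).

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class EosConfigExample {

    public static Properties streamsProperties() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-scs-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // "exactly_once" enables transactions and read_committed inside the topology;
        // it does not cover side effects outside Kafka (DB writes, HTTP calls, etc.).
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return props;
    }
}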

Setting up Kafka on Openshift with strimzi

牧云@^-^@ submitted on 2019-12-25 01:49:10
Question: I am trying to set up a Kafka cluster on the OpenShift platform using this guide: https://developers.redhat.com/blog/2018/10/29/how-to-run-kafka-on-openshift-the-enterprise-kubernetes-with-amq-streams/ I have my Zookeeper and Kafka clusters running as shown here, and when running my application I input the route to the my-cluster-kafka-external bootstrap as the bootstrap-servers. But when I try to send a message to Kafka I get this message: 21:32:40.548 [http-nio-8080-exec-1] ERROR o.s.k.s
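
One frequent cause of send failures against Strimzi's external Route listener (an assumption here, since the stack trace above is cut off) is missing TLS configuration: the Route exposes the brokers over TLS on port 443, so the client needs SSL plus the cluster CA in a truststore. A sketch with placeholder host, truststore path and password:

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class StrimziClientProps {

    public static Properties routeProducerProps() {
        Properties props = new Properties();
        // Route hostname of the external bootstrap; port 443 because the Route terminates TLS.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "my-cluster-kafka-bootstrap-myproject.apps.example.com:443");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/tmp/cluster-ca.p12");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
        props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PKCS12");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return props;
    }
}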

Unable to consume Kafka messages within Spring Boot

强颜欢笑 submitted on 2019-12-25 01:35:12
Question: We have a Java application which consumes Kafka messages using org.apache.kafka.clients.consumer.KafkaConsumer. We have created a Spring Boot application with a spring-kafka dependency, but are unable to read the messages within the new project. We have checked the obvious parameters, including the hostname and port of the bootstrap servers (which the logs show are recognized), the group, the topic, and that Spring Boot, like the original consumer, uses StringDeserializer. Here is our configuration
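
For comparison, a minimal Boot-style listener sketch (topic and group id are placeholders; the properties attribute needs spring-kafka 2.2.4+). One common difference from a hand-rolled KafkaConsumer is auto.offset.reset: the Kafka default used by Boot is "latest", so a freshly created group may simply be waiting for new records rather than failing.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class DemoListener {

    // Overrides auto.offset.reset for this listener so existing records are read
    // the first time the (new) group subscribes.
    @KafkaListener(topics = "my-topic", groupId = "my-new-group",
                   properties = "auto.offset.reset=earliest")
    public void listen(String message) {
        System.out.println("received: " + message);
    }
}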

Spring Kafka : Subscribe to a new Topic Pattern during Runtime

孤人 submitted on 2019-12-25 01:15:02
Question: I'm using the @KafkaListener annotation to consume topics in my application. I need to change the topic pattern at runtime in an already running consumer, so that new topics matching the new pattern can be consumed. I tried the code below, but it still consumes the topics matching the old topic pattern. Here, I have set the "old-topic-pattern" at application start-up. Then, I'm updating the pattern to "new-topic-pattern" every 10 seconds using a Spring @Scheduled task. Class
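
A hedged sketch of one way this is often handled (an assumption, not the accepted answer): the subscription of a running container cannot be re-pointed at a new pattern, so a fresh container is created for the new pattern and the old one is stopped. It assumes spring-kafka 2.2+ for factory.createContainer(Pattern) and a consumer factory that already defines a group id; all names are placeholders.

import java.util.regex.Pattern;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.MessageListener;
import org.springframework.stereotype.Component;

@Component
public class PatternSwitcher {

    private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
    private ConcurrentMessageListenerContainer<String, String> current;

    public PatternSwitcher(ConcurrentKafkaListenerContainerFactory<String, String> factory) {
        this.factory = factory;
    }

    public synchronized void switchTo(String newPattern) {
        if (current != null) {
            current.stop();                      // stop consuming the old pattern
        }
        // Containers created this way are not registered with the endpoint registry,
        // so they must be started explicitly.
        current = factory.createContainer(Pattern.compile(newPattern));
        current.getContainerProperties()
               .setMessageListener((MessageListener<String, String>) rec ->
                       System.out.println("received: " + rec.value()));
        current.start();                         // new consumer subscribes with the new pattern
    }
}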