spring-kafka

KStream send record to multiple streams (not Branch)

Submitted by 匆匆过客 on 2019-12-02 13:52:08
Question: Is there a way to perform a branch-like operation but place the record in every output stream whose predicate evaluates to true? branch() puts the record into the first match only (documentation: "A record is placed to one and only one output stream on the first match").

Answer 1: You can "broadcast" and filter each stream individually:

    KStream stream = ...
    stream1 = stream.filter(...);
    stream2 = stream.filter(...);
    // and so on...

If you use the stream variable multiple times, all records are broadcast to all downstream
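
For illustration, a slightly fuller sketch of the broadcast-and-filter pattern described in the answer; topic names and predicates are hypothetical:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;

    public class BroadcastFilterTopology {

        public static StreamsBuilder build() {
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> stream = builder.stream("input-topic");

            // Unlike branch(), re-using the same KStream instance sends every
            // record through each child operator, so a record that matches
            // several predicates is written to several output topics.
            stream.filter((key, value) -> value.startsWith("a")).to("output-a");
            stream.filter((key, value) -> value.startsWith("b")).to("output-b");
            stream.filter((key, value) -> value.contains("x")).to("output-x");
            return builder;
        }
    }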

Setting up Kafka on Openshift with strimzi

Submitted by 社会主义新天地 on 2019-12-02 10:19:56
I am trying to set up a Kafka cluster on the OpenShift platform using this guide: https://developers.redhat.com/blog/2018/10/29/how-to-run-kafka-on-openshift-the-enterprise-kubernetes-with-amq-streams/ My ZooKeeper and Kafka clusters are running, and when running my application I use the route to the my-cluster-kafka-external bootstrap as the bootstrap-servers value. But when I try to send a message to Kafka I get this message:

    21:32:40.548 [http-nio-8080-exec-1] ERROR o.s.k.s.LoggingProducerListener () - Exception thrown when sending a message with key='key' and payload='Event(id
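
The excerpt is cut off before the full stack trace, but a frequent cause with Strimzi external listeners is that the OpenShift route terminates TLS on port 443, so the client must connect over SSL with the cluster CA certificate in its truststore. A sketch of the client configuration under that assumption (host name, truststore path, and password are placeholders):

    # application.properties -- all values are placeholders
    spring.kafka.bootstrap-servers=my-cluster-kafka-bootstrap-myproject.apps.example.com:443
    spring.kafka.properties.security.protocol=SSL
    spring.kafka.ssl.trust-store-location=file:/path/to/truststore.jks
    spring.kafka.ssl.trust-store-password=changeit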

Kafka streams exactly once delivery

Submitted by ☆樱花仙子☆ on 2019-12-02 08:16:52
My goal is to consume from topic A, do some processing, and produce to topic B, as a single atomic action. To achieve this I see two options:

1. Use a spring-kafka @KafkaListener and a KafkaTemplate, as described here.
2. Use Streams EOS (exactly-once) functionality.

I have successfully verified option #1. By successfully, I mean that if my processing fails (an IllegalArgumentException is thrown), the consumed message from topic A keeps being consumed by the KafkaListener. This is what I expect, as the offset is not committed and the DefaultAfterRollbackProcessor is used. I am expecting to see the same
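
For option #2, a minimal sketch of a Streams topology with exactly-once processing enabled; the application id, bootstrap servers, and processing step are placeholders:

    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class ExactlyOnceAtoB {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "a-to-b-processor");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // The whole consume-process-produce cycle becomes a single atomic unit.
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

            StreamsBuilder builder = new StreamsBuilder();
            builder.<String, String>stream("topicA")
                   .mapValues(value -> value.toUpperCase()) // placeholder processing
                   .to("topicB");

            new KafkaStreams(builder.build(), props).start();
        }
    }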

Embedded Kafka: KTable+KTable leftJoin produces duplicate records

Submitted by 五迷三道 on 2019-12-02 08:16:10
Question: I come seeking knowledge of the arcane. First, I have two pairs of topics, with one topic in each pair feeding into the other topic. Two KTables are formed from the latter topics and used in a KTable+KTable leftJoin. The problem is that the leftJoin produces THREE records when I produce a single record to either KTable. I would expect two records, in the form (A-null, A-B), but instead I get (A-null, A-B, A-null). I have confirmed that the KTables are receiving a single record each. I have
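
For reference, a minimal sketch of a KTable+KTable leftJoin with the shape the question describes; topic names and the value joiner are hypothetical:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KTable;

    public class TableJoinTopology {

        public static StreamsBuilder build() {
            StreamsBuilder builder = new StreamsBuilder();

            // Each KTable is materialized from the second topic of its pair.
            KTable<String, String> left = builder.table("left-table-topic");
            KTable<String, String> right = builder.table("right-table-topic");

            // Left join: emits (A-null) while the right side is absent,
            // then (A-B) once both sides are present.
            left.leftJoin(right, (l, r) -> l + "-" + r)
                .toStream()
                .to("joined-topic");
            return builder;
        }
    }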

How to listen for the right ACK message from Kafka

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-02 04:10:06
I am doing a POC with Spring Boot and Kafka for a transactional project, and I have the following doubt. Scenario: one microservice, MSPUB1, receives requests from customers and publishes a message to topic TRANSACTION_TOPIC1 on Kafka; it can receive multiple requests in parallel. The microservice listens to topic TRANSACTION_RESULT1 to check that the transaction finished. On the other side of the streaming platform, another microservice, MSSUB1, listens to topic TRANSACTION_TOPIC1, processes all messages, and publishes the results to TRANSACTION_RESULT1. What is
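
One common way in Spring Kafka to correlate a result on TRANSACTION_RESULT1 with the request that produced it is ReplyingKafkaTemplate (spring-kafka 2.1.3+), which stamps a correlation-id header on the outgoing record and completes a future only for the matching reply. A sketch, assuming a template whose reply container listens on TRANSACTION_RESULT1:

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
    import org.springframework.kafka.requestreply.RequestReplyFuture;

    public class TransactionClient {

        private final ReplyingKafkaTemplate<String, String, String> template;

        public TransactionClient(ReplyingKafkaTemplate<String, String, String> template) {
            this.template = template;
        }

        public String send(String key, String payload) throws Exception {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("TRANSACTION_TOPIC1", key, payload);
            // Even with many requests in flight, only the reply carrying the
            // matching correlation id completes this particular future.
            RequestReplyFuture<String, String, String> future = template.sendAndReceive(record);
            ConsumerRecord<String, String> reply = future.get(10, TimeUnit.SECONDS);
            return reply.value();
        }
    }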

Health for Kafka Binder is always UNKNOWN

Submitted by 末鹿安然 on 2019-12-02 03:25:17
When I try to activate the health indicator for the Kafka binder, as explained in the Spring Cloud Stream Reference Documentation, the health endpoint returns:

    "binders":{"status":"UNKNOWN","kafka":{"status":"UNKNOWN"}}

My configuration contains, as documented: management.health.binders.enabled=true. I already debugged BindersHealthIndicatorAutoConfiguration and noticed that no HealthIndicator is registered in the binderContext. Do I have to register a custom HealthIndicator as a bean, or what steps are necessary? It looks like a bug in the documentation. By default, the binders health indicators are
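
For completeness, the property the documentation describes, plus the standard Spring Boot property for exposing per-component details so the binder status is visible at all (the second line is an assumption about the desired output, not a requirement):

    # application.properties
    management.health.binders.enabled=true
    management.endpoint.health.show-details=always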

Spring cloud stream kafka pause/resume binders

Submitted by ⅰ亾dé卋堺 on 2019-12-02 00:36:41
We are using Spring Cloud Stream 2.0 with Kafka as the message broker. We've implemented a circuit breaker which stops the application context for cases where the target system (a DB or 3rd-party API) is unavailable, as suggested here: Stop Spring Cloud Stream @StreamListener from listening when target system is down. Now, in Spring Cloud Stream 2.0 there is a way to manage the lifecycle of a binding using the actuator: Binding visualization and control. Is it possible to control the binding lifecycle from code, meaning: if the target server is down, pause the binding, and when it's back up, resume it? Sorry,
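
Programmatically, the same machinery behind the actuator endpoint can be reached by injecting BindingsEndpoint and changing the binding state; a sketch assuming Spring Cloud Stream 2.x with actuator support on the classpath and a hypothetical binding named "input":

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.cloud.stream.endpoint.BindingsEndpoint;
    import org.springframework.cloud.stream.endpoint.BindingsEndpoint.State;
    import org.springframework.stereotype.Component;

    @Component
    public class BinderControl {

        @Autowired
        private BindingsEndpoint endpoint;

        public void pauseInput() {
            // PAUSED needs a binder whose consumer supports pausing (Kafka does);
            // STOPPED/STARTED work with any binder.
            endpoint.changeState("input", State.PAUSED);
        }

        public void resumeInput() {
            endpoint.changeState("input", State.RESUMED);
        }
    }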

Reading the same message several times from Kafka

Submitted by 给你一囗甜甜゛ on 2019-12-01 20:03:10
I use the Spring Kafka API to implement a Kafka consumer with manual offset management:

    @KafkaListener(topics = "some_topic")
    public void onMessage(@Payload Message message, Acknowledgment acknowledgment) {
        if (someCondition) {
            acknowledgment.acknowledge();
        }
    }

Here, I want the consumer to commit the offset only if someCondition holds. Otherwise the consumer should sleep for some time and read the same message again. Kafka configuration:

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String>
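
The excerpt breaks off mid-configuration. A sketch of how the pieces could fit together, assuming a consumerFactory bean defined elsewhere: the container is put into MANUAL ack mode so offsets are committed only when acknowledge() is called.

    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.listener.ContainerProperties;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Offsets are committed only when the listener calls acknowledge().
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        return factory;
    }

For the sleep-and-redeliver part, spring-kafka 2.3 added Acknowledgment.nack(long), which sleeps and re-seeks so the same record is polled again:

    // inside the @KafkaListener method
    if (someCondition) {
        acknowledgment.acknowledge();
    } else {
        acknowledgment.nack(5000); // sleep 5s, then redeliver the same record
    }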

EmbeddedKafka how to check received messages in unit test

Submitted by 丶灬走出姿态 on 2019-12-01 17:58:38
I created a Spring Boot application that sends messages to a Kafka topic. I am using spring-integration-kafka: a KafkaProducerMessageHandler<String,String> is subscribed to a channel (SubscribableChannel) and pushes all messages received to one topic. The application works fine; I see messages arriving in Kafka via the console consumer (local Kafka). I also created an integration test that uses KafkaEmbedded. I am checking the expected messages by subscribing to the channel within the test - all is fine. But I want the test to also check the messages put into Kafka. Sadly, Kafka's JavaDoc
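
One way to assert on the records actually written to the embedded broker is a throwaway consumer built with KafkaTestUtils from spring-kafka-test; a sketch assuming a KafkaEmbedded field named embeddedKafka and a topic named test-topic:

    import java.util.Map;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
    import org.springframework.kafka.test.utils.KafkaTestUtils;

    // inside the test; embeddedKafka is the KafkaEmbedded rule/field
    Map<String, Object> props = KafkaTestUtils.consumerProps("testGroup", "false", embeddedKafka);
    Consumer<String, String> consumer = new DefaultKafkaConsumerFactory<>(
            props, new StringDeserializer(), new StringDeserializer()).createConsumer();
    embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "test-topic");

    // Blocks until a single record arrives on the topic (or times out).
    ConsumerRecord<String, String> received = KafkaTestUtils.getSingleRecord(consumer, "test-topic");
    // assert on received.key() / received.value() here
    consumer.close();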