spring-kafka

Spring Cloud Stream Kafka transaction configuration

不想你离开。 Submitted on 2021-02-11 15:02:25
Question: I am following this template for Spring Cloud Stream Kafka but got stuck while making the producer method transactional. I have not used Kafka before, so I need help in case any configuration changes are needed on the Kafka side. It works well if no transactional configuration is added, but once transactional configuration is added, startup times out: 2020-11-21 15:07:55.349 ERROR 20432 --- [ main] o.s.c.s.b.k.p.KafkaTopicProvisioner : Failed to obtain partition information org.apache…
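
For reference, producer transactions with the Kafka binder are switched on by setting spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix, and the startup timeout above is commonly seen on single-broker dev clusters, where the transaction state log cannot meet its default replication factor of 3 (lowering transaction.state.log.replication.factor and transaction.state.log.min.isr to 1 on the broker is the usual workaround). As a point of comparison, a minimal sketch of the equivalent plain spring-kafka transactional producer (broker address, prefix, and topic name are illustrative):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class TransactionalProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(props);
        // A transaction id prefix switches the factory into transactional mode.
        pf.setTransactionIdPrefix("tx-");
        return pf;
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> pf) {
        return new KafkaTemplate<>(pf);
    }
}

Sends then go through kafkaTemplate.executeInTransaction(t -> t.send("my-topic", "data")) so commit or abort is handled for you.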

Encrypting and decrypting messages with Spring Kafka

╄→尐↘猪︶ㄣ Submitted on 2021-02-11 14:56:43
Question: I am using Spring Kafka and one of the topics contains messages with personal data. Is there any way I can configure Spring Kafka to automatically encrypt messages in the producer and decrypt them in the consumer, or would I have to do it manually? Answer 1: There is nothing built into Spring or Kafka (although you can use SSL on the wire to prevent snooping). For application-level encryption/decryption, you would need to implement it yourself. You can separate the concern from your business logic by using a…
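
The truncated answer is most likely pointing at a custom Serializer/Deserializer pair, which keeps the crypto out of the business code. A minimal sketch of the producer side, assuming AES/GCM and a key supplied by the application (the class name and key handling are illustrative, not a vetted design):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Serializer;

public class EncryptingStringSerializer implements Serializer<String> {

    private final SecretKey key;

    public EncryptingStringSerializer(SecretKey key) {
        this.key = key;
    }

    @Override
    public byte[] serialize(String topic, String data) {
        try {
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(data.getBytes(StandardCharsets.UTF_8));
            // Prepend the random IV so a matching deserializer can recover it.
            return ByteBuffer.allocate(iv.length + ciphertext.length)
                    .put(iv).put(ciphertext).array();
        }
        catch (GeneralSecurityException e) {
            throw new SerializationException("Encryption failed", e);
        }
    }
}

Because the serializer needs a key, it is easier to hand an instance to DefaultKafkaProducerFactory directly than to configure it by class name; the consumer side mirrors this with a decrypting Deserializer.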

Embedded Kafka tests randomly failing

馋奶兔 Submitted on 2021-02-11 14:06:45
Question: I implemented a set of integration tests using EmbeddedKafka to test one of our Kafka Streams applications built on the spring-kafka framework. The streams application reads a message from a Kafka topic, stores it in an internal state store, applies some transformation, and sends it to another microservice on a request topic. When the response comes back on the response topic, it retrieves the original message from the state store and, depending on some business logic, forwards…
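
Randomly failing embedded-broker tests are very often a race: the test produces or asserts before topics exist or before the streams app has fully started. A minimal JUnit 5 skeleton of the setup, assuming spring-kafka-test's @EmbeddedKafka support (topic and group names are illustrative):

import java.util.Map;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.test.context.TestPropertySource;

@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = {"request-topic", "response-topic"})
@TestPropertySource(properties =
        "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}")
class StreamsIntegrationTest {

    @Autowired
    private EmbeddedKafkaBroker broker;

    @Test
    void roundTrip() {
        Map<String, Object> props = KafkaTestUtils.consumerProps("testGroup", "true", broker);
        // Produce the request here, then poll the response topic with a generous
        // timeout; fixed sleeps are a classic source of intermittent failures.
    }
}

Declaring the topics on the annotation and binding the app to ${spring.embedded.kafka.brokers} removes two common sources of flakiness: missing topics and tests accidentally hitting a real broker.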

Kafka manual ackMode MANUAL_IMMEDIATE: what if a message is not acknowledged?

ぐ巨炮叔叔 Submitted on 2021-02-11 13:53:34
Question: I use Spring Kafka and I set ackMode to MANUAL_IMMEDIATE: props.setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE); The scenario is that, for some reason, my app could not acknowledge (acknowledgment.acknowledge()) and simply misses the message without an exception. 1. How can I set up consumer retries for the missed message? 2. How can I configure a function to be called once the configured maximum retry count is reached? Answer 1: See the documentation about SeekToCurrentErrorHandler. When the listener throws an…
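
The handler the answer refers to re-seeks the failed record so the next poll redelivers it; retries and an end-of-retries callback are both configured on it. A minimal sketch, assuming spring-kafka 2.3+ (the log-only recoverer is illustrative; a DeadLetterPublishingRecoverer is the usual production choice). Note the listener must throw an exception for this to apply; silently skipping acknowledge() is not enough:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class ErrorHandlingConfig {

    @Bean
    public SeekToCurrentErrorHandler errorHandler() {
        // 1s between attempts, 2 retries after the initial failure; after that
        // the recoverer is called with the failed record instead of retrying forever.
        return new SeekToCurrentErrorHandler(
                (record, exception) -> System.err.println("Giving up on " + record),
                new FixedBackOff(1000L, 2L));
    }
}

The handler is then registered on the ConcurrentKafkaListenerContainerFactory via factory.setErrorHandler(errorHandler).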

Testing an Apache Kafka Integration within a Spring Boot Application with JUnit 5 and EmbeddedKafkaBroker

五迷三道 Submitted on 2021-02-11 13:01:49
Question: I have a simple producer class defined as follows:

@Configuration
public class MyKafkaProducer {

    private final static Logger log = LoggerFactory.getLogger(MyKafkaProducer.class);

    @Value("${my.kafka.producer.topic}")
    private String topic;

    @Autowired
    KafkaTemplate<String, String> kafkaTemplate;

    public void sendDataToKafka(@RequestParam String data) {
        ListenableFuture<SendResult<String, String>> listenableFuture = kafkaTemplate.send(topic, data);
        listenableFuture.addCallback(new …
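
A hedged sketch of how such a producer is usually tested with JUnit 5 and an embedded broker; the topic name, group id, and property binding are assumptions, and note that KafkaTestUtils.consumerProps defaults the key deserializer to IntegerDeserializer, so it is overridden here:

import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.test.context.TestPropertySource;

import static org.assertj.core.api.Assertions.assertThat;

@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = "test-topic")
@TestPropertySource(properties = {
        "my.kafka.producer.topic=test-topic",
        "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}"})
class MyKafkaProducerTest {

    @Autowired
    private MyKafkaProducer producer;

    @Autowired
    private EmbeddedKafkaBroker broker;

    @Test
    void sendsRecordToKafka() {
        Map<String, Object> props = KafkaTestUtils.consumerProps("testGroup", "true", broker);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        Consumer<String, String> consumer =
                new DefaultKafkaConsumerFactory<String, String>(props).createConsumer();
        broker.consumeFromAnEmbeddedTopic(consumer, "test-topic");

        producer.sendDataToKafka("hello");

        ConsumerRecord<String, String> record =
                KafkaTestUtils.getSingleRecord(consumer, "test-topic");
        assertThat(record.value()).isEqualTo("hello");
    }
}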

Replay Kafka topic with Server-Sent-Events

Deadly Submitted on 2021-02-11 12:35:45
Question: I'm thinking about the following use case and would like to validate whether this approach is conceptually valid. The goal is to expose a long-running Server-Sent-Events (SSE) endpoint in Spring that replays the same Kafka topic for each incoming connection (with some user-specific filtering). The SSE endpoint is exposed in this way:

@GetMapping("/sse")
public SseEmitter sse() {
    SseEmitter sseEmitter = new SseEmitter();
    Executors
        .newSingleThreadExecutor()
        .execute(() -> dummyDataProducer.generate() // kafka…
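
Conceptually the approach is valid as long as every connection gets its own consumer, either with a unique group id or with manual partition assignment, seeked back to the beginning. A minimal sketch using one plain KafkaConsumer per connection (consumerProps(), the "events" topic, and the single-partition assumption are all illustrative):

import java.time.Duration;
import java.util.List;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@GetMapping("/sse")
public SseEmitter sse() {
    SseEmitter emitter = new SseEmitter(0L); // no timeout, illustrative
    Executors.newSingleThreadExecutor().execute(() -> {
        // One consumer per connection; manual assignment avoids group rebalances.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps())) {
            TopicPartition tp = new TopicPartition("events", 0);
            consumer.assign(List.of(tp));
            consumer.seekToBeginning(List.of(tp));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    emitter.send(record.value()); // user-specific filtering would go here
                }
            }
        }
        catch (Exception e) {
            emitter.completeWithError(e);
        }
    });
    return emitter;
}

Creating a fresh single-thread executor per request leaks threads as connections pile up; a shared bounded executor, or a reactive stack (WebFlux plus reactor-kafka), scales better for many concurrent replays.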

RequestReplyFuture<String, String, List<Product>> not mapped; instead it is mapped to ArrayList<LinkedHashMap>

岁酱吖の Submitted on 2021-02-11 12:20:36
Question: I am using RequestReplyFuture<String, String, List<Product>> to map the response to List<Product>, but the result comes back as something like the below:

@Service
public class ProductProducer implements IProductProducer {

    private final ReplyingKafkaTemplate<String, String, List<Product>> _replyTemplate;

    private static final Logger LOG = LoggerFactory.getLogger(ProductProducer.class);

    public ProductProducer(ReplyingKafkaTemplate<String, String, List<Product>> replyTemplate) {
        this._replyTemplate = replyTemplate;
    }

    @Override …
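
The ArrayList<LinkedHashMap> in the title is the classic symptom of a reply-side JsonDeserializer that has no target type, so Jackson falls back to generic maps. A hedged sketch of pinning the full generic type on the reply container's consumer factory, assuming spring-kafka 2.3+ (consumerProps() is a placeholder for the existing consumer configuration):

import java.util.List;

import com.fasterxml.jackson.core.type.TypeReference;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.serializer.JsonDeserializer;

// Give the deserializer the complete generic type so the reply is bound to
// List<Product> rather than ArrayList<LinkedHashMap>.
JsonDeserializer<List<Product>> replyDeserializer =
        new JsonDeserializer<>(new TypeReference<List<Product>>() { });

ConsumerFactory<String, List<Product>> replyConsumerFactory =
        new DefaultKafkaConsumerFactory<>(consumerProps(),
                new StringDeserializer(), replyDeserializer);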

How to run Kafka Streams effectively with a single app instance and single-partition topics?

◇◆丶佛笑我妖孽 Submitted on 2021-02-10 20:27:40
Question: Current setup: I am streaming data from 16 single-partition topics, doing KTable-KTable joins, and sending an output with aggregated data from all streams. I am also materializing each KTable to a local state store. Scenario: when I ran two app instances, I expected Kafka Streams to run on a single instance, but for some reason it ran on the other instance too. It looks like it can create stream tasks on the other app instance when Kafka Streams on instance #1 fails due to some…
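
For context, the topology being described looks roughly like the sketch below: each single-partition input topic becomes a KTable backed by a named local store and the tables are joined pairwise; with one partition per topic, each sub-topology yields exactly one stream task, and tasks are the unit Kafka Streams spreads across instances (topic and store names are illustrative):

import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

StreamsBuilder builder = new StreamsBuilder();

KTable<String, String> left = builder.table("topic-1",
        Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("store-1"));
KTable<String, String> right = builder.table("topic-2",
        Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("store-2"));

// KTable-KTable join; the result can be joined onward with the next table
// until all 16 inputs are folded into one aggregate.
KTable<String, String> joined = left.join(right, (l, r) -> l + "|" + r);

joined.toStream().to("output-topic");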

Stop KafkaListener (Spring Kafka consumer) after it has read all messages up to some specific time

十年热恋 Submitted on 2021-02-10 15:05:54
Question: I am trying to schedule my consumption process from a single-partition topic. I can start it using endpointListenerRegistry.start(), but I want to stop it after I have consumed all the messages in the current partition, i.e. when I reach the last offset in the current partition. Production into the topic happens only after I have finished the consumption and closed it. How can I guarantee that I have read all the messages up to the time I started the scheduler, and then stop my consumer? I am using…
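
One hedged way to do this: inject the Consumer into the listener, capture the partition's end offset with endOffsets(), and stop the container through the registry once the last record is reached (the listener id, topic, and process() call are illustrative; stop() runs on a separate thread so the consumer thread is not blocked waiting for itself):

import java.util.Collections;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;

@Autowired
private KafkaListenerEndpointRegistry registry;

@KafkaListener(id = "scheduledListener", topics = "my-topic", autoStartup = "false")
public void listen(ConsumerRecord<String, String> record, Consumer<?, ?> consumer) {
    process(record); // business logic, illustrative

    TopicPartition tp = new TopicPartition(record.topic(), record.partition());
    long endOffset = consumer.endOffsets(Collections.singleton(tp)).get(tp);
    if (record.offset() + 1 >= endOffset) {
        // Everything that existed when we polled has been consumed; stop the
        // container from another thread so the consumer thread is not blocked.
        MessageListenerContainer container = registry.getListenerContainer("scheduledListener");
        new Thread(container::stop).start();
    }
}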
