spring-kafka

How to use Spring Kafka's Acknowledgement.acknowledge() method for manual commit

Submitted on 2019-12-03 17:29:36
I am using Spring Kafka for the first time and I am not able to use the Acknowledgement.acknowledge() method for manual commit in my consumer code, as described here: https://docs.spring.io/spring-kafka/reference/html/_reference.html#committing-offsets . Mine is a Spring Boot application. If I do not use the manual commit process, my code works fine. But when I use Acknowledgement.acknowledge() for manual commit, it shows an error related to a bean. Also, if I am not using manual commit properly, please suggest the right way to do it. Error message: *************************** APPLICATION FAILED TO START **
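
A minimal sketch of manual acknowledgment, assuming spring-kafka 2.2+ on Spring Boot (in earlier 2.x versions the AckMode enum lives on AbstractMessageListenerContainer rather than ContainerProperties); the topic, group id, and bootstrap address are placeholders. Note the interface is spelled Acknowledgment in the API:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties.AckMode;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@EnableKafka
@Configuration
class ManualAckConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        // Manual acknowledgment requires Kafka's own auto-commit to be off.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // Offsets are committed only when the listener calls acknowledge().
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        return factory;
    }
}

@Component
class DemoListener {

    @KafkaListener(topics = "demo-topic")
    public void listen(String message, Acknowledgment ack) {
        // ... process the record ...
        ack.acknowledge(); // commit this record's offset
    }
}
```

With Spring Boot's auto-configured factory, the same effect can usually be had with the property spring.kafka.listener.ack-mode=manual_immediate instead of defining the factory by hand.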

Spring Cloud Kafka Stream Unable to create Producer Config Error

Submitted on 2019-12-03 07:18:26
I have two Spring Boot projects with Kafka Streams dependencies. They have exactly the same dependencies in Gradle and exactly the same configuration, yet one of the projects logs the error below when started: 11:35:37.974 [restartedMain] INFO o.a.k.c.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [192.169.0.109:6667] client.id = client connections.max.idle.ms = 300000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect

Kafka in Kubernetes - Marking the coordinator dead for group

Submitted on 2019-12-03 06:34:48
I am pretty new to Kubernetes and wanted to set up Kafka and ZooKeeper with it. I was able to set up Apache Kafka and ZooKeeper in Kubernetes using StatefulSets. I followed this and this to build my manifest file. I made one replica each of Kafka and ZooKeeper and also used persistent volumes. All pods are running and ready. I tried to expose Kafka using a Service, specifying a nodePort (30010). Seemingly this would expose Kafka to the outside world, where clients could send messages to the Kafka broker and also consume from it. But in my Java application, I made a consumer and added the
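
A common cause of "Marking the coordinator dead" when exposing Kafka through a NodePort is that the broker advertises an in-cluster address that the external client cannot reach: the bootstrap connection succeeds, but the metadata it returns points the consumer at an unreachable coordinator. A minimal external consumer sketch, assuming kafka-clients 2.0+ (node IP, port, topic, and group id are placeholders):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NodePortConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder node IP + nodePort. The broker must advertise this same
        // address (advertised.listeners); otherwise the client bootstraps but
        // later "marks the coordinator dead" when it follows the metadata.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.99.100:30010");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
```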

Spring Kafka - Consume last N messages for partition(s) for any topic

Submitted on 2019-12-03 03:47:31
I'm trying to read a requested number of Kafka messages. For non-transactional messages we would seek to endOffset - N for M partitions, start polling, and collect messages while the current offset is less than the end offset for each partition. For idempotent/transactional messages we have to account for transaction markers and duplicate messages, meaning offsets will not be contiguous; in that case endOffset - N will not return N messages, and we would need to go back and seek for more messages until we have N messages for each partition or the beginning offset is reached. As there are multiple partitions, I
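
For the non-transactional case the question describes, a sketch of the seek-back approach (assign, endOffsets, seek, poll); the topic and count come from the caller. For transactional topics the per-partition lists may come back short, which is exactly the gap the asker describes:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class LastNReader {

    // Reads roughly the last n records of each partition of a topic.
    public static Map<TopicPartition, List<ConsumerRecord<String, String>>> lastN(
            KafkaConsumer<String, String> consumer, String topic, int n) {

        List<TopicPartition> partitions = new ArrayList<>();
        consumer.partitionsFor(topic)
                .forEach(p -> partitions.add(new TopicPartition(topic, p.partition())));
        consumer.assign(partitions);

        Map<TopicPartition, Long> begin = consumer.beginningOffsets(partitions);
        Map<TopicPartition, Long> end = consumer.endOffsets(partitions);

        // Seek back n offsets per partition, clamped at the beginning offset.
        // For transactional topics this can yield fewer than n records, since
        // transaction markers and aborted batches consume offsets too.
        for (TopicPartition tp : partitions) {
            consumer.seek(tp, Math.max(begin.get(tp), end.get(tp) - n));
        }

        Map<TopicPartition, List<ConsumerRecord<String, String>>> out = new HashMap<>();
        Set<TopicPartition> pending = new HashSet<>(partitions);
        while (!pending.isEmpty()) {
            for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                TopicPartition tp = new TopicPartition(rec.topic(), rec.partition());
                out.computeIfAbsent(tp, k -> new ArrayList<>()).add(rec);
            }
            // position() advances past markers, so it reaches the end offset
            // even when a partition has no further data records.
            pending.removeIf(tp -> consumer.position(tp) >= end.get(tp));
        }
        return out;
    }
}
```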

java.lang.NoSuchMethodError on Kafka Consumer with spring-kafka 2.1.0 and SpringBoot 1.5.9

Submitted on 2019-12-02 23:21:43
I am trying to set up a Kafka consumer using Spring Boot (1.5.9) and spring-kafka (2.1.0). However, when I start my app I get java.lang.NoSuchMethodError: org.springframework.util.Assert.state(ZLjava/util/function/Supplier;)V in Kafka's MessagingMessageListenerAdapter. I tried with spring-kafka (1.2.0) and that error went away. Has anyone else experienced this version incompatibility? Here is my config class: @EnableKafka @Configuration public class ImporterConfigs{ static Logger logger = Logger.getLogger(ImporterConfigs.class); @Value("${kafka.bootstrap-servers}") private static String bootstrapServers;
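
The missing method, Assert.state(boolean, Supplier<String>), was added in Spring Framework 5, while Spring Boot 1.5.x manages Spring Framework 4.3; spring-kafka 2.1.x is compiled against Spring 5, hence the runtime error. Per the spring-kafka compatibility matrix, Boot 1.5.x pairs with the spring-kafka 1.3.x line. A build-file sketch (the exact version number shown is illustrative):

```groovy
dependencies {
    // Spring Boot 1.5.x brings Spring Framework 4.3, which lacks
    // Assert.state(boolean, Supplier<String>); spring-kafka 2.1.x needs
    // Spring Framework 5. Either pin spring-kafka to the 1.3.x line...
    compile 'org.springframework.kafka:spring-kafka:1.3.5.RELEASE'
    // ...or upgrade to Spring Boot 2.0.x, which manages spring-kafka 2.1.x
    // and Spring Framework 5 together.
}
```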

Synchronising transactions between database and Kafka producer

Submitted on 2019-12-02 18:23:39
We have a microservices architecture, with Kafka used as the communication mechanism between the services. Some of the services have their own databases. Say the user makes a call to Service A, which should result in a record (or set of records) being created in that service's database. Additionally, this event should be reported to other services as an item on a Kafka topic. What is the best way of ensuring that the database record(s) are only written if the Kafka topic is successfully updated (essentially creating a distributed transaction around the database update and the Kafka update)?
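
One widely used answer (not the only one) is the transactional outbox pattern: commit the business rows and an "outbox" row in the same local database transaction, then have a relay publish outbox rows to Kafka and mark them sent once the send is acknowledged. A sketch assuming spring-kafka 2.x; all domain types and repository names below are hypothetical stand-ins for the application's own persistence layer:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Order, OrderRepository, OutboxRecord, and OutboxRepository are hypothetical.
@Service
public class OrderService {

    private final OrderRepository orders;
    private final OutboxRepository outbox;
    private final KafkaTemplate<String, String> kafka;

    public OrderService(OrderRepository orders, OutboxRepository outbox,
                        KafkaTemplate<String, String> kafka) {
        this.orders = orders;
        this.outbox = outbox;
        this.kafka = kafka;
    }

    // Step 1: the business record and the outbox row commit (or roll back)
    // atomically, because they share one local database transaction.
    @Transactional
    public void createOrder(Order order) {
        orders.save(order);
        outbox.save(new OutboxRecord("orders-topic", order.getId(), order.toJson()));
    }

    // Step 2: a relay (requires @EnableScheduling) publishes unsent rows and
    // marks them sent only after Kafka acknowledges the send.
    @Scheduled(fixedDelay = 1000)
    public void publishOutbox() {
        for (OutboxRecord rec : outbox.findBySentFalse()) {
            kafka.send(rec.getTopic(), rec.getKey(), rec.getPayload())
                 .addCallback(ok -> outbox.save(rec.markSent()),
                              ex -> { /* left unsent; retried on the next pass */ });
        }
    }
}
```

Because a row can be published and then fail to be marked sent, delivery is at-least-once; downstream consumers must tolerate duplicates.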

Kafka consumer health check

Submitted on 2019-12-02 16:00:49
Question: Is there a simple way to tell whether a consumer (created with Spring Boot and @KafkaListener) is operating normally? This includes: can it access and poll a broker, does it have at least one partition assigned, etc. I see there are ways to subscribe to various lifecycle events, but this seems to be a very fragile solution. Thanks in advance! Answer 1: You can use the AdminClient to get the current group status... @SpringBootApplication public class So56134056Application { public static void main(String[] args) {
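
A condensed version of that approach, assuming kafka-clients 2.0+ (where AdminClient.describeConsumerGroups is available); the bootstrap address and group id are placeholders. The group counts as healthy here when it has members and every member holds at least one partition:

```java
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;

public class GroupHealthCheck {

    // True when the group is reachable and every member has an assignment;
    // an exception here generally means the broker itself was unreachable.
    public static boolean isHealthy(String bootstrapServers, String groupId) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        try (AdminClient admin = AdminClient.create(props)) {
            ConsumerGroupDescription group = admin
                    .describeConsumerGroups(Collections.singletonList(groupId))
                    .describedGroups()
                    .get(groupId)
                    .get(10, TimeUnit.SECONDS);
            return !group.members().isEmpty()
                    && group.members().stream()
                            .allMatch(m -> !m.assignment().topicPartitions().isEmpty());
        }
    }
}
```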