spring-kafka

EmbeddedKafka how to check received messages in unit test

Submitted by 有些话、适合烂在心里 on 2019-12-01 16:57:58
Question: I created a Spring Boot application that sends messages to a Kafka topic. I am using spring-integration-kafka: a KafkaProducerMessageHandler<String,String> is subscribed to a channel (SubscribableChannel) and pushes all messages received to one topic. The application works fine; I see messages arriving in Kafka via the console consumer (local Kafka). I also created an integration test that uses KafkaEmbedded. I am checking the expected messages by subscribing to the channel within the…
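
One common way to verify the outbound messages is to attach a plain test consumer to the embedded broker and read the records back with KafkaTestUtils. The sketch below assumes the older KafkaEmbedded JUnit 4 rule mentioned in the question; the topic, group, and payload names are illustrative, not taken from the original test.

```java
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.junit.Assert.assertEquals;

public class OutboundMessageTest {

    @ClassRule
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "test-topic");

    @Test
    public void messageReachesTopic() throws Exception {
        // Consumer properties pointing at the embedded broker.
        Map<String, Object> props = KafkaTestUtils.consumerProps("testGroup", "false", embeddedKafka);
        // consumerProps defaults the key deserializer to Integer; this handler uses String keys.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        Consumer<String, String> consumer =
                new DefaultKafkaConsumerFactory<String, String>(props).createConsumer();
        embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "test-topic");

        // ... send a message through the channel / KafkaProducerMessageHandler here ...

        ConsumerRecord<String, String> received =
                KafkaTestUtils.getSingleRecord(consumer, "test-topic");
        assertEquals("expected payload", received.value());
    }
}
```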

Kafka __consumer_offsets growing in size

Submitted by 纵饮孤独 on 2019-12-01 14:25:12
We are using Kafka as a strictly ordered queue, hence a single topic / single partition / single consumer group combination is in use. I should be able to use multiple partitions later in the future. My consumer is a Spring Boot app listener that produces to and consumes from the same topic(s), so the consumer group is fixed and there is always a single consumer. Kafka version: 0.10.1.1. In this scenario the log file for topic-0 and a few __consumer_offsets_XX partitions grow. In fact, __consumer_offsets_XX grows very large, even though it is supposed to be cleared periodically every 60 minutes (by default). The consumer…
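
For reference, the broker settings below are the ones that govern how __consumer_offsets is cleaned up; this is a hedged sketch of what to check, not configuration taken from the question. The offsets topic is log-compacted, so it only shrinks when the log cleaner is actually running.

```properties
# server.properties - settings relevant to __consumer_offsets growth (illustrative values)

# The offsets topic uses cleanup.policy=compact, so the log cleaner must be running
# for old offset entries to be removed.
log.cleaner.enable=true

# How long offsets are retained once a group becomes empty (default 1440 in 0.10.x).
offsets.retention.minutes=1440

# How frequently the broker checks for and expires stale offsets (default 600000 ms).
offsets.retention.check.interval.ms=600000
```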

Tombstone messages not removing record from KTable state store?

Submitted by ぃ、小莉子 on 2019-12-01 08:02:33
I am creating a KTable by processing data from a KStream, but when I send a tombstone message (a key with a null payload), it does not remove the record from the KTable. Sample: public KStream<String, GenericRecord> processRecord(@Input(Channel.TEST) KStream<GenericRecord, GenericRecord> testStream) { KTable<String, GenericRecord> table = testStream .map((genericRecord, genericRecord2) -> KeyValue.pair(genericRecord.get("field1") + "", genericRecord2)) .groupByKey() .reduce((genericRecord, v1) -> v1, Materialized.as("test-store")); ... GenericRecord genericRecord = new GenericData.Record(getAvroSchema(keySchema…
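
A likely explanation (stated here as an assumption, since the excerpt is cut off): the aggregation operators behind groupByKey().reduce() drop records whose value is null before they ever reach the reducer, so a tombstone never deletes anything from the resulting store. A minimal sketch of the alternative of reading the topic directly as a KTable, where a null value does delete the key; the topic and store names are illustrative:

```java
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class TombstoneAwareTable {

    // Builds the table straight from the topic: a record with a null value is treated
    // as a delete for its key, so tombstones remove entries from "test-store".
    // Appropriate key/value serdes (e.g. Avro serdes) are assumed to be configured
    // as default serdes or supplied via Consumed.with(...).
    public KTable<String, GenericRecord> build(StreamsBuilder builder) {
        return builder.table(
                "test-topic",
                Materialized.<String, GenericRecord, KeyValueStore<Bytes, byte[]>>as("test-store"));
    }
}
```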

How to start spring application even if Kafka listener (spring-kafka) doesn't initialize

Submitted by 。_饼干妹妹 on 2019-12-01 06:54:11
Question: I'm working on an application that uses a Kafka listener with spring-kafka. The problem I'm facing is that Spring context initialization fails when the Kafka listener doesn't start (for various reasons, such as the Kafka server not being up or being down). How can I make sure that my application starts independently of Kafka? Can anyone please help. Answer 1: Set autoStartup(false) on the container factory. Inject (e.g. @Autowired) the KafkaListenerEndpointRegistry and start() it in your code (in a try/catch).
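
A minimal sketch of that suggestion, assuming a typical Spring Boot setup (the bean names and the ApplicationRunner wrapper are illustrative): the factory is created with auto-startup disabled, and the listener containers are started later through the KafkaListenerEndpointRegistry inside a try/catch.

```java
import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.core.ConsumerFactory;

@EnableKafka
@Configuration
public class LazyListenerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Containers are built during context initialization but not started,
        // so an unreachable broker no longer prevents the context from coming up.
        factory.setAutoStartup(false);
        return factory;
    }

    @Bean
    public ApplicationRunner startKafkaListeners(KafkaListenerEndpointRegistry registry) {
        return args -> {
            try {
                registry.start(); // starts all registered @KafkaListener containers
            } catch (Exception ex) {
                // Kafka unavailable: log it and let the rest of the application keep running.
            }
        };
    }
}
```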

Spring kafka consumer, seek offset at runtime?

Submitted by 依然范特西╮ on 2019-12-01 05:37:04
I am using the KafkaMessageListenerContainer for consuming from the Kafka topic. I have application logic to process each record, which depends on other microservices as well. I am now manually committing the offset after each record is processed, but if the application logic fails I need to seek back to the failed offset and keep processing it until it succeeds. For that I need to do a runtime manual seek to the last offset. Is this possible with the KafkaMessageListenerContainer yet? See Seeking to a Specific Offset. In order to seek, your listener must implement ConsumerSeekAware…
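
A sketch of what such a listener could look like (illustrative names, not the original poster's code): the ConsumerSeekCallback registered per consumer thread is used to re-seek the current offset when processing fails, so the same record is redelivered on the next poll.

```java
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.listener.ConsumerSeekAware;
import org.springframework.kafka.listener.MessageListener;

public class ReSeekingListener implements MessageListener<String, String>, ConsumerSeekAware {

    // One seek callback is registered per consumer thread.
    private final ThreadLocal<ConsumerSeekCallback> seekCallback = new ThreadLocal<>();

    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback) {
        this.seekCallback.set(callback);
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        // no initial repositioning needed
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        // not used in this example
    }

    @Override
    public void onMessage(ConsumerRecord<String, String> record) {
        try {
            process(record); // application logic that may call other microservices
        } catch (Exception ex) {
            // Re-seek the failed offset so the same record is fetched again on the next poll.
            this.seekCallback.get().seek(record.topic(), record.partition(), record.offset());
        }
    }

    private void process(ConsumerRecord<String, String> record) {
        // ... call downstream services, commit the offset manually on success ...
    }
}
```

The listener instance is then set on the KafkaMessageListenerContainer's ContainerProperties as usual.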

How to write Kafka consumers - single threaded vs multi threaded

Submitted by 此生再无相见时 on 2019-12-01 03:34:21
I have written a single Kafka consumer (using Spring Kafka) that reads from a single topic and is part of a consumer group. Once a message is consumed, it performs all downstream operations and moves on to the next message offset. I have packaged this as a WAR file, and my deployment pipeline pushes it out to a single instance. Using my deployment pipeline, I could potentially deploy this artifact to multiple instances in my deployment pool. However, I am not able to understand the following when I want multiple consumers as part of my infrastructure - I can actually define multiple…
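
Besides deploying more instances, Spring Kafka can also run several consumers inside a single instance. The sketch below (illustrative bean names, not from the post) sets the container concurrency, which creates that many listener containers, each with its own Kafka consumer in the same group; Kafka then spreads the topic's partitions across all consumers, whether they live in one instance or many.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

@Configuration
public class ConcurrencyConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Three consumer threads; this only helps if the topic has at least three partitions,
        // since each partition is assigned to exactly one consumer in the group.
        factory.setConcurrency(3);
        return factory;
    }
}
```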

Problems adding multiple KafkaListenerContainerFactories

Submitted by 你离开我真会死。 on 2019-11-30 20:49:06
Hi, I'm currently dabbling in Spring Kafka and succeeded in adding a single KafkaListenerContainerFactory to my listener. Now I'd like to add multiple KafkaListenerContainerFactories (one for a topic that will have messages in JSON, another one for strings). See the code below: @EnableKafka @Configuration public class KafkaConsumersConfig { private final KafkaConfiguration kafkaConfiguration; @Autowired public KafkaConsumersConfig(KafkaConfiguration kafkaConfiguration) { this.kafkaConfiguration = kafkaConfiguration; } @Bean public KafkaListenerContainerFactory<?> kafkaJsonListenerContainerFactory(){…
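
A sketch of how the two factories can look, under the assumption of a typical setup (the property values, bean names, and payload class are illustrative, not the poster's KafkaConfiguration): one factory deserializes JSON into a POJO, the other plain strings.

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.serializer.JsonDeserializer;

@EnableKafka
@Configuration
public class MultipleFactoriesConfig {

    private Map<String, Object> baseProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // illustrative
        return props;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, MyJsonPayload> kafkaJsonListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, MyJsonPayload> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        DefaultKafkaConsumerFactory<String, MyJsonPayload> consumerFactory =
                new DefaultKafkaConsumerFactory<>(baseProps(),
                        new StringDeserializer(), new JsonDeserializer<MyJsonPayload>(MyJsonPayload.class));
        factory.setConsumerFactory(consumerFactory);
        return factory;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaStringListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        DefaultKafkaConsumerFactory<String, String> consumerFactory =
                new DefaultKafkaConsumerFactory<>(baseProps(),
                        new StringDeserializer(), new StringDeserializer());
        factory.setConsumerFactory(consumerFactory);
        return factory;
    }

    // Placeholder payload type for the JSON example.
    public static class MyJsonPayload { }
}
```

Each @KafkaListener then selects its factory explicitly, for example @KafkaListener(topics = "json-topic", containerFactory = "kafkaJsonListenerContainerFactory").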

How to implement a microservice Event Driven architecture with Spring Cloud Stream Kafka and Database per service

Submitted by 蓝咒 on 2019-11-30 13:26:40
Question: I am trying to implement an event-driven architecture to handle distributed transactions. Each service has its own database and uses Kafka to send messages to inform the other microservices about its operations. An example:

Order service -------> | Kafka | -------> Payment Service
      |                                          |
Orders MariaDB DB                     Payment MariaDB Database

The Order service receives an order request. It has to store the new Order in its DB and publish a message so that the Payment Service realizes it has to charge for the item: private…
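
The excerpt cuts off at the code, so what follows is only a hedged sketch of the basic pattern using the annotation-based Spring Cloud Stream binding model of that era (class, binding, and event names are illustrative): persist the Order locally, then publish an event on the output binding for the Payment Service to consume.

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@EnableBinding(Source.class) // usually declared once, e.g. on the main application class
@Service
public class OrderEventPublisher {

    private final Source source;

    public OrderEventPublisher(Source source) {
        this.source = source;
    }

    @Transactional
    public void createOrder(OrderCreatedEvent event) {
        // 1. Persist the new Order to the Orders MariaDB here (repository call omitted).
        // 2. Publish the event so the Payment Service knows it has to charge for the item.
        source.output().send(MessageBuilder.withPayload(event).build());
        // Note: the database write and the Kafka send are not one atomic operation; that
        // gap is exactly the distributed-transaction problem the question is about
        // (often mitigated with an outbox table or a similar pattern).
    }

    // Minimal illustrative event payload.
    public static class OrderCreatedEvent {
        public String orderId;
        public long amountInCents;
    }
}
```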

Why does a Kafka consumer take a long time to start consuming?

Submitted by 喜欢而已 on 2019-11-30 09:10:51
We start a Kafka consumer listening on a topic which may not yet be created (topic auto-creation is enabled, though). Not long thereafter, a producer publishes messages on that topic. However, it takes some time for the consumer to notice this: 5 minutes, to be exact. At this point the consumer revokes its partitions and rejoins the consumer group, and Kafka re-stabilizes the group. Looking at the timestamps of the consumer vs. the Kafka logs, this process is initiated on the consumer side. I suppose this is expected behavior, but I would like to understand it. Is this actually a re-balancing…
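
One plausible explanation (stated as an assumption, not confirmed by the excerpt): the consumer only learns about newly created topics when it refreshes its cluster metadata, and metadata.max.age.ms defaults to 300000 ms, which matches the observed 5 minutes; once the refreshed metadata shows a topic matching the subscription, the consumer rejoins and the group rebalances. A sketch of lowering that interval (broker address, group, and topic names are illustrative):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EagerTopicDiscoveryConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Refresh cluster metadata every 30 seconds instead of the default 5 minutes,
        // so a newly auto-created topic is discovered sooner.
        props.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, "30000");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("maybe-not-yet-created-topic"));
            // poll loop omitted
        }
    }
}
```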