spring-kafka

Infinite retries with SeekToCurrentErrorHandler in a Kafka consumer

Posted by 随声附和 on 2019-12-11 18:04:53
Question: I've configured a Kafka consumer with SeekToCurrentErrorHandler in a Spring Boot application using spring-kafka. My consumer configuration is:

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafkaserver");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group-id");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, …
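By default (before spring-kafka 2.5), SeekToCurrentErrorHandler re-seeks the failed record indefinitely. A minimal sketch of capping the retries, assuming spring-kafka 2.2.x where the maxFailures constructor is available (the factory bean and consumerFactory() names are illustrative):

    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.listener.SeekToCurrentErrorHandler;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // Stop re-seeking after 3 delivery attempts instead of retrying
        // the same offset forever; the record is then logged and skipped.
        factory.setErrorHandler(new SeekToCurrentErrorHandler(3));
        return factory;
    }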

Pass a list of topics from application.yml to @KafkaListener

Posted by Deadly on 2019-12-11 14:07:56
Question: I have the following application.yml:

    service:
      kafka:
        groupId: 345
        consumer:
          topics:
            - name: response
        producer:
          topics:
            - name: request1
              num-partitions: 5
              replication-factor: 1
            - name: request2
              num-partitions: 3
              replication-factor: 1

How can I access the list of topic names using SpEL, so I can pass it to the @KafkaListener annotation?

    @KafkaListener(topics = "#{'${service.kafka.consumer.topics.name}'}", containerFactory = "kafkaListenerContainerFactory")
    public void receive(String payload, @Header …
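The placeholder syntax above can't index into a list of objects. One workable sketch (the KafkaTopicsProps class and kafkaTopicsProps bean name are invented here): bind the yml section to a @ConfigurationProperties bean and reference it from SpEL with the @beanName syntax:

    import java.util.List;
    import org.springframework.boot.context.properties.ConfigurationProperties;
    import org.springframework.stereotype.Component;

    @Component("kafkaTopicsProps")
    @ConfigurationProperties(prefix = "service.kafka.consumer")
    public class KafkaTopicsProps {

        public static class Topic {
            private String name;
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        private List<Topic> topics;
        public List<Topic> getTopics() { return topics; }
        public void setTopics(List<Topic> topics) { this.topics = topics; }

        // Convenience accessor so SpEL can resolve just the names.
        public String[] getTopicNames() {
            return topics.stream().map(Topic::getName).toArray(String[]::new);
        }
    }

The listener can then be declared as @KafkaListener(topics = "#{@kafkaTopicsProps.topicNames}", containerFactory = "kafkaListenerContainerFactory").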

How to configure two instances of Kafka StreamsBuilderFactoryBean in Spring Boot

Posted by 狂风中的少年 on 2019-12-11 12:12:11
Question: Using spring-boot 2.1.3 and spring-kafka 2.2.4, I want to have two streams configurations (e.g. with different application.ids, or connecting to different clusters, etc.). So I defined the first streams configuration pretty much according to the docs, then added a second one with a different name, and a second StreamsBuilderFactoryBean (also with a different name):

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    public KafkaStreamsConfiguration kStreamsConfigs() { …
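A minimal sketch of what the second pair of beans could look like, assuming spring-kafka 2.2.x (bean names, application.id, and the bootstrap address are illustrative):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.streams.StreamsConfig;
    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.config.KafkaStreamsConfiguration;
    import org.springframework.kafka.config.StreamsBuilderFactoryBean;

    @Bean("secondKStreamsConfig")
    public KafkaStreamsConfiguration secondKStreamsConfig() {
        Map<String, Object> props = new HashMap<>();
        // Each streams app needs its own application.id; it may also
        // point at a different cluster.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "second-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "other-cluster:9092");
        return new KafkaStreamsConfiguration(props);
    }

    @Bean("secondStreamsBuilderFactoryBean")
    public StreamsBuilderFactoryBean secondStreamsBuilderFactoryBean() {
        // Deliberately not the default bean name, so it coexists with the
        // factory bean created for DEFAULT_STREAMS_CONFIG_BEAN_NAME.
        return new StreamsBuilderFactoryBean(secondKStreamsConfig());
    }

Topology beans for the second app can then inject the matching builder via @Qualifier("secondStreamsBuilderFactoryBean") StreamsBuilder builder.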

Stateful-Retry with DeadLetterPublishingRecoverer causing RetryCacheCapacityExceededException

Posted by 情到浓时终转凉″ on 2019-12-11 12:08:54
Question: My container factory has a SeekToCurrentErrorHandler that uses a DeadLetterPublishingRecoverer to publish certain 'NotRetryableException'-type exceptions to a DLT, and to keep seeking the same offset an infinite number of times for other kinds of exceptions. With this setup, after a certain number of payloads that result in non-retryable exceptions, the map that stores the retry context, MapRetryContextCache (spring-retry), overflows and throws a RetryCacheCapacityExceededException. From the initial …
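Stateful retry keeps one context entry per record in the MapRetryContextCache, which is what the question observes overflowing. One way out, sketched under the assumption that the in-listener RetryTemplate can be dropped entirely, is to let the error handler own both the re-seeking and the dead-lettering, so no spring-retry cache is involved (spring-kafka 2.2.x constructors; the attempt count is illustrative):

    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
    import org.springframework.kafka.listener.SeekToCurrentErrorHandler;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            KafkaTemplate<Object, Object> template) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // No RetryTemplate, hence no MapRetryContextCache: the handler
        // re-seeks failed records and publishes to the DLT after 3 attempts.
        factory.setErrorHandler(
                new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(template), 3));
        return factory;
    }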

How to get the Kafka consumer-id for logging

Posted by 时光毁灭记忆、已成空白 on 2019-12-11 10:49:05
Question: In my application I'm using spring-kafka to consume messages from a Kafka server, and from the console consumer I can see the consumer-id of all the consumer threads that are active:

    TOPIC            PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID                                    HOST           CLIENT-ID
    easytest-events  9          247367          247367          0    p3-S14-0-e6a1d3cb-8ab3-435f-9f53-5081a6e8f812  /10.66.56.129  p3-S14-0

Is there a way to get the consumer-id through code so that I can compare them?

Answer 1: The consumer-id appears to be the client-id appended with a UUID - so you …
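The member UUID is assigned by the broker-side group coordinator and isn't directly exposed, but the client-id half can be read from the consumer's metric tags. A sketch of that idea, assuming spring-kafka 2.x where the Consumer can be injected as a listener parameter (the topic name is taken from the question; the class is invented):

    import org.apache.kafka.clients.consumer.Consumer;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.kafka.annotation.KafkaListener;

    public class ConsumerIdLogger {

        private static final Logger LOG = LoggerFactory.getLogger(ConsumerIdLogger.class);

        @KafkaListener(topics = "easytest-events")
        public void listen(String payload, Consumer<?, ?> consumer) {
            // Consumer metrics carry a "client-id" tag; the broker appends
            // a UUID to this value to build the consumer-id shown by the
            // console tools.
            String clientId = consumer.metrics().keySet().iterator().next()
                    .tags().get("client-id");
            LOG.info("record handled by client-id {}: {}", clientId, payload);
        }
    }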

KafkaMessageListenerContainer: reprocess a message if there is a service-down exception

Posted by 冷暖自知 on 2019-12-11 09:29:24
Question: We want to configure a Spring Kafka listener in such a way that if any external service is down, we don't lose the message consumed from Kafka; we want to keep retrying it until it is successfully processed. Could you please help with the configuration I can use to achieve this? And how can I handle it if I consume the messages in batches? We are using Kafka 0.9.

Answer 1: I think retry best fits your requirements: "To retry deliveries, convenient listener adapters - …
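A sketch of that approach for a record (non-batch) listener, assuming a spring-kafka version whose container factory accepts a RetryTemplate (the back-off values are illustrative):

    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.retry.backoff.ExponentialBackOffPolicy;
    import org.springframework.retry.policy.AlwaysRetryPolicy;
    import org.springframework.retry.support.RetryTemplate;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setRetryTemplate(retryTemplate());
        return factory;
    }

    private RetryTemplate retryTemplate() {
        RetryTemplate template = new RetryTemplate();
        // Redeliver to the listener until the external service recovers;
        // the offset is only committed after processing succeeds.
        template.setRetryPolicy(new AlwaysRetryPolicy());
        ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
        backOff.setInitialInterval(1000);
        backOff.setMaxInterval(30000);
        template.setBackOffPolicy(backOff);
        return template;
    }

Note that retrying forever blocks the consumer thread, so on an old client such as 0.9 the session timeout has to be large enough that the consumer isn't kicked out of the group while it waits.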

How does @SendTo send the message to the related topic?

Posted by 瘦欲@ on 2019-12-11 07:28:15
Question: I am using ReplyingKafkaTemplate in my REST controller to return a synchronous response. I am also setting the REPLY_TOPIC header. For the listener microservice part:

    @KafkaListener(topics = "${kafka.topic.request-topic}")
    @SendTo
    public Model listen(Model<SumModel, SumResp> request) throws InterruptedException {
        SumModel model = request.getRequest();
        int sum = model.getNumber1() + model.getNumber2();
        SumResp resp = new SumResp(sum);
        request.setReply(resp);
        request.setAdditionalProperty("sum", sum); …
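When @SendTo has no value, the reply destination is resolved from the KafkaHeaders.REPLY_TOPIC header of the incoming record - the header the asker sets on the request - so the listener just echoes the reply back to whatever topic the caller named. A sketch of the sending side under that model (topic names are illustrative; the Model type comes from the question and the wrapper method is invented):

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.header.internals.RecordHeader;
    import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
    import org.springframework.kafka.requestreply.RequestReplyFuture;
    import org.springframework.kafka.support.KafkaHeaders;

    public Model requestReply(ReplyingKafkaTemplate<String, Model, Model> template,
            Model request) throws Exception {
        ProducerRecord<String, Model> record =
                new ProducerRecord<>("request-topic", request);
        // This header is what the parameterless @SendTo resolves against.
        record.headers().add(
                new RecordHeader(KafkaHeaders.REPLY_TOPIC, "reply-topic".getBytes()));
        RequestReplyFuture<String, Model, Model> future = template.sendAndReceive(record);
        ConsumerRecord<String, Model> reply = future.get(10, TimeUnit.SECONDS);
        return reply.value();
    }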

spring-kafka - how to read one topic from the beginning, while reading another one from the end?

Posted by 微笑、不失礼 on 2019-12-11 06:52:37
Question: I'm writing a spring-kafka app in which I need to read two topics, test1 and test2:

    public class Receiver {

        private static final Logger LOGGER = LoggerFactory.getLogger(Receiver.class);

        @KafkaListener(id = "bar", topicPartitions = {
                @TopicPartition(topic = "test1", partitions = { "0" }),
                @TopicPartition(topic = "test2", partitions = { "0" }) })
        public void receiveMessage(String message) {
            LOGGER.info("received message='{}'", message);
        }
    }

My config looks like this:

    @Configuration
    @EnableKafka …
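One approach, sketched assuming spring-kafka 2.x: have the listener bean implement ConsumerSeekAware and seek per topic when partitions are assigned:

    import java.util.Map;
    import org.apache.kafka.common.TopicPartition;
    import org.springframework.kafka.listener.ConsumerSeekAware;

    public class Receiver implements ConsumerSeekAware {

        @Override
        public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
                ConsumerSeekCallback callback) {
            for (TopicPartition tp : assignments.keySet()) {
                if ("test1".equals(tp.topic())) {
                    // replay test1 from the start
                    callback.seekToBeginning(tp.topic(), tp.partition());
                }
                else if ("test2".equals(tp.topic())) {
                    // only new records on test2
                    callback.seekToEnd(tp.topic(), tp.partition());
                }
            }
        }

        @Override
        public void registerSeekCallback(ConsumerSeekCallback callback) {
            // not needed when seeking only on assignment
        }

        @Override
        public void onIdleContainer(Map<TopicPartition, Long> assignments,
                ConsumerSeekCallback callback) {
            // not needed
        }

        // ... the @KafkaListener method from above goes here unchanged ...
    }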

Spring Kafka: Poll for new messages instead of being notified using `onMessage`

Posted by £可爱£侵袭症+ on 2019-12-11 06:39:39
Question: I am using Spring Kafka in my project, as it seemed a natural choice in a Spring-based project to consume Kafka messages. To consume messages, I can make use of the MessageListener interface; Spring Kafka internally takes care of invoking my onMessage method for each new message. However, in my setting I would prefer to explicitly poll for new messages and work on them sequentially (which will take a few seconds). As a workaround, I might just block inside my onMessage implementation, or buffer the …
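A sketch of the buffering workaround the question alludes to: onMessage only enqueues, and application code drains a bounded queue at its own pace (the class name and capacity are invented):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.kafka.listener.MessageListener;

    public class BufferingListener implements MessageListener<String, String> {

        // Bounded, so a slow worker back-pressures the container thread.
        private final BlockingQueue<ConsumerRecord<String, String>> buffer =
                new LinkedBlockingQueue<>(1000);

        @Override
        public void onMessage(ConsumerRecord<String, String> record) {
            try {
                buffer.put(record); // blocks while the buffer is full
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        // Called from the application's own worker thread.
        public ConsumerRecord<String, String> poll() throws InterruptedException {
            return buffer.take();
        }
    }

Blocking the container thread for too long can still trip the consumer's poll-interval/session timeouts, so those settings need enough headroom.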

How to commit Kafka offsets after processing batch records

Posted by 烈酒焚心 on 2019-12-11 06:01:42
Question: I'm using spring-kafka, consuming batches of records from a Kafka topic and committing offsets with AbstractMessageListenerContainer.AckMode.BATCH. In my case processing a batch of records takes time (approximately 20 seconds), and the consumer thread waits until the batch is processed and then polls again (committing the offsets at that poll). In this case I will hand the list of records to a thread (name: ProcessThread) that will process all the records and give the result back to the consumer thread, and …
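If the goal is to commit only after the batch work completes, one option is manual acknowledgment, sketched here assuming spring-kafka 2.2+ (where AckMode lives on ContainerProperties) and with the processing kept on the consumer thread; process() stands in for the batch work:

    import java.util.List;
    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.listener.ContainerProperties.AckMode;
    import org.springframework.kafka.support.Acknowledgment;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> batchFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setBatchListener(true);
        // Offsets are committed only when the listener calls acknowledge().
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        return factory;
    }

    @KafkaListener(topics = "my-topic", containerFactory = "batchFactory")
    public void listen(List<String> payloads, Acknowledgment ack) {
        process(payloads);  // the ~20-second batch work
        ack.acknowledge();  // commit after it succeeds
    }

With ~20-second batches, max.poll.interval.ms must comfortably exceed the worst-case processing time, or the consumer will be rebalanced out of the group.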