Stateful-Retry with DeadLetterPublishingRecoverer causing RetryCacheCapacityExceededException

Submitted by 情到浓时终转凉″ on 2019-12-11 12:08:54

Question


My container factory has a SeekToCurrentErrorHandler that uses a DeadLetterPublishingRecoverer to publish certain 'not retryable' exception types to a DLT, and to keep seeking the same offset an infinite number of times for other kinds of exceptions. With this setup, after a certain number of payloads that result in non-retryable exceptions, the map that stores the retry contexts - MapRetryContextCache (spring-retry) - overflows, throwing a RetryCacheCapacityExceededException. From the initial looks of it, the retry contexts of messages handled by the DLT recoverer are not being removed from the MapRetryContextCache. Either that, or my configuration is incorrect.

SeekToCurrentErrorHandler eh = new SeekToCurrentErrorHandler(
        new DeadLetterPublishingRecoverer(kafkaTemplate), -1);
eh.addNotRetryableException(SomeNonRetryableException.class);
eh.setCommitRecovered(true);

ConcurrentKafkaListenerContainerFactory<String, String> factory = getContainerFactory();
factory.setErrorHandler(eh);
factory.setRetryTemplate(retryTemplate);
factory.setStatefulRetry(true);

Answer 1:


In order to clear the cache, you must do the recovery in the retry template, not in the error handler.

import java.util.stream.IntStream;

import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.TopicBuilder;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.retry.support.RetryTemplate;

@SpringBootApplication
public class So56846940Application {

    public static void main(String[] args) {
        SpringApplication.run(So56846940Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so56846940").partitions(1).replicas(1).build();
    }

    @Bean
    public NewTopic topicDLT() {
        return TopicBuilder.name("so56846940.DLT").partitions(1).replicas(1).build();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template,
            ConcurrentKafkaListenerContainerFactory<String, String> factory,
            DeadLetterPublishingRecoverer recoverer) {

        factory.setRetryTemplate(new RetryTemplate());
        factory.setStatefulRetry(true);
        // Recover in the retry template's recovery callback (not in the error
        // handler) so that, once retries are exhausted, the record's entry is
        // removed from the retry context cache.
        factory.setRecoveryCallback(context -> {
            recoverer.accept((ConsumerRecord<?, ?>) context.getAttribute("record"),
                    (Exception) context.getLastThrowable());
            return null;
        });

        return args -> IntStream.range(0, 5000).forEach(i -> template.send("so56846940", "foo"));
    }

    @KafkaListener(id = "so56846940", topics = "so56846940")
    public void listen(String in) {
        System.out.println(in);
        throw new RuntimeException();
    }

    @Bean
    public DeadLetterPublishingRecoverer recoverer(KafkaTemplate<String, String> template) {
        return new DeadLetterPublishingRecoverer(template);
    }

    @Bean
    public SeekToCurrentErrorHandler eh() {
        // Must allow at least as many failures as the retry template's max
        // attempts, so the template exhausts its retries and clears its cache.
        return new SeekToCurrentErrorHandler(4);
    }

}

The error handler must retry at least as many times as the retry template so that the retries are exhausted and we clear the cache.
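As a rough sketch of aligning the two limits (the specific counts of 3 and 4 here are illustrative assumptions, not from the original answer):

import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

// The template allows 3 attempts; the error handler allows 4 failures.
// Because 4 >= 3, the template exhausts its retries (invoking the recovery
// callback and evicting the record's cache entry) before the error handler
// stops re-seeking the record.
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
SeekToCurrentErrorHandler errorHandler = new SeekToCurrentErrorHandler(4);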

You should also configure the RetryTemplate with the same not retryable exceptions as the error handler.
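A sketch of one way to do that, reusing the SomeNonRetryableException class from the question (the SimpleRetryPolicy constructor used here takes the max attempts, a retryable-exception classification map, a traverse-causes flag, and a default classification):

import java.util.HashMap;
import java.util.Map;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

// Mirror eh.addNotRetryableException(SomeNonRetryableException.class):
// classify it as not retryable so the template recovers (and clears its
// cache entry) on the first delivery instead of retrying.
Map<Class<? extends Throwable>, Boolean> classified = new HashMap<>();
classified.put(SomeNonRetryableException.class, false);
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3, classified, true, true));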

We will clarify this in the reference manual.



Source: https://stackoverflow.com/questions/56846940/stateful-retry-with-deadletterpublishingrecoverer-causing-retrycachecapacityexce
