Kafka consumer offsets out of range with no configured reset policy for partitions


Question


I'm receiving an exception when starting my Kafka consumer.

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions{test-0=29898318}

I'm using Kafka version 0.9.0.0 with Java 7.


Answer 1:


You are trying to access offset 29898318 in partition 0 of topic test, and that offset is not currently available.

There are two likely causes:

  1. Partition 0 of your topic may never have had that many messages.
  2. The message at offset 29898318 may already have been deleted by the retention policy.

To avoid this, you can do one of the following:

  1. Set the auto.offset.reset config to either smallest or largest (with the new Java consumer the equivalent values are earliest or latest); see the Kafka consumer configuration docs for details, and the configuration sketch after the command below.
  2. You can get the smallest offset available for a topic partition by running the following Kafka command-line tool:

command:

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker-ip:9092> --topic <topic-name> --time -2
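
If you are using the new (0.9+) Java consumer, the reset policy goes into the consumer properties. Below is a minimal sketch, assuming a placeholder broker address, group id and topic name (none of these come from the original question):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ResetPolicyExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address and group id -- replace with your own.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-ip:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // The new consumer accepts "earliest", "latest" or "none";
        // "none" is what raises OffsetOutOfRangeException when the
        // committed offset has been deleted.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        try {
            consumer.subscribe(Collections.singletonList("test"));
            // poll(...) now falls back to the earliest available offset
            // instead of throwing when the stored offset is out of range.
            consumer.poll(1000);
        } finally {
            consumer.close();
        }
    }
}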

Hope this helps!




Answer 2:


I hit this SO question when running a Kafka Streams state store whose changelog topic had a specific config (sketched after the list below):

  • cleanup.policy=compact,delete
  • a retention of 4 days
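
For illustration only, here is a minimal sketch of how such per-changelog-topic overrides can be declared with the Materialized API of more recent Kafka Streams versions; the store name, topic name and serdes are placeholders, not taken from the original answer:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.config.TopicConfig;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class ChangelogConfigExample {
    public static void main(String[] args) {
        // Per-changelog-topic overrides matching the answer:
        // compact+delete cleanup and a retention of 4 days.
        Map<String, String> changelogConfig = new HashMap<>();
        changelogConfig.put(TopicConfig.CLEANUP_POLICY_CONFIG, "compact,delete");
        changelogConfig.put(TopicConfig.RETENTION_MS_CONFIG, String.valueOf(4L * 24 * 60 * 60 * 1000));

        StreamsBuilder builder = new StreamsBuilder();
        builder.table(
                "input-topic",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("my-store")
                        .withKeySerde(Serdes.String())
                        .withValueSerde(Serdes.String())
                        .withLoggingEnabled(changelogConfig));
        // builder.build() would then be passed to new KafkaStreams(...).
    }
}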

If Kafka Streams still has a local snapshot (checkpoint) file pointing to an offset that no longer exists in the changelog topic, the restore consumer is configured to fail; it does not fall back to the earliest offset. This can happen when very little data comes in, or when the application has been down, so that nothing is committed within the changelog retention period and the snapshot file is never updated. (This happens on a per-partition basis.)

The easiest way to resolve this issue is to stop your Kafka Streams application, remove its local state directory, and restart the application.
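
If you prefer not to delete the directory by hand, Kafka Streams also exposes KafkaStreams#cleanUp(), which removes the instance's local state directory when the application is not running. A minimal sketch, assuming a placeholder application id and broker address:

import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class WipeLocalStateExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder application id and broker address.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-ip:9092");

        StreamsBuilder builder = new StreamsBuilder();
        // ... build the topology with the state store here ...

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        // cleanUp() deletes this instance's local state directory; it may only
        // be called while the application is not running (before start() or
        // after close()), and forces a full restore from the changelog topic.
        streams.cleanUp();
        streams.start();
    }
}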



Source: https://stackoverflow.com/questions/37320643/kafka-consumer-offsets-out-of-range-with-no-configured-reset-policy-for-partitio
