I'm receiving an exception when starting a Kafka consumer:
org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions{test-0=29898318}
I'm using Kafka version 9.0.0 with Java 7.
So you are trying to access offset 29898318 in topic test, partition 0, which is not available right now.
There could be two causes for this:
- Your topic partition 0 may not have that many messages
- Your message at offset 29898318 might already have been deleted by the retention period
To avoid this you can do one of the following:
- Set the auto.offset.reset config to either smallest or largest (with the new consumer API that throws this exception, the equivalent values are earliest and latest). You can find more info on this in the Kafka consumer configuration documentation.
- Get the smallest offset available for a topic partition by running the following Kafka command line tool:
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker-ip:9092> --topic <topic-name> --time -2
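For the new Java consumer (the org.apache.kafka.clients.consumer API that throws this exception), the reset policy is passed in as a consumer property. A minimal sketch, where the broker address and group id are placeholders:

```java
import java.util.Properties;

public class ConsumerResetConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address and group id; adjust for your cluster.
        props.put("bootstrap.servers", "broker-ip:9092");
        props.put("group.id", "my-group");
        // The new consumer accepts "earliest", "latest" or "none"
        // (the old consumer used smallest/largest). "earliest" makes the
        // consumer jump to the oldest available offset instead of throwing
        // OffsetOutOfRangeException when the requested offset is gone.
        props.put("auto.offset.reset", "earliest");
        System.out.println(props.getProperty("auto.offset.reset"));
        // These props would then be passed to new KafkaConsumer<>(props).
    }
}
```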
Hope this helps!
I hit this SO question when running a Kafka Streams state store with a specific changelog topic config:
- cleanup.policy=compact,delete
- retention of 4 days
If Kafka Streams still has a snapshot file pointing to an offset that no longer exists, the restore consumer is configured to fail; it does not fall back to the earliest offset. This scenario can happen when very little data comes in, or when the application is down. In both cases, when there's no commit within the changelog retention period, the snapshot file won't be updated. (This happens on a per-partition basis.)
The easiest way to resolve this issue is to stop your Kafka Streams application, remove its local state directory, and restart the application.
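A minimal sketch of that clean-up step, assuming the default state directory layout <state.dir>/<application.id> (with "my-app-id" as a placeholder application id). Kafka Streams also exposes KafkaStreams#cleanUp() for this purpose, which must be called while the instance is not running:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class WipeStateDir {
    // Recursively delete a Kafka Streams local state directory so the
    // store is rebuilt from the changelog topic on the next start.
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return; // nothing to do
        }
        Files.walkFileTree(dir, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                Files.delete(file);
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path d, IOException exc) throws IOException {
                Files.delete(d); // delete the directory once it is empty
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        // Placeholder path; the real location is <state.dir>/<application.id>,
        // where state.dir defaults to /tmp/kafka-streams.
        Path stateDir = Paths.get("/tmp/kafka-streams", "my-app-id");
        deleteRecursively(stateDir);
    }
}
```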
Source: https://stackoverflow.com/questions/37320643/kafka-consumer-offsets-out-of-range-with-no-configured-reset-policy-for-partitio