Question
Below is the relevant code snippet.
KafkaConsumerConfig class
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "consumerGroupId");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    // Note: with enable.auto.commit=false, the auto-commit interval below is ignored by the consumer.
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 10000);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 60000);
    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 1000);
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    ConsumerFactory<String, String> config = consumerFactory();
    factory.setConsumerFactory(config);
    factory.getContainerProperties().setCommitLogLevel(LogIfLevelEnabled.Level.INFO);
    factory.setConcurrency(kafka.getConcurrency());
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    factory.getContainerProperties().setSyncCommits(true);
    // A poll timeout of 0 makes each consumer poll return immediately, even when no records are available.
    factory.getContainerProperties().setPollTimeout(0);
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setConsumerRebalanceListener(new RebalanceListener());
    return factory;
}
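For context, these two factory methods are presumably @Bean methods inside an @EnableKafka @Configuration class (the question omits that wiring, so the skeleton below is an assumption). The key point is that @KafkaListener resolves the container factory by its default bean name, kafkaListenerContainerFactory:

@Configuration
@EnableKafka
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        /* property map exactly as shown above */
    }

    // @KafkaListener looks up this factory by the bean name "kafkaListenerContainerFactory".
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        /* container settings exactly as shown above */
    }
}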
RebalanceListener class
public class RebalanceListener implements ConsumerAwareRebalanceListener {

    private Map<TopicPartition, Long> partitionToUncommittedOffsetMap;

    public void setPartitionToUncommittedOffsetMap(Map<TopicPartition, Long> partitionToUncommittedOffsetMap) {
        this.partitionToUncommittedOffsetMap = partitionToUncommittedOffsetMap;
    }

    private void commitOffsets(Map<TopicPartition, Long> partitionToOffsetMap, Consumer<?, ?> consumer) {
        if (partitionToOffsetMap != null && !partitionToOffsetMap.isEmpty()) {
            Map<TopicPartition, OffsetAndMetadata> partitionToMetadataMap = new HashMap<>();
            for (Map.Entry<TopicPartition, Long> e : partitionToOffsetMap.entrySet()) {
                log.info("Adding partition & offset for {}", e.getKey());
                // Commit the offset of the next record to consume, hence the +1.
                partitionToMetadataMap.put(e.getKey(), new OffsetAndMetadata(e.getValue() + 1));
            }
            log.info("Consumer : {}, committing the offsets : {}", consumer, partitionToMetadataMap);
            consumer.commitSync(partitionToMetadataMap);
            partitionToOffsetMap.clear();
        }
    }

    @Override
    public void onPartitionsRevokedBeforeCommit(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        log.info("Consumer is going to commit the offsets {}", consumer);
        commitOffsets(partitionToUncommittedOffsetMap, consumer);
        log.info("Committed offsets {}", consumer);
    }
}
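Only onPartitionsRevokedBeforeCommit is overridden here; the other ConsumerAwareRebalanceListener callbacks keep their no-op default implementations. If assignment events should be visible as well, a purely illustrative sketch of the companion callback would be:

@Override
public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
    // Illustrative only: log what this consumer owns once the rebalance completes.
    log.info("Consumer {} assigned partitions {}", consumer, partitions);
}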
KafkaListener class
@KafkaListener(topics = "#{'${dimebox.kafka.topicName}'.split('" + COMMA + "')}", groupId = "${dimebox.kafka.consumerGroupId}")
public void receive(@Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                    @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                    @Header(KafkaHeaders.OFFSET) long offset,
                    Acknowledgment acknowledgment,
                    final String payload) {
    TopicPartition tp = new TopicPartition(topic, partition);
    Map<TopicPartition, Long> partitionToUncommittedOffsetMap = new ConcurrentHashMap<>();
    partitionToUncommittedOffsetMap.put(tp, offset);
    ((RebalanceListener) consumerConfig.kafkaListenerContainerFactory(new ApplicationProperties())
            .getContainerProperties().getConsumerRebalanceListener())
            .setPartitionToUncommittedOffsetMap(partitionToUncommittedOffsetMap);
    LOGGER.info("Insert Message Received from offset : {} ", offset);
    importerService.importer(payload);
    acknowledgment.acknowledge();
}
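Since the ack mode is MANUAL_IMMEDIATE, the offset is only committed when acknowledge() actually runs. A minimal sketch of guarding the downstream call (importerService, LOGGER, and offset are the names used above; whether to rethrow is an assumption about the intended behavior):

try {
    importerService.importer(payload);
    acknowledgment.acknowledge(); // commit only after the downstream call succeeds
} catch (RuntimeException ex) {
    LOGGER.error("Downstream call failed for offset {}", offset, ex);
    // Rethrowing hands the record to the container's error handler, which can retry or skip it;
    // swallowing it here leaves the offset uncommitted, so the record is redelivered only
    // after a rebalance or restart.
    throw ex;
}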
After applying this configuration, the Kafka consumer stopped abruptly: we process a message from Kafka, the downstream API returns an error, and an exception is thrown. After that, the normal message-processing flow stops entirely. Please note that the exception is handled at the application level and logged. Do we need to use the specific error-handling methods provided by the Spring Kafka library?
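For what it's worth, Spring Kafka does ship container-level error handling for exactly this situation. In spring-kafka 2.3.x (current when this question was asked), a SeekToCurrentErrorHandler with a back-off can be registered on the container factory so a failing record is retried a bounded number of times instead of the flow stalling; newer versions (2.8+) replace this with DefaultErrorHandler set via setCommonErrorHandler. A minimal sketch, with arbitrary back-off values:

import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

// Retry a failed record up to 2 more times, 1 second apart, before giving up on it.
factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(1000L, 2L)));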
Source: https://stackoverflow.com/questions/58957757/kafka-consumer-stopped-abruptly-after-getting-exception