Handle failure in case the Kafka Broker is down


Question


I have a Kafka broker running and messages are being consumed successfully, but I want to handle the case where the broker goes down, on the Kafka consumer side.

I have read this thread, but it turns out those failures are only logged at DEBUG level. I was wondering if I can handle this manually via an event trigger, because I want to deal with the broker failure myself. Does Spring Kafka provide something for this situation?

Please tell me if any more details are needed. I would highly appreciate any suggestions that point me in the right direction. Thanks.

EDIT 1:

Following @Artem's answer, I've tried this in my KafkaConsumer:

@EventListener
public void handleEvent(NonResponsiveConsumerEvent event) {
    LOGGER.info("*****************************************");
    LOGGER.info("Hello NonResponsiveConsumer {}", event);
    LOGGER.info("*****************************************");     
}

This event is triggered ONCE even when the Kafka server is running (when I start the application for the first time). Please see the logs below:

....
....
2017-12-04 13:08:02,177 INFO o.s.c.s.DefaultLifecycleProcessor - Starting beans in phase 0
2017-12-04 13:08:02,218 INFO o.a.k.c.c.ConsumerConfig - ConsumerConfig values: 
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [52.214.67.60:9091]
    check.crcs = true
    client.id = 
    connections.max.idle.ms = 540000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = workerListener
    heartbeat.interval.ms = 3000
    interceptor.classes = null
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer

2017-12-04 13:08:02,346 INFO o.a.k.c.u.AppInfoParser - Kafka version : 0.11.0.0
2017-12-04 13:08:02,346 INFO o.a.k.c.u.AppInfoParser - Kafka commitId : cb8625948210849f
2017-12-04 13:08:02,350 INFO o.s.s.c.ThreadPoolTaskScheduler - Initializing ExecutorService 
2017-12-04 13:08:02,363 INFO o.s.b.a.e.j.EndpointMBeanExporter - Located managed bean 'auditEventsEndpoint': registering with JMX server as MBean [org.springframework.boot:type=Endpoint,name=auditEventsEndpoint]
2017-12-04 13:08:02,397 INFO c.t.m.w.c.k.c.KafkaConsumer - *****************************************
2017-12-04 13:08:02,397 INFO c.t.m.w.c.k.c.KafkaConsumer - Hello NonResponsiveConsumer ListenerContainerIdleEvent [timeSinceLastPoll=1.51237491E9s, listenerId=workerListener-0, container=KafkaMessageListenerContainer [id=workerListener-0, clientIndex=-0, topicPartitions=null], topicPartitions=null]
2017-12-04 13:08:02,403 INFO c.t.m.w.c.k.c.KafkaConsumer - *****************************************
....
....

EDIT 2:

Issue resolved by upgrading spring-kafka to 1.3.2.


Answer 1:


Starting with version 1.3.1, there is a:

/**
 * An event that is emitted when a consumer is not responding to
 * the poll; a possible indication that the broker is down.
 *
 * @author Gary Russell
 * @since 1.3.1
 *
 */
@SuppressWarnings("serial")
public class NonResponsiveConsumerEvent extends KafkaEvent {

And quoting the docs:

In addition, if the broker is unreachable (at the time of writing), the consumer poll() method does not exit, so no messages are received, and idle events can't be generated. To solve this issue, the container will publish a NonResponsiveConsumerEvent if a poll does not return within 3x the pollTimeout property. By default, this check is performed once every 30 seconds in each container. You can modify the behavior by setting the monitorInterval and noPollThreshold properties in the ContainerProperties when configuring the listener container. Receiving such an event will allow you to stop the container(s), thus waking the consumer so it can terminate.
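Not part of the original answer: below is a minimal sketch of how those properties and such an event listener might be wired together, assuming a typical Spring Boot configuration class. The bean layout, the monitor interval value, and the "workerListener" id (taken from the question's group.id) are illustrative assumptions, not a prescribed setup.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.event.NonResponsiveConsumerEvent;

@Configuration
public class KafkaListenerConfig {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory(
            ConsumerFactory<String, Object> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, Object> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Run the monitor task every 10 seconds instead of the default 30.
        factory.getContainerProperties().setMonitorInterval(10);
        // Publish a NonResponsiveConsumerEvent when no poll has completed
        // within noPollThreshold * pollTimeout (the default threshold is 3.0).
        factory.getContainerProperties().setNoPollThreshold(3.0f);
        return factory;
    }

    @EventListener
    public void onNonResponsiveConsumer(NonResponsiveConsumerEvent event) {
        // Stopping the container wakes the blocked consumer so it can terminate;
        // "workerListener" is the listener id used in this question
        // (null-check omitted for brevity).
        registry.getListenerContainer("workerListener").stop();
    }
}

Note that with concurrency greater than 1, stopping the parent container stops all of its child consumers; the event itself carries details (the logs above show a listenerId field in its toString) if you want to react more selectively.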



Source: https://stackoverflow.com/questions/47524315/handle-failure-in-case-the-kafka-broker-is-down
