Storm KafkaSpout stopped consuming messages from Kafka topic

Submitted by 我的梦境 on 2019-12-05 15:57:10

With help from the Storm mailing list I was able to tune the KafkaSpout and resolve the issue. The following settings work for me.

config.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 2048);          // max un-acked tuples in flight per spout task
config.put(Config.TOPOLOGY_BACKPRESSURE_ENABLE, false);       // disable the automatic backpressure mechanism
config.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 16384); // executor receive queue size, in tuples
config.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE, 16384);    // executor send queue size, in tuples
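
For context, this is roughly where those settings plug in when submitting the topology. A minimal sketch, assuming the storm-kafka-client spout; the broker list, topic, and topology name are placeholders, not the exact values from my setup:

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

public class KafkaTopologyMain {
    public static void main(String[] args) throws Exception {
        Config config = new Config();
        config.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 2048);
        config.put(Config.TOPOLOGY_BACKPRESSURE_ENABLE, false);
        config.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 16384);
        config.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE, 16384);

        // Placeholder broker list and topic name.
        KafkaSpoutConfig<String, String> spoutConfig =
            KafkaSpoutConfig.builder("kafka1:9092,kafka2:9092,kafka3:9092", "my-topic").build();

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 4); // 4 spout executors
        // builder.setBolt(...) wiring omitted.

        StormSubmitter.submitTopology("my-topology", config, builder.createTopology());
    }
}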

I tested by sending bursts of 20k-50k messages, with a 1-second pause between bursts. Each message was 2048 bytes.

I am running a 3-node cluster, my topology has 4 spouts, and the topic has 64 partitions.

After 200M messages it's still working.
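
For reference, the kind of bursty load described above can be generated with a plain Kafka producer along these lines. This is a sketch, not my actual test harness; the broker and topic names are placeholders:

import java.util.Properties;
import java.util.Random;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class BurstProducer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092"); // placeholder broker
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        Random random = new Random();
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            while (true) {
                int burst = 20_000 + random.nextInt(30_001);  // 20k-50k messages per burst
                for (int i = 0; i < burst; i++) {
                    byte[] payload = new byte[2048];          // 2048-byte message
                    random.nextBytes(payload);
                    producer.send(new ProducerRecord<>("my-topic", payload));
                }
                producer.flush();
                Thread.sleep(1000);                           // 1-second pause between bursts
            }
        }
    }
}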

  1. Check if the producer is actually writing to the topic you expect.
  2. Make sure the spouts can reach the Kafka brokers at the network level. You can check this with telnet (see the connectivity sketch below).
  3. Can the spouts reach ZooKeeper? Check it the same way.

Source: KafkaSpout is not receiving anything from Kafka
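
The telnet-style reachability check can also be scripted from the worker hosts. A minimal sketch with placeholder host/port values for a broker and a ZooKeeper node:

import java.net.InetSocketAddress;
import java.net.Socket;

public class ReachabilityCheck {
    public static void main(String[] args) {
        // Placeholder endpoints: a Kafka broker and a ZooKeeper node.
        check("kafka1", 9092);
        check("zookeeper1", 2181);
    }

    static void check(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000); // 3-second timeout
            System.out.println(host + ":" + port + " reachable");
        } catch (Exception e) {
            System.out.println(host + ":" + port + " NOT reachable: " + e.getMessage());
        }
    }
}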

If all three of the above check out, then:

Kafka topics have a fixed retention window (time- and/or size-based). Once the retention limit is exceeded, Kafka deletes the oldest messages.

So here is what might be happening: you are pushing data into Kafka faster than the consumers can process it, so the spout's offsets fall further and further behind until they point at messages that retention has already deleted.

Source: Storm-kafka spout not fast enough to process the information
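
One way to confirm that scenario is to compare the spout's committed offsets with the partition beginning/end offsets: if the committed offset has fallen behind the beginning offset, retention has already removed data the spout never read. A rough sketch, assuming the spout commits offsets to Kafka (as storm-kafka-client does) and kafka-clients 2.4+; the broker, topic, and consumer group id are placeholders:

import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class LagCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092");        // placeholder broker
        props.put("group.id", "storm-kafka-spout-group");      // placeholder: the spout's consumer group
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            Set<TopicPartition> partitions = consumer.partitionsFor("my-topic").stream()
                .map(p -> new TopicPartition(p.topic(), p.partition()))
                .collect(Collectors.toSet());

            Map<TopicPartition, Long> begin = consumer.beginningOffsets(partitions);
            Map<TopicPartition, Long> end = consumer.endOffsets(partitions);
            Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(partitions);

            for (TopicPartition tp : partitions) {
                OffsetAndMetadata c = committed.get(tp);
                long committedOffset = (c == null) ? -1 : c.offset();
                // If nothing was ever committed, treat the whole retained range as lag.
                long lag = (committedOffset < 0)
                    ? end.get(tp) - begin.get(tp)
                    : end.get(tp) - committedOffset;
                System.out.printf("%s begin=%d end=%d committed=%d lag=%d%n",
                    tp, begin.get(tp), end.get(tp), committedOffset, lag);
            }
        }
    }
}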
