Kafka consumer is very slow to consume data and only consuming first 500 records

Submitted by 寵の児 on 2019-12-11 17:56:13

Question


I am trying to integrate MongoDB and Storm-Kafka. The Kafka producer produces data from MongoDB, but the consumer side fails to fetch all of the records: it only consumes 500-600 records out of 1 million.

There are no errors in the log file, and the topology is still alive, but it is not processing any further records.

Kafka version: 0.10.*, Storm version: 1.2.1

Do I need to add any configs on the consumer side?
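For reference, the old storm-kafka spout exposes a few other tuning fields on SpoutConfig (inherited from KafkaConfig) that are commonly checked when a spout stalls after a burst of records. The sketch below is illustrative only; the values are examples, not a confirmed fix for this problem:

```java
// Illustrative SpoutConfig tuning (storm-kafka, ZooKeeper-based spout).
// Values are examples; hosts/topic/zkRoot/consumerGroupId come from the question's setup.
SpoutConfig spoutConfig = new SpoutConfig(hosts, topic, zkRoot, consumerGroupId);
spoutConfig.fetchSizeBytes = 25000000;            // per-fetch size, as in the question
spoutConfig.bufferSizeBytes = 25000000;           // consumer socket buffer; often sized to match fetchSizeBytes
spoutConfig.fetchMaxWait = 10000;                 // max ms the broker waits before answering a fetch
spoutConfig.maxOffsetBehind = Long.MAX_VALUE;     // don't drop tuples whose offsets fall far behind
spoutConfig.useStartOffsetTimeIfOffsetOutOfRange = true; // recover if the stored offset is out of range
```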

    // Topology-level settings
    conf.put(Config.TOPOLOGY_BACKPRESSURE_ENABLE, false);
    conf.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 2048);
    conf.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE, 16384);
    conf.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE, 16384);

    // Spout construction (the method header was missing from the snippet;
    // this signature is restored to match the trailing return and brace)
    private KafkaSpout buildKafkaSpout() {
        BrokerHosts hosts = new ZkHosts(zookeeperUrl);
        SpoutConfig spoutConfig = new SpoutConfig(hosts, topic, zkRoot, consumerGroupId);
        spoutConfig.scheme = new KeyValueSchemeAsMultiScheme(new StringKeyValueScheme());
        spoutConfig.fetchSizeBytes = 25000000;
        if (startFromBeginning) {
            spoutConfig.startOffsetTime = OffsetRequest.EarliestTime();
        } else {
            spoutConfig.startOffsetTime = OffsetRequest.LatestTime();
        }
        return new KafkaSpout(spoutConfig);
    }

I want the Kafka spout to read all of the records from the Kafka topic that were produced by the producer.

Source: https://stackoverflow.com/questions/56000556/kafka-consumer-is-very-slow-to-consume-data-and-only-consuming-first-500-records
