Consumer not receiving messages, kafka console, new consumer api, Kafka 0.9


Question


I am doing the Kafka Quickstart for Kafka 0.9.0.0.

I have zookeeper listening at localhost:2181 because I ran

bin/zookeeper-server-start.sh config/zookeeper.properties

I have a single broker listening at localhost:9092 because I ran

bin/kafka-server-start.sh config/server.properties

I have a producer posting to topic "test" because I ran

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
yello
is this thing on?
let's try another
gimme more

The old-API consumer works when I run

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

However, the new-API consumer gives me nothing when I run

bin/kafka-console-consumer.sh --new-consumer --topic test --from-beginning \
    --bootstrap-server localhost:9092

Is it possible to subscribe to a topic from the console consumer using the new api? How can I fix it?


Answer 1:


On my Mac I was facing the same issue: the console consumer did not consume any messages when I used the command

kafka-console-consumer --bootstrap-server localhost:9095 --from-beginning --topic my-replicated-topic

But when I tried with

kafka-console-consumer --bootstrap-server localhost:9095 --from-beginning --topic my-replicated-topic --partition 0

it happily listed the messages that were sent. Is this a bug in Kafka 1.10.11?
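When reading a single partition directly works but a plain subscribe does not, the group coordinator is usually the part that is failing, since only the subscribe path uses it. A hedged diagnostic sketch (the broker address follows the commands above; on Kafka 0.9/0.10 the extra --new-consumer flag is also required here):

# list consumer groups; this goes through the group coordinator
kafka-consumer-groups --bootstrap-server localhost:9095 --list

If this call hangs or errors while the --partition read succeeds, the problem is coordinator availability, not the topic data.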




Answer 2:


I just ran into this issue, and the solution was to delete /brokers in ZooKeeper and restart the Kafka nodes.

bin/zookeeper-shell <zk-host>:2181

and then

rmr /brokers

Not sure why this solves it.

When I enabled debug logging, I saw this error message over and over again in the consumer:

2017-07-07 01:20:12 DEBUG AbstractCoordinator:548 - Sending GroupCoordinator request for group test to broker xx.xx.xx.xx:9092 (id: 1007 rack: null)
2017-07-07 01:20:12 DEBUG AbstractCoordinator:559 - Received GroupCoordinator response ClientResponse(receivedTimeMs=1499390412231, latencyMs=84, disconnected=false, requestHeader={api_key=10,api_version=0,correlation_id=13,client_id=consumer-1}, responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}}) for group test
2017-07-07 01:20:12 DEBUG AbstractCoordinator:581 - Group coordinator lookup for group test failed: The group coordinator is not available.
2017-07-07 01:20:12 DEBUG AbstractCoordinator:215 - Coordinator discovery failed for group test, refreshing metadata

(error_code=15 is COORDINATOR_NOT_AVAILABLE: the broker could not provide a group coordinator for the consumer group.)
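A minimal sketch of the same recovery, assuming the stock zookeeper-shell.sh script and that the brokers have been stopped first; listing /brokers/ids shows which broker registrations ZooKeeper still holds before you remove them:

# inspect the registered broker ids, then remove the stale znode tree
bin/zookeeper-shell.sh <zk-host>:2181 ls /brokers/ids
bin/zookeeper-shell.sh <zk-host>:2181 rmr /brokers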




Answer 3:


For me, the solution described in this thread worked: https://stackoverflow.com/a/51540528/7568227

Check that

offsets.topic.replication.factor

(or other replication-related config parameters) is not higher than the number of brokers. That was the problem in my case.

There was no need to use --partition 0 anymore after this fix.

Otherwise, I recommend following the debugging procedure described in that thread.
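A minimal sketch of that check, assuming ZooKeeper on localhost:2181: describe the internal offsets topic and compare its ReplicationFactor with the number of registered brokers (if the property is unset, the default factor is 3):

# show the replication factor the offsets topic was actually created with
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets
# count the live brokers registered in ZooKeeper
bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids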




Answer 4:


I was getting the same issue on my Mac. I checked the logs and found the following error.

Number of alive brokers '1' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor'). 
This error can be ignored if the cluster is starting up and not all brokers are up yet.

This can be fixed by changing the replication factor to 1. Add the following line to server.properties and restart Kafka/ZooKeeper.

offsets.topic.replication.factor=1
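Note that this property only takes effect when the internal __consumer_offsets topic is first created; if that topic already exists with a higher replication factor, changing the property alone will not alter it. A minimal sketch of applying the fix on a fresh single-broker installation (paths per the stock distribution):

# set the factor before the offsets topic is ever created, then restart
echo "offsets.topic.replication.factor=1" >> config/server.properties
bin/kafka-server-stop.sh
bin/kafka-server-start.sh -daemon config/server.properties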



Answer 5:


I had the same problem, and I have now figured it out.

When you use --zookeeper, it expects a ZooKeeper address as its parameter.

When you use --bootstrap-server, it expects a broker address as its parameter.
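Side by side, using the quickstart's default ports (2181 for ZooKeeper, 9092 for the broker):

# old consumer: takes the ZooKeeper address
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
# new consumer: takes a broker address
bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 --topic test --from-beginning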




Answer 6:


localhost is the culprit here. If you replace the word localhost with the actual hostname, it should work.

like this:

producer

./bin/kafka-console-producer.sh --broker-list \
sandbox-hdp.hortonworks.com:9092 --topic test

consumer:

./bin/kafka-console-consumer.sh --topic test --from-beginning \
    --bootstrap-server sandbox-hdp.hortonworks.com:9092
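A likely underlying cause is the hostname the broker advertises to clients. A hedged server.properties sketch (the hostname is the one from this answer; advertised.host.name is the 0.9-era property and advertised.listeners its newer replacement):

# make the broker advertise a hostname that clients can resolve
advertised.host.name=sandbox-hdp.hortonworks.com
# or, on newer brokers:
advertised.listeners=PLAINTEXT://sandbox-hdp.hortonworks.com:9092

Restart the broker afterwards so clients pick up the new metadata.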



Answer 7:


This problem also affects ingesting data from Kafka with Flume and sinking it to HDFS.

To fix the above issue:

  1. Stop Kafka brokers
  2. Connect to zookeeper cluster and remove /brokers z node
  3. Restart kafka brokers

There is no issue with the Kafka client version or the Scala version used in the cluster; ZooKeeper might simply hold stale information about the broker hosts.

To verify the fix:

Create a topic in Kafka (an illustrative command; the ZooKeeper address is taken from the Flume config below, and the partition/replication counts are assumptions):

$ kafka-topics --zookeeper ip-20-0-21-161.ec2.internal:2181 --create --topic rkkrishnaa3210 --partitions 1 --replication-factor 1

Open a producer channel and feed some messages to it.

$ kafka-console-producer --broker-list slavenode03.cdh.com:9092 --topic rkkrishnaa3210

Open a consumer channel to consume the message from a specific topic.

$ kafka-console-consumer --bootstrap-server slavenode01.cdh.com:9092 --topic rkkrishnaa3210 --from-beginning

To test this from flume:

Flume agent config:

rk.sources  = source1
rk.channels = channel1
rk.sinks = sink1

rk.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
rk.sources.source1.zookeeperConnect = ip-20-0-21-161.ec2.internal:2181
rk.sources.source1.topic = rkkrishnaa321
rk.sources.source1.groupId = flume1
rk.sources.source1.channels = channel1
rk.sources.source1.interceptors = i1
rk.sources.source1.interceptors.i1.type = timestamp
rk.sources.source1.kafka.consumer.timeout.ms = 100
rk.channels.channel1.type = memory
rk.channels.channel1.capacity = 10000
rk.channels.channel1.transactionCapacity = 1000
rk.sinks.sink1.type = hdfs
rk.sinks.sink1.hdfs.path = /user/ce_rk/kafka/%{topic}/%y-%m-%d
rk.sinks.sink1.hdfs.rollInterval = 5
rk.sinks.sink1.hdfs.rollSize = 0
rk.sinks.sink1.hdfs.rollCount = 0
rk.sinks.sink1.hdfs.fileType = DataStream
rk.sinks.sink1.channel = channel1

Run flume agent:

flume-ng agent --conf . -f flume.conf -Dflume.root.logger=DEBUG,console -n rk

Observe in the consumer logs that messages from the topic are written to HDFS.

18/02/16 05:21:14 INFO internals.AbstractCoordinator: Successfully joined group flume1 with generation 1
18/02/16 05:21:14 INFO internals.ConsumerCoordinator: Setting newly assigned partitions [rkkrishnaa3210-0] for group flume1
18/02/16 05:21:14 INFO kafka.SourceRebalanceListener: topic rkkrishnaa3210 - partition 0 assigned.
18/02/16 05:21:14 INFO kafka.KafkaSource: Kafka source source1 started.
18/02/16 05:21:14 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: source1: Successfully registered new MBean.
18/02/16 05:21:14 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: source1 started
18/02/16 05:21:41 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
18/02/16 05:21:42 INFO hdfs.BucketWriter: Creating /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920.tmp
18/02/16 05:21:48 INFO hdfs.BucketWriter: Closing /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920.tmp
18/02/16 05:21:48 INFO hdfs.BucketWriter: Renaming /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920.tmp to /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920
18/02/16 05:21:48 INFO hdfs.HDFSEventSink: Writer callback called.



Answer 8:


Can you please try it like this:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic



Answer 9:


Run the command below from the bin directory:

./kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server localhost:9092

"test" is the topic name




Answer 10:


Use this:

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

Note: Remove --new-consumer from your command

For reference see here: https://kafka.apache.org/quickstart




Answer 11:


In my case neither approach worked, so I also raised the log level to DEBUG in config/log4j.properties and started the console consumer:

./bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --from-beginning --topic MY_TOPIC

Then I got the log below:

[2018-03-11 12:11:25,711] DEBUG [MetadataCache brokerId=10] Error while fetching metadata for MY_TOPIC-3: leader not available (kafka.server.MetadataCache)

The point here is that I had two Kafka nodes, but one was down. By default, kafka-console-consumer will not consume if some partition is unavailable because its node is down (partition 3 in this case); this does not happen in my application. You can confirm the diagnosis with the check sketched after the list below.

Possible solutions are

  • Startup the down brokers
  • Delete the topic and create it again that way all partitions will be placed at the online broker node
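A hedged way to run that check, assuming ZooKeeper on 127.0.0.1:2181: ask kafka-topics to list only the partitions whose leader is unavailable:

# print partitions of MY_TOPIC that currently have no live leader
./bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --describe --topic MY_TOPIC --unavailable-partitions

Any partition printed here has no live leader, and a plain subscribe from the console consumer will stall on it.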



Answer 12:


The replication factor must be at least 3:

./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic test 



Answer 13:


In kafka_2.11-0.11.0.0, the --zookeeper option is deprecated; use --bootstrap-server instead, which takes a broker IP address and port. If you pass the correct broker parameters, you will be able to consume messages.

e.g. $ bin/kafka-console-consumer.sh --bootstrap-server :9093 --topic test --from-beginning

I'm using port 9093, for you it may vary.

Regards.



Source: https://stackoverflow.com/questions/34844209/consumer-not-receiving-messages-kafka-console-new-consumer-api-kafka-0-9
