kafka-consumer-api

Apache Kafka: get the list of consumers on a specific topic

做~自己de王妃 submitted on 2019-12-06 04:57:51
As can be guessed from the title: is there a way to get the list of consumers on a specific topic in Java? Until now I am able to get the list of topics like this:

    final ListTopicsResult listTopicsResult = adminClient.listTopics();
    KafkaFuture<Set<String>> kafkaFuture = listTopicsResult.names();
    Set<String> topicNames = kafkaFuture.get();

but I haven't found a way to get the list of consumers on each topic.

I was recently solving the same problem for my Kafka client tool. It is not easy, but the only way I found from the code is the following:

    Properties props = ... // here you put your properties
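The answer above is cut off, but the general shape of the modern approach is to list every consumer group with the AdminClient, fetch each group's committed offsets, and keep the groups that have offsets for the topic in question. A minimal sketch, assuming kafka-clients 2.0+ (where listConsumerGroups and listConsumerGroupOffsets exist); the broker address and topic name are placeholders:

    import java.util.HashSet;
    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ConsumerGroupListing;

    public class ConsumersOfTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

            String topic = "my-topic"; // placeholder
            Set<String> groupsOnTopic = new HashSet<>();
            try (AdminClient adminClient = AdminClient.create(props)) {
                for (ConsumerGroupListing group : adminClient.listConsumerGroups().all().get()) {
                    // A group "consumes" the topic if it has committed offsets for any of its partitions
                    boolean usesTopic = adminClient
                            .listConsumerGroupOffsets(group.groupId())
                            .partitionsToOffsetAndMetadata().get()
                            .keySet().stream()
                            .anyMatch(tp -> tp.topic().equals(topic));
                    if (usesTopic) {
                        groupsOnTopic.add(group.groupId());
                    }
                }
            }
            System.out.println("Groups consuming " + topic + ": " + groupsOnTopic);
        }
    }

Note this only finds groups that commit offsets; a consumer that uses assign() and never commits will not show up this way.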

Re-processing/reading Kafka records/messages again - What is the purpose of Consumer Group Offset Reset?

时间秒杀一切 submitted on 2019-12-06 03:43:14
My Kafka topic has 10 messages in total, spread over 2 partitions with 5 messages each. My consumer group has 2 consumers, and each consumer has already read the 5 messages from its assigned partition. Now I want to re-process/read the messages in my topic from the start/beginning (offset 0). I stopped my Kafka consumers and ran the following command to reset the consumer group offset to 0:

    ./kafka-consumer-groups.sh --group cg1 --reset-offsets --to-offset 0 --topic t1 --execute --bootstrap-server "..."

My expectation was that once I restart my Kafka consumers they will start reading…
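For reference, the same rewind can also be done from code instead of the CLI; a minimal sketch with a recent kafka-clients, reusing the group id cg1 and topic t1 from the question (the broker address is a placeholder):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ResetToBeginning {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "cg1");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("t1"));
                consumer.poll(Duration.ofSeconds(1)); // join the group and receive a partition assignment
                consumer.seekToBeginning(consumer.assignment()); // rewind every assigned partition
                // subsequent poll() calls now re-read the topic from offset 0
            }
        }
    }

The CLI reset itself requires the group to be inactive, which is why stopping the consumers first, as described above, is necessary.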

Kafka 2.1.0 broker hangs for no reason

流过昼夜 submitted on 2019-12-06 02:40:37
At first all the brokers in the cluster start and work just fine, but sometimes one of the brokers runs into a problem, and several symptoms show up:

- the whole cluster hangs: neither producers nor consumers work, so the monitored network traffic drops to zero;
- describing the topic with kafka-topics.sh shows every replica as fine, including those on the abnormal broker id, and the information in ZooKeeper also looks normal;
- the number of open file descriptors on the abnormal broker, read from /proc/sys/fs/file-nr, increases gradually;
- netstat on the broker's listen port 9092 shows lots of connections in the CLOSE_WAIT state…

Kafka Consumer outputs excessive DEBUG statements to console (Eclipse)

喜欢而已 submitted on 2019-12-05 20:50:11
Question: I'm running some sample code from http://www.javaworld.com/article/3060078/big-data/big-data-messaging-with-kafka-part-1.html?page=2, and the KafkaConsumer consumes from the topic as desired, but every poll results in many DEBUG logs printed to stdout, which I don't want. I have tried changing all INFO and DEBUG levels to ERROR in /config/log4j.properties (I even ran grep to make sure), in particular setting log4j.logger.kafka=ERROR, kafkaAppender, but the problem persists. I referred to How…
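If the edited log4j.properties is not the one that ends up on the application's classpath (a common cause of this in Eclipse), the levels can also be forced programmatically before the consumer is created; a minimal sketch, assuming the log4j 1.x binding used by the article's sample code:

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class QuietKafkaLogs {
        public static void main(String[] args) {
            // Raise the level on the Kafka client loggers before constructing the consumer
            Logger.getLogger("org.apache.kafka").setLevel(Level.ERROR);
            Logger.getLogger("kafka").setLevel(Level.ERROR);

            // ... create and run the KafkaConsumer as in the article ...
        }
    }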

How can I initialize Kafka ConsumerRecords<String,String> for testing?

时光毁灭记忆、已成空白 submitted on 2019-12-05 11:02:28
I am writing test cases for Kafka consumer components and mocking kafkaConsumer.poll(), which returns an instance of ConsumerRecords<String,String>. I want to initialize ConsumerRecords and use it in the mock, but the constructors of ConsumerRecords expect an actual Kafka topic, which I don't have in tests. One way I can think of is keeping a serialized copy of the object and deserializing it to initialize ConsumerRecords. Is there any other way to achieve the same? Here is some example code (Kafka clients lib version 0.10.1.1):

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util…
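For what it's worth, ConsumerRecords can be built directly from an in-memory map keyed by TopicPartition; no real topic is needed, since the topic is just a string. A minimal sketch against the 0.10.x clients API (the topic name, key, and value are made up):

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.common.TopicPartition;

    public class ConsumerRecordsFixture {
        public static ConsumerRecords<String, String> sampleRecords() {
            TopicPartition tp = new TopicPartition("test-topic", 0); // any name works
            List<ConsumerRecord<String, String>> records = Collections.singletonList(
                    new ConsumerRecord<>("test-topic", 0, 0L, "key", "value"));
            Map<TopicPartition, List<ConsumerRecord<String, String>>> byPartition =
                    Collections.singletonMap(tp, records);
            return new ConsumerRecords<>(byPartition);
        }
    }

Alternatively, the clients library ships org.apache.kafka.clients.consumer.MockConsumer, which removes the need to mock poll() by hand.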

Request messages between two timestamps from Kafka

老子叫甜甜 submitted on 2019-12-05 10:54:51
Is it possible to consume messages from Kafka based on the time period in which the messages were ingested? Example: I want all messages ingested into a topic between 09:00 and 10:00 today (and it is now 12:00). If there is only a way to specify a start time, that's fine; my consumer can stop processing messages once it reaches the end time. I can see methods for requesting messages from a given offset, for getting the first available offset, and for getting the earliest available offset, but not for all messages after a given time.

You could use the offsetsForTimes method, which returns the offset whose…
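The answer is cut off above; the approach it points at can be sketched as follows. This assumes kafka-clients 0.10.1+ (where offsetsForTimes was added) and that message timestamps reflect ingestion time; the broker, topic, and partition are placeholders:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.TopicPartition;

    public class TimeWindowConsumer {
        public static void main(String[] args) {
            long endMs = System.currentTimeMillis() - 2 * 60 * 60 * 1000L;  // 10:00 if it is now 12:00
            long startMs = endMs - 60 * 60 * 1000L;                         // 09:00

            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "time-window-reader");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder
                consumer.assign(Collections.singletonList(tp));

                // Find the first offset whose timestamp is >= startMs and seek to it
                Map<TopicPartition, OffsetAndTimestamp> offsets =
                        consumer.offsetsForTimes(Collections.singletonMap(tp, startMs));
                OffsetAndTimestamp start = offsets.get(tp);
                if (start == null) {
                    return; // no message at or after startMs
                }
                consumer.seek(tp, start.offset());

                boolean done = false;
                while (!done) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        if (record.timestamp() >= endMs) { // past the end of the window: stop
                            done = true;
                            break;
                        }
                        // process record...
                    }
                }
            }
        }
    }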

How to use Consumer API of Kafka 0.8.2?

自作多情 submitted on 2019-12-05 10:41:33
Question: I'm getting started with the latest Kafka documentation, http://kafka.apache.org/documentation.html, but I ran into a problem when I tried to use the new Consumer API. I've done the job with the following steps:

1. Add a new dependency:

    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>0.8.2.1</version>
    </dependency>

2. Add configuration:

    Map<String, Object> config = new HashMap<String, Object>();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "host:9092");
    …
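One thing worth knowing here: in the 0.8.2.x kafka-clients artifact the new KafkaConsumer API was published but not yet functional; the first working release of the new consumer came with 0.9.0.0. On an 0.8.2 cluster the practical option is the old high-level consumer from the kafka_2.xx server artifact; a minimal sketch (ZooKeeper address, group, and topic are placeholders):

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class OldHighLevelConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "host:2181"); // the old consumer talks to ZooKeeper, not to the brokers
            props.put("group.id", "my-group");
            ConsumerConnector connector = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // One stream for the topic; the map value is the number of streams wanted
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                    connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
            ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();
            while (it.hasNext()) {
                System.out.println(new String(it.next().message()));
            }
        }
    }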

Seeing “partition doesn't exist” warnings/failures after using the Kafka partition reassignment tool

非 Y 不嫁゛ submitted on 2019-12-05 10:29:15
I am using Kafka 0.8.1.1. I have a 3-node Kafka cluster with some topics having around 5 partitions each. I planned to increase the number of nodes in the cluster to 5 and to move some partitions from the existing topics to the new brokers.

Previous partition state:

    broker1 : topic1 { partition 0 }
    broker2 : topic1 { partition 1, 2 }
    broker3 : topic1 { partition 3, 4 }

New intended state:

    broker1 : topic1 { partition 0 }
    broker2 : topic1 { partition 1 }
    broker3 : topic1 { partition 3 }
    broker4 : topic1 { partition 4 }
    broker5 : topic1 { partition 2 }

Command which I used:

    bin/kafka-reassign-partitions.sh -…
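The command above is cut off, but kafka-reassign-partitions.sh is driven by a JSON file passed via --reassignment-json-file. A sketch of what that file could look like for the intended state described, assuming a replication factor of 1 and broker ids 1-5 corresponding to broker1-broker5:

    {
      "version": 1,
      "partitions": [
        { "topic": "topic1", "partition": 0, "replicas": [1] },
        { "topic": "topic1", "partition": 1, "replicas": [2] },
        { "topic": "topic1", "partition": 2, "replicas": [5] },
        { "topic": "topic1", "partition": 3, "replicas": [3] },
        { "topic": "topic1", "partition": 4, "replicas": [4] }
      ]
    }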

Kafka pattern subscription: rebalancing is not being triggered on a new topic

℡╲_俬逩灬. submitted on 2019-12-05 09:56:35
According to the documentation in the Kafka javadocs, if I:

1. subscribe to a pattern, and then
2. create a topic that matches the pattern,

a rebalance should occur, which makes the consumer read from that new topic. But that's not happening. If I stop and start the consumer, it does pick up the new topic, so I know the new topic matches the pattern. There is a possible duplicate of this question at https://stackoverflow.com/questions/37120537/whitelist-filter-in-kafka-doesnt-pick-up-new-topics, but that question went nowhere. I'm watching the Kafka logs and there are no errors; it just doesn't trigger a rebalance. The…
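A likely explanation, for what it's worth: with pattern subscription the consumer only discovers new topics on a metadata refresh, which by default happens every five minutes (metadata.max.age.ms = 300000), so the rebalance is delayed rather than absent. Lowering that setting speeds up discovery; a minimal sketch (broker address, group id, and pattern are placeholders):

    import java.time.Duration;
    import java.util.Collection;
    import java.util.Properties;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PatternSubscriber {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "pattern-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, "5000"); // refresh metadata every 5s

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Pattern.compile("events-.*"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // fires again when a newly created matching topic is assigned
                    System.out.println("Assigned: " + partitions);
                }
            });
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // process record...
                }
            }
        }
    }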

Apache Kafka with High Level Consumer: Skip corrupted messages

馋奶兔 submitted on 2019-12-05 08:08:47
Question: I'm facing an issue with the high-level Kafka consumer (0.8.2.0): after consuming some amount of data, one of our consumers stops. After a restart it consumes some messages and stops again, with no error/exception or warning. After some investigation I found that the problem with the consumer was this exception:

    ERROR c.u.u.e.impl.kafka.KafkaConsumer - Error consuming message stream:
    kafka.message.InvalidMessageException: Message is corrupt (stored crc = 3801080313, computed crc = 2728178222)

Any ideas…
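The question is cut off here, but a common workaround for the old high-level consumer is to catch the exception around the iterator and keep going. A sketch only; whether the iterator can actually advance past the corrupt message depends on the consumer version, so treat this as a starting point rather than a guaranteed fix:

    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.message.InvalidMessageException;

    public class SkippingStreamReader {
        public static void consume(KafkaStream<byte[], byte[]> stream) {
            ConsumerIterator<byte[], byte[]> it = stream.iterator();
            while (true) {
                try {
                    if (!it.hasNext()) {
                        break;
                    }
                    byte[] value = it.next().message();
                    // process value...
                } catch (InvalidMessageException e) {
                    // corrupt message (CRC mismatch): log it and move on
                    System.err.println("Skipping corrupt message: " + e.getMessage());
                }
            }
        }
    }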