kafka-consumer-api

Getting Kafka usage details

纵饮孤独 submitted on 2019-12-11 17:48:39
Question: I am trying to find ways to get current usage statistics for my Kafka cluster. I am looking to collect the following information:

- Number of topics in the Kafka cluster
- Number of partitions per Kafka broker
- Number of active consumers and producers
- Number of client connections per Kafka broker
- Number of messages on each partition, disk size, etc.
- Lagging replicas, consumer lag, etc.
- Active consumer groups
- Any other statistics that can and should be collected

Currently I am looking at collecting the
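
Much of this is exposed programmatically through the Kafka AdminClient (listConsumerGroups and listConsumerGroupOffsets need a 2.0+ client). A minimal sketch, assuming brokers at localhost:9092 and a hypothetical group id "my-group"; per-partition message counts, disk usage, and replica lag would additionally require offset arithmetic or the brokers' JMX metrics:

import java.util.Properties;
import org.apache.kafka.clients.admin.*;

public class ClusterStats {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Number of topics in the cluster
            System.out.println("Topics: " + admin.listTopics().names().get().size());
            // Brokers in the cluster
            admin.describeCluster().nodes().get()
                 .forEach(n -> System.out.println("Broker: " + n.idString()));
            // Active consumer groups
            admin.listConsumerGroups().all().get()
                 .forEach(g -> System.out.println("Group: " + g.groupId()));
            // Committed offsets for one (hypothetical) group; per-partition lag
            // is the log end offset minus this value
            admin.listConsumerGroupOffsets("my-group")
                 .partitionsToOffsetAndMetadata().get()
                 .forEach((tp, om) -> System.out.println(tp + " -> " + om.offset()));
        }
    }
}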

Spring Cloud Stream for Kafka with consumer/producer API: exactly-once semantics with transaction-id-prefix is not working as expected

烈酒焚心 submitted on 2019-12-11 17:22:32
Question: I have a scenario where I am seeing unexpected behavior, with a total of 3 different services:

1. The first service listens on a Solace queue and produces to Kafka topic-1 (where transactions are enabled).
2. The second service listens on Kafka topic-1 and writes to another Kafka topic-2 (no manual commits, transactions enabled to produce to the other topic, auto-commit offset set to false, and isolation.level set to read_committed).
3. The third service listens on Kafka topic-2 and
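
For reference, the second service's exactly-once hop amounts, at the plain Kafka client level, to a transactional consume-process-produce loop; Spring Cloud Stream derives the transactional.id from the configured transaction-id-prefix. A minimal sketch (broker address, group id, and transactional id are assumptions, and abort/retry handling is omitted):

import java.time.Duration;
import java.util.*;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;

public class TopicOneToTopicTwo {
    public static void main(String[] args) {
        Properties cp = new Properties();
        cp.put("bootstrap.servers", "localhost:9092");
        cp.put("group.id", "service-2");
        cp.put("enable.auto.commit", "false");        // offsets are committed via the transaction
        cp.put("isolation.level", "read_committed");  // skip records from aborted transactions
        cp.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        cp.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties pp = new Properties();
        pp.put("bootstrap.servers", "localhost:9092");
        pp.put("transactional.id", "tx-service-2");   // what transaction-id-prefix generates
        pp.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        pp.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {
            producer.initTransactions();
            consumer.subscribe(Collections.singletonList("topic-1"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                for (ConsumerRecord<String, String> r : records) {
                    producer.send(new ProducerRecord<>("topic-2", r.key(), r.value()));
                    offsets.put(new TopicPartition(r.topic(), r.partition()),
                                new OffsetAndMetadata(r.offset() + 1));
                }
                // Commit the consumed offsets inside the same transaction as the writes
                producer.sendOffsetsToTransaction(offsets, "service-2");
                producer.commitTransaction();
            }
        }
    }
}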

Kafka command-line consumer reads, but Java consumer cannot

我是研究僧i submitted on 2019-12-11 17:05:48
Question: I have manually created the topic test with this command:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

and using this command:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

I inserted these records:

This is a message
This is another message
This is a message2

First, I consume the messages through the command line like this:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from
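
For comparison, a minimal Java consumer that should see the same records the console consumer sees. The usual culprit here is auto.offset.reset, which defaults to latest for a new group, so records produced before the consumer starts are skipped unless it is set to earliest (the group id below is hypothetical):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;

public class TestConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        props.put("auto.offset.reset", "earliest");  // mirrors the console flag for reading from the start
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test"));
            while (true) {
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
                    System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
                }
            }
        }
    }
}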

“Commit failed for offsets” while committing offset asynchronously

落花浮王杯 submitted on 2019-12-11 16:58:47
Question: I have a Kafka consumer consuming data from a particular topic, and I am seeing the exception below. I am using Kafka version 0.10.0.0.

LoggingCommitCallback.onComplete: Commit failed for offsets= {....}, eventType= some_type, time taken= 19ms, error= org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was
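
The message means the consumer spent too long between poll() calls, so the group coordinator kicked it out before the commit arrived. On 0.10.0.0 the usual remedies are fetching fewer records per poll and/or raising session.timeout.ms (0.10.1+ separates this concern into max.poll.interval.ms). A minimal sketch with illustrative, untuned values and a hypothetical topic/group:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;

public class AsyncCommitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("enable.auto.commit", "false");
        props.put("max.poll.records", "100");      // fewer records per poll (available since 0.10.0)
        props.put("session.timeout.ms", "30000");  // on 0.10.1+, max.poll.interval.ms bounds poll gaps instead
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100); // 0.10.x signature
                for (ConsumerRecord<String, String> r : records) {
                    // process(r) -- must finish fast enough to call poll() again in time
                }
                consumer.commitAsync((offsets, e) -> {
                    if (e != null) {
                        // A CommitFailedException here usually means the group rebalanced;
                        // the records will be redelivered to the partitions' new owner.
                        System.err.println("Commit failed for " + offsets + ": " + e);
                    }
                });
            }
        }
    }
}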

How to create a Kafka topic from Java for KAFKA-2.1.1-1.2.1.1?

Deadly submitted on 2019-12-11 16:56:59
Question: I am working on a Java interface that takes user input of topic name, replication factor, and partition count to create a Kafka topic in KAFKA-2.1.1-1.2.1.1. This is code I have used from other sources, but it appears to be for a previous version of Kafka:

import kafka.admin.AdminOperationException;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.ZkConnection;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import kafka.admin.AdminUtils;
import kafka.utils
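
The ZkClient/AdminUtils route predates Kafka 0.11; for a 2.1.x broker the supported way is the AdminClient API, which talks to the brokers directly instead of ZooKeeper. A minimal sketch (the broker address and the values in main are assumptions):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicCreator {
    public static void createTopic(String name, int partitions, short replicationFactor)
            throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic(name, partitions, replicationFactor);
            // all().get() blocks until the broker confirms (or rejects) creation
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }

    public static void main(String[] args) throws Exception {
        createTopic("example-topic", 3, (short) 1);  // hypothetical user input
    }
}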

Stream data using Spark from a particular partition within Kafka topics

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-11 15:45:52
Question: I have already seen a similar question (click here), but I still want to know whether streaming data from a particular partition is possible. I have used Kafka consumer strategies in the Spark Streaming subscribe method:

ConsumerStrategies.Subscribe[String, String](topics, kafkaParams, offsets)

This is the code snippet I tried out for subscribing to a topic and partition:

val topics = Array("cdc-classic")
val topic = "cdc-classic"
val partition = 2
val offsets = Map(new TopicPartition(topic, partition) ->
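
Reading from a single partition is possible: instead of Subscribe (which takes topic names and lets Kafka assign all partitions), use ConsumerStrategies.Assign, which takes explicit TopicPartitions. The question's snippet is Scala; a minimal Java sketch of the same idea, keeping the topic name and partition from the question (group id, starting offset, and batch interval are assumptions):

import java.util.*;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;
import org.apache.spark.streaming.kafka010.*;

public class SinglePartitionStream {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("cdc").setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaParams.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaParams.put("group.id", "cdc-group");

        // Assign pins the stream to exactly these partitions -- here partition 2 only
        TopicPartition tp = new TopicPartition("cdc-classic", 2);
        Map<TopicPartition, Long> offsets = Collections.singletonMap(tp, 0L);

        JavaInputDStream<ConsumerRecord<String, String>> stream =
            KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Assign(
                    Collections.singletonList(tp), kafkaParams, offsets));

        stream.foreachRDD(rdd -> System.out.println("records in batch: " + rdd.count()));
        jssc.start();
        jssc.awaitTermination();
    }
}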

Apache Kafka - consumer delay option

为君一笑 submitted on 2019-12-11 15:14:50
Question: I want to start a Kafka consumer for a particular topic with a small delay. In detail, I want the consumer to start consuming messages from the topic only after a particular time delay from when the messages were produced. Can anyone tell me whether there is any property or option in Kafka to enable this? Thanks in advance.

Answer 1: We did the same thing for spark-streaming; I hope the approach suits you as well. The idea is very simple: use Thread.sleep. When you receive a new message from Kafka, you
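
A minimal sketch of that idea with the plain Java consumer: each record carries a timestamp (0.10+), so before processing, sleep until the record is old enough. The topic, group id, and one-minute delay are made up, and note the caveat in the comment about the poll interval:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;

public class DelayedConsumer {
    private static final long DELAY_MS = 60_000;  // process messages only once they are 1 minute old

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "delayed-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
                    long age = System.currentTimeMillis() - r.timestamp();
                    if (age < DELAY_MS) {
                        // CAUTION: sleeping longer than max.poll.interval.ms triggers a
                        // rebalance, so the delay must stay within that budget.
                        Thread.sleep(DELAY_MS - age);
                    }
                    System.out.println("processing " + r.value());
                }
            }
        }
    }
}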

Messages lost between Apache Kafka consumer stop and start

被刻印的时光 ゝ submitted on 2019-12-11 15:14:37
Question: I am new to Kafka and am using the Apache Kafka consumer to read messages from a producer. But when I stop the consumer and restart it some time later, all the messages produced in between are lost. How do I handle this scenario? I am using the properties "auto.offset.reset" = "latest" and "enable.auto.commit" = "false". This is the code I am using; any help is appreciated:

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "service");
props.put("enable.auto.commit",
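
With enable.auto.commit=false and no manual commit, the group never stores an offset, so on every restart auto.offset.reset=latest jumps straight to the log end and skips whatever was produced while the consumer was down. The usual fix is to reset to earliest and commit after processing. A minimal sketch continuing the question's properties (the topic name is hypothetical):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;

public class ResumingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "service");
        props.put("enable.auto.commit", "false");
        props.put("auto.offset.reset", "earliest");  // only consulted when no committed offset exists
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.println("processing " + r.value());
                }
                if (!records.isEmpty()) {
                    consumer.commitSync();  // persist progress so a restart resumes here
                }
            }
        }
    }
}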

Kafka - Know if Consumer is up to date

柔情痞子 submitted on 2019-12-11 11:58:56
Question: I am using Kafka 0.9.0 with the native Java consumer client. If I have one topic with one partition, can someone tell me: if I do seekToEnd(MyTopic); poll(x); will I only get the last record, and hence know that I am at the last position?

Answer 1: Yes, you will only get the last (newest) record, because the seekToEnd() method "evaluates lazily", so the end is not calculated until poll() is called. Of course, by the time the poll() method returns, more messages could have been added; so there is no
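
Building on that, a small helper can report whether the consumer has caught up without consuming anything, to be called from wherever the consumer loop lives. A minimal sketch against newer clients; on 0.9 seekToEnd takes varargs rather than a Collection, and on 0.10.1+ endOffsets() gives the same answer without moving the position at all:

import java.util.Collections;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class LagCheck {
    // Returns true if the consumer's current position is at the log end.
    // Note: this is only a snapshot; new records may arrive immediately after.
    static boolean caughtUp(KafkaConsumer<String, String> consumer, TopicPartition tp) {
        long current = consumer.position(tp);
        consumer.seekToEnd(Collections.singletonList(tp));  // lazy: resolved by the next position() call
        long end = consumer.position(tp);
        consumer.seek(tp, current);                         // restore where we were
        return current >= end;
    }
}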

Does Kafka lose messages if the consumer holds a message longer than the auto-commit interval?

拈花ヽ惹草 submitted on 2019-12-11 09:03:28
Question: Say the auto-commit interval is 30 seconds, and the consumer for some reason cannot process a message, holds it longer than 30 seconds, and then crashes. Does the auto-commit mechanism commit this offset anyway right before the consumer crashes? If my assumption is correct, is the message lost, since its offset was committed but the message itself was never processed?

Answer 1: Let's say your consumer group name is Test and you have a single consumer in the consumer group. When auto-commit is
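
The defensive pattern for slow processing is to disable auto-commit (which, in the Java consumer, is performed inside poll()) and commit each offset only after the record is fully processed, so a crash causes redelivery rather than loss. A minimal sketch, reusing the answer's group name Test with a hypothetical topic:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

public class AtLeastOnceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "Test");
        props.put("enable.auto.commit", "false");  // nothing is committed until processing succeeds
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
                    process(r);  // may take minutes; a crash here means redelivery, not loss
                    // commit exactly this record's offset once it is done
                    consumer.commitSync(Collections.singletonMap(
                        new TopicPartition(r.topic(), r.partition()),
                        new OffsetAndMetadata(r.offset() + 1)));
                }
            }
        }
    }

    static void process(ConsumerRecord<String, String> r) { /* application logic */ }
}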