kafka-consumer-api

Kafka how to read from __consumer_offsets topic

不问归期 submitted on 2019-12-17 07:19:52
Question: I'm trying to find out which offsets my current high-level consumers are working off. I use Kafka 0.8.2.1, with no "offset.storage" set in Kafka's server.properties, which I think means that offsets are stored in Kafka. (I also verified that no offsets are stored in ZooKeeper by checking this path in the ZooKeeper shell: /consumers/consumer_group_name/offsets/topic_name/partition_number.) I tried to listen to the __consumer_offsets topic to see which consumer saves what value of offsets, but
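
One way to watch the internal topic on a 0.8.2.x broker is the console consumer with the offsets message formatter. A sketch, not taken from the question itself: the formatter class below is the 0.8.2.x name (0.9+ brokers renamed it to kafka.coordinator.GroupMetadataManager$OffsetsMessageFormatter), and config/consumer.properties is assumed to contain exclude.internal.topics=false:

    kafka-console-consumer --zookeeper localhost:2181 --topic __consumer_offsets \
      --from-beginning --consumer.config config/consumer.properties \
      --formatter "kafka.server.OffsetManager\$OffsetsMessageFormatter"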

Difference between session.timeout.ms and max.poll.interval.ms for Kafka 0.10.0.0 and later versions

强颜欢笑 submitted on 2019-12-17 06:25:28
Question: I am unclear why we need both session.timeout.ms and max.poll.interval.ms, and when we would use one or the other, or both. Both settings seem to indicate the upper bound on how long the coordinator will wait for a heartbeat from a consumer before assuming it is dead. Also, how does this behave for versions 0.10.1.0+ based on KIP-62? Answer 1: Before KIP-62, there was only session.timeout.ms (i.e., Kafka 0.10.0 and earlier). max.poll.interval.ms was introduced via KIP-62 (part of Kafka 0.10.1).
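
The distinction shows up directly in consumer configuration. A minimal sketch, assuming a plain Java consumer with placeholder broker, group, and topic names: since KIP-62, heartbeats come from a background thread, so session.timeout.ms only detects a dead or unreachable process, while max.poll.interval.ms bounds the time the application may spend between two calls to poll():

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class TimeoutConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "my-group");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            // Heartbeats come from a background thread (KIP-62): if none
            // arrives within this window, the broker declares the consumer
            // dead (detects a crashed or unreachable process).
            props.put("session.timeout.ms", "10000");
            // Upper bound on the time between two poll() calls: exceeding it
            // makes the consumer leave the group even though heartbeats still
            // flow, which catches stuck or slow record processing.
            props.put("max.poll.interval.ms", "300000");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                consumer.poll(1000); // one fetch, just to exercise the settings
            }
        }
    }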

Kafka Failed to update metadata

醉酒当歌 submitted on 2019-12-14 03:53:14
Question: I am using Kafka v0.10.1.1 with Spring Boot. I am trying to produce a message to a Kafka topic named mobile-user using the producer code below. The topic mobile-user has 5 partitions and a replication factor of 2. I have attached my Kafka settings at the end of the question.

    package com.service;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka
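
For context, a minimal sketch of the kind of KafkaTemplate-based service the question describes; the class, method, and wiring here are assumptions, not the asker's actual code. "Failed to update metadata" is typically thrown from send() when the producer cannot fetch topic metadata from any broker within max.block.ms:

    package com.service;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;

    @Service
    public class MobileUserProducer {

        private static final Logger LOG =
            LoggerFactory.getLogger(MobileUserProducer.class);

        @Autowired
        private KafkaTemplate<String, String> kafkaTemplate;

        public void send(String message) {
            // A TimeoutException ("Failed to update metadata after ... ms")
            // surfaces here when no broker is reachable: wrong
            // bootstrap.servers, a firewall, or advertised.listeners pointing
            // at an unreachable host are the usual suspects.
            kafkaTemplate.send("mobile-user", message);
            LOG.info("Message sent to topic mobile-user");
        }
    }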

Not able to consume messages from remote machine in Kafka

扶醉桌前 submitted on 2019-12-14 02:01:34
Question: I have created a Kafka topic named test-poc on one of my machines, which has the IP 192.168.25.50. Then, using kafka-console-producer, I produced messages like the following:

    kafka-console-producer --broker-list localhost:9092 --topic test-poc
    >test message1
    >test message2

After that, I downloaded Kafka on another machine and tried to consume with the following command:

    kafka-console-consumer --bootstrap-server 192.168.25.50:9092 --topic test-poc --from-beginning

where 192.168.25.50 is the IP
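
A likely culprit, offered as an assumption since the question is cut off: by default the broker advertises localhost, so a remote consumer is told to connect to an address that only resolves on the broker's own machine. The usual fix is to set the advertised listener in config/server.properties on 192.168.25.50 and restart the broker:

    # Bind on all interfaces, but tell clients to connect via the LAN address.
    listeners=PLAINTEXT://0.0.0.0:9092
    advertised.listeners=PLAINTEXT://192.168.25.50:9092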

How does Kafka guarantee message ordering as processed by consumers across partitions?

倖福魔咒の submitted on 2019-12-13 16:17:12
Question: Source: https://kafka.apache.org/intro "By having a notion of parallelism—the partition—within the topics, Kafka is able to provide both ordering guarantees and load balancing over a pool of consumer processes. This is achieved by assigning the partitions in the topic to the consumers in the consumer group so that each partition is consumed by exactly one consumer in the group. By doing this we ensure that the consumer is the only reader of that partition and consumes the data in order."
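
The guarantee in that quote is per partition, not per topic. A short illustration (topic, keys, and values are made up): records with the same key hash to the same partition, so the single consumer owning that partition reads them in order, while records with different keys may interleave:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class KeyedOrderingSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Same key => same partition => consumed in this order.
                producer.send(new ProducerRecord<>("events", "user-42", "login"));
                producer.send(new ProducerRecord<>("events", "user-42", "purchase"));
                // A different key may land on another partition; its ordering
                // relative to user-42's events is not guaranteed.
                producer.send(new ProducerRecord<>("events", "user-7", "login"));
            }
        }
    }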

Spark Direct Streaming - consume same message in multiple consumers

余生长醉 submitted on 2019-12-13 07:03:20
Question: How can I consume Kafka topic messages in multiple consumers using the Direct Stream approach? Is it possible, given that the Direct Stream approach doesn't have the consumer-group concept? What happens if I pass group.id in kafkaParams to the DirectStream method? The code below works with group.id in the Kafka params and also without it. Sample code:

    val kafkaParams = Map(
      "group.id" -> "group1",
      CommonClientConfigs.SECURITY_PROTOCOL_CONFIG -> sasl,
      ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> "org.apache
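
For what it's worth, the kafka-0-10 direct stream integration does take a group.id: it is a regular consumer property, it keys the offsets that get committed, and two streaming applications started with different group.id values each receive every message. A sketch using the Java API of spark-streaming-kafka-0-10 (application, topic, and broker names are placeholders):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    public class DirectStreamSketch {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("direct-stream-sketch");
            JavaStreamingContext jssc =
                new JavaStreamingContext(conf, Durations.seconds(5));

            Map<String, Object> kafkaParams = new HashMap<>();
            kafkaParams.put("bootstrap.servers", "localhost:9092");
            kafkaParams.put("key.deserializer", StringDeserializer.class);
            kafkaParams.put("value.deserializer", StringDeserializer.class);
            // Each distinct group.id gets its own copy of the topic's messages.
            kafkaParams.put("group.id", "group1");

            JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(
                    jssc,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, String>Subscribe(
                        Collections.singletonList("my-topic"), kafkaParams));

            stream.foreachRDD(rdd -> rdd.foreach(r -> System.out.println(r.value())));
            jssc.start();
            jssc.awaitTermination();
        }
    }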

KafkaConsumer resume partition cannot continue to receive uncommitted messages

不羁岁月 submitted on 2019-12-13 05:49:33
Question: I'm using one topic, one partition, and one consumer; the Kafka client version is 0.10. I got two different results. If I paused the partition first, then produced a message and invoked the resume method, KafkaConsumer could poll the uncommitted message successfully. But if I produced a message first without committing its offset, then paused the partition and, after several seconds, invoked the resume method, KafkaConsumer would not receive the uncommitted message. I checked it on the Kafka server using kafka
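
A sketch of the second sequence (topic name and timings are made up; the Collection-based pause/resume signatures below are from 0.10.1+ clients). The key detail is that a running consumer reads from its in-memory position, not from the last committed offset: once poll() has fetched a record, the position moves past it whether or not it was committed, and committed offsets only take effect after a restart or rebalance:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PauseResumeSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "pause-resume-demo");
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("my-topic", 0);
                consumer.assign(Collections.singleton(tp));

                // This poll may already fetch the produced record, moving the
                // in-memory position past it even though nothing was committed.
                consumer.poll(100);

                consumer.pause(Collections.singleton(tp));  // poll() now skips tp
                consumer.resume(Collections.singleton(tp));

                ConsumerRecords<String, String> records = consumer.poll(1000);
                // Empty if the position already passed the record; use
                // consumer.seek(tp, offset) to re-read it explicitly.
                System.out.println("records: " + records.count());
            }
        }
    }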

Why won't my Java consumer read the data that I have created?

喜你入骨 submitted on 2019-12-13 03:55:39
Question: I am trying to read data from a simple producer that I have made. For some reason, whenever I run the consumer, it does not see any of the data I have produced. Can anyone give me guidance on what to do next? I have included the code for my producer and consumer below. Producer:

    public class AvroProducer {

        public static void main(String[] args) {
            String bootstrapServers = "localhost:9092";
            String topic = "trackingReportsReceived";

            //create Producer properties
            Properties
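
Since the consumer code is cut off above, one common cause worth checking: a consumer group with no committed offsets starts from the latest offset by default, so a consumer launched after the producer has finished sees nothing. A minimal sketch (the group name is made up, and ByteArrayDeserializer stands in for whatever Avro deserializer matches the producer's serializer):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class TrackingConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "tracking-consumer");
            // Without this, a group with no committed offsets starts at the
            // end of the log and skips everything produced earlier.
            props.put("auto.offset.reset", "earliest");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("trackingReportsReceived"));
                ConsumerRecords<String, byte[]> records = consumer.poll(5000);
                for (ConsumerRecord<String, byte[]> record : records) {
                    System.out.printf("offset=%d, %d value bytes%n",
                        record.offset(), record.value().length);
                }
            }
        }
    }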

Kafka uncommitted message not getting consumed again

烈酒焚心 submitted on 2019-12-13 03:46:29
Question: I am processing Kafka messages and inserting them into a Kudu table using Spark Streaming with manual offset commits. Here is my code:

    val topicsSet = topics.split(",").toSet
    val kafkaParams = Map[String, Object](
      ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> brokers,
      ConsumerConfig.GROUP_ID_CONFIG -> groupId,
      ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG
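
As with the pause/resume question above, leaving an offset uncommitted does not make a running stream re-deliver the record: the consumer's position has already advanced, and committed offsets are only consulted on restart. The usual manual-commit pattern in spark-streaming-kafka-0-10, sketched here in its Java API (variable names are placeholders), commits only after the Kudu insert succeeds:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.kafka010.CanCommitOffsets;
    import org.apache.spark.streaming.kafka010.HasOffsetRanges;
    import org.apache.spark.streaming.kafka010.OffsetRange;

    public final class ManualCommitSketch {

        // 'stream' is the JavaInputDStream<ConsumerRecord<String, String>>
        // returned by KafkaUtils.createDirectStream, as in the earlier sketch.
        static void process(JavaInputDStream<ConsumerRecord<String, String>> stream) {
            stream.foreachRDD(rdd -> {
                OffsetRange[] offsetRanges =
                    ((HasOffsetRanges) rdd.rdd()).offsetRanges();

                // ... insert the batch into the Kudu table here ...

                // Commit only after the write succeeds; if it fails, offsets
                // stay uncommitted and the batch is reprocessed after a
                // restart. A running stream will not re-deliver it on its own.
                ((CanCommitOffsets) stream.inputDStream()).commitAsync(offsetRanges);
            });
        }
    }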