kafka-consumer-api

Unexpected behaviour of NotEnoughReplicasException with min.insync.replicas

杀马特。学长 韩版系。学妹 submitted on 2019-12-13 03:25:52
Question: This is a continuation of my previous question. I was experimenting with Kafka's min.insync.replicas, and here is the summary. Setup: 3 brokers running locally; created a topic insync with min.insync.replicas=2. Messages were produced by kafka-console-producer with acks=all and read by kafka-console-consumer. I brought down 2 brokers, leaving just 1 in-sync replica, and was expecting an exception in the producer, as mentioned here and here. But it never happened: the producer kept producing messages and the consumer was…
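The producer-side behaviour can be reproduced from the command line. A minimal sketch, assuming three local brokers (topic name and ports are illustrative, and flags vary slightly between Kafka CLI versions):

```shell
# Create the topic with a durability floor of 2 in-sync replicas
kafka-topics --bootstrap-server localhost:9092 --create --topic insync \
  --partitions 1 --replication-factor 3 \
  --config min.insync.replicas=2

# Produce with acks=all; once two brokers are down, sends should fail
# with NotEnoughReplicasException instead of being acknowledged
kafka-console-producer --broker-list localhost:9092 --topic insync \
  --producer-property acks=all
```

Note that with the default acks=1 the lone surviving leader still acknowledges writes on its own, which is the usual reason the expected exception never appears.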

Consumer group member has no partition

送分小仙女 submitted on 2019-12-13 03:24:11
Question: I launch two consumers in the same consumer group and subscribe to 20 topics (each has only one partition). Only one consumer is used: kafka-consumer-groups --bootstrap-server XXXXX:9092 --group foo --describe --members --verbose Note: This will not show information about old Zookeeper-based consumers. CONSUMER-ID HOST CLIENT-ID #PARTITIONS ASSIGNMENT rdkafka-07cbd673-6a16-4d55-9625-7f0925866540 /xxxxx rdkafka 20 arretsBus(0), capteurMeteo(0), capteurPointMesure(0), chantier(0), coworking(0), …
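One likely explanation, assuming default settings, is the range assignor: it assigns partitions per topic, so partition 0 of every single-partition topic goes to the first member, and one consumer ends up with all 20. Since the CONSUMER-ID shows rdkafka, a round-robin strategy can be requested in the client configuration; a sketch (property name per librdkafka; verify against your client's docs):

```properties
# librdkafka consumer configuration -- spread single-partition topics
# across members instead of assigning them all to the first one
partition.assignment.strategy=roundrobin
```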

How to get consumer Kafka lag in java

佐手、 submitted on 2019-12-13 03:24:09
Question: I have a producer in Java and a consumer in Node.js. I want to know, in Java, what the consumer lag is, so that I know whether I can produce more data to the topic. What is the Java API to get the consumer lag? Answer 1: Why do you need to know the consumer lag? The aim of a broker is to deliver messages asynchronously; if you need synchronous processing, use a plain REST call instead. Answer 2: The actual class you can call from Java is kafka.admin.ConsumerGroupCommand. It's Scala code, but it's…
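Conceptually, lag per partition is just the log-end offset minus the group's last committed offset. A minimal pure-Python sketch of the arithmetic (the topic name and offset numbers are made up; in real Java code the two maps would come from the admin/consumer APIs):

```python
def consumer_lag(end_offsets, committed_offsets):
    """Per-partition lag = log-end offset - committed offset (floored at 0)."""
    return {
        tp: max(end_offsets[tp] - committed_offsets.get(tp, 0), 0)
        for tp in end_offsets
    }

# Illustrative offsets for two partitions of a topic
end = {("mytopic", 0): 1500, ("mytopic", 1): 980}
committed = {("mytopic", 0): 1400, ("mytopic", 1): 980}
print(consumer_lag(end, committed))  # {('mytopic', 0): 100, ('mytopic', 1): 0}
```

A growing lag means the consumer is falling behind production; zero lag means it is fully caught up.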

Confluent Control Center Interceptor

梦想的初衷 submitted on 2019-12-13 03:19:02
Question: How do I add the Confluent Control Center interceptor to an existing S3 (sink) connector, in order to monitor the sink? I am looking for documentation; any help is appreciated. Answer 1: To be absolutely clear, you need interceptors on both your sink and your source. If you don't, you can't monitor your pipelines with Confluent Control Center as it stands today. To enable interceptors in Kafka Connect, add to the worker properties file: consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor…
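Per the answer, interceptors are configured in the Connect worker properties: on the consumer side for sink connectors and the producer side for source connectors. A sketch of the pair of properties (class names from Confluent's monitoring interceptors package; verify them against your installed version, and make sure the interceptor JAR is on the worker's classpath):

```properties
# Kafka Connect worker properties
consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
```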

Consumer consuming the same message twice, at the start only

让人想犯罪 __ submitted on 2019-12-13 03:18:12
Question: On the very first consumption, my consumer consumes the same message twice; this only happens the first time, and after that each message is consumed once. The consumer configuration code is attached below; please check it for corrections: def __init__(self, group_id, topic='default', bootstrap_servers=['localhost:9092']): self.topic = topic self.bootstrap_servers = bootstrap_servers self.group_id = group_id self.consumer = KafkaConsumer(self.topic, bootstrap_servers=self.bootstrap_servers, …
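Duplicates at startup usually mean the group's committed offset is behind the last record actually processed (for example, auto-commit had not fired before the previous run ended), so Kafka's at-least-once delivery redelivers one message. Independent of the client library, the standard defence is idempotent processing; a minimal sketch that dedupes by (partition, offset):

```python
def dedupe_records(records, seen=None):
    """Return records whose (partition, offset) has not been processed yet."""
    seen = set() if seen is None else seen
    fresh = []
    for partition, offset, value in records:
        if (partition, offset) not in seen:
            seen.add((partition, offset))
            fresh.append((partition, offset, value))
    return fresh

batch = [(0, 41, "a"), (0, 42, "b"), (0, 42, "b")]  # offset 42 redelivered
print(dedupe_records(batch))  # [(0, 41, 'a'), (0, 42, 'b')]
```

For a durable fix across restarts, the seen set would have to be persisted alongside the processed results, or the processing itself made naturally idempotent.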

How can I make a consumer request more than 1 MB of records from Kafka?

偶尔善良 submitted on 2019-12-12 22:50:56
Question: Whenever my consumer requests a new batch from Kafka, it always requests 1 MB of data, then the next 1 MB, and so forth. Does anybody know the configuration and programming steps to receive batches of 20 MB? Answer 1: You can set the property max.partition.fetch.bytes in the consumer properties to the value you desire (the default is 1 MB). This value must also be equal to or greater than the message.max.bytes property in the broker configuration, to be sure that your…
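Putting the answer together, a 20 MB fetch needs consistent limits on both the consumer and the broker. A sketch of the relevant properties, assuming 20 MB = 20971520 bytes (check the exact property names against your broker version):

```properties
# consumer properties -- per-partition and total fetch limits
max.partition.fetch.bytes=20971520
fetch.max.bytes=20971520

# broker server.properties -- the broker must accept messages this large
message.max.bytes=20971520
```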

How to consume data from a Kafka topic from a specific offset to a specific offset?

孤者浪人 submitted on 2019-12-12 16:34:36
Question: I need to consume from a specific offset to a specific end offset. consumer.seek() reads the data from a specific offset, but I need to retrieve the data from a given start offset to a given end offset. Any help will be appreciated; thanks in advance. ConsumerRecords<String, String> records = consumer.poll(100); if (flag) { consumer.seek(new TopicPartition("topic-1", 0), 90); flag = false; } Answer 1: To read messages from a start offset to an end offset, you first need to use seek() to move the consumer to the desired starting…
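The control flow of the answer can be sketched without a live broker: after seeking to the start offset, keep polling and stop once a record past the end offset is seen. A pure-Python stand-in, where records is an ordered list of (offset, value) pairs playing the role of poll() results:

```python
def consume_between(records, from_offset, to_offset):
    """Return values with from_offset <= offset <= to_offset, in order,
    stopping as soon as to_offset is passed (mirrors breaking out of
    the poll() loop in the real consumer)."""
    out = []
    for offset, value in records:
        if offset > to_offset:
            break  # past the end offset: stop consuming
        if offset >= from_offset:
            out.append(value)
    return out

log = [(o, f"msg-{o}") for o in range(85, 100)]
print(consume_between(log, 90, 93))  # ['msg-90', 'msg-91', 'msg-92', 'msg-93']
```

In the real consumer, seek() makes the "skip until from_offset" part unnecessary; the essential piece is checking each record's offset against the end offset and breaking out of the loop.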

Kafka consumer group offset retention

女生的网名这么多〃 submitted on 2019-12-12 11:32:08
Question: How long does Kafka store the offsets of a consumer group after all consumers in that group fail? Is there a configuration variable for this? Answer 1: The right property name is offsets.retention.minutes, from https://kafka.apache.org/documentation/#brokerconfigs Answer 2: The value can be configured in the Kafka broker using offsets.retention.minutes; the default is 24 hours. See the Kafka broker config docs. Answer 3: I added the following property to the Kafka configuration; it changed the offset retention time to 7…
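Assuming the truncated third answer means 7 days, the setting would look like this (7 days × 24 h × 60 min = 10080 minutes):

```properties
# broker server.properties -- keep committed group offsets for 7 days
offsets.retention.minutes=10080
```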

How do I view the messages consumed from Kafka in NiFi?

十年热恋 submitted on 2019-12-12 10:21:55
Question: I have started a NiFi processor (ConsumeKafka) and connected it to a topic. It is running, but I cannot find where I can view the messages. Answer 1: The ConsumeKafka processor runs and generates a flowfile for each message. Only when you connect a processor to other components, such as another processor or an output port, will you be able to visualize the data being moved through. For starters you can try this: connect ConsumeKafka to LogAttribute, or any other processor for that matter. Stop…

Conditions in which Kafka Consumer (Group) triggers a rebalance

僤鯓⒐⒋嵵緔 submitted on 2019-12-12 08:50:00
Question: I was going through the consumer config for Kafka: https://kafka.apache.org/documentation/#newconsumerconfigs Which parameters will trigger a rebalance? For instance, will the following one? Are there any other parameters we need to change, or will the defaults suffice? connections.max.idle.ms: close idle connections after the number of milliseconds specified by this config (type long, default 540000, importance medium). Also, we have three different topics. Is it a bad idea to have the same consumer group (same…
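connections.max.idle.ms does not itself trigger a rebalance; rebalances are driven by group membership changes (a consumer joining, leaving, or subscribing differently) and by a member missing its liveness deadlines. The group-coordination parameters usually worth reviewing are sketched below (the default values shown should be verified against your Kafka release):

```properties
# consumer properties governing when the coordinator evicts a member
session.timeout.ms=10000       # no heartbeat for this long -> member evicted, rebalance
heartbeat.interval.ms=3000     # how often heartbeats are sent to the coordinator
max.poll.interval.ms=300000    # max gap between poll() calls before eviction
```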