kafka-consumer-api

Kafka streams.allMetadata() method returns empty list

倖福魔咒の · Submitted on 2019-12-10 13:38:30
Question: So I am trying to get interactive queries working with Kafka Streams. I have Zookeeper and Kafka running locally (on Windows), using C:\temp as the storage folder for both Zookeeper and Kafka. I set up the topics like this:

kafka-topics.bat --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic rating-submit-topic
kafka-topics.bat --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic rating-output-topic

Reading I have done: …
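A frequent cause of an empty allMetadata() list is a missing "application.server" setting: without a host:port for each instance, there is no queryable endpoint to report (the list is also empty until the Streams instance has actually started). A minimal sketch of the relevant properties, using plain string keys and a hypothetical application id and port, so it runs without kafka-clients on the classpath:

```java
import java.util.Properties;

public class StreamsMetadataConfig {
    // Builds Streams properties needed for interactive queries.
    // "application.server" is the key piece: without it, instances expose
    // no queryable endpoint and allMetadata() can come back empty.
    public static Properties build(String host, int port) {
        Properties props = new Properties();
        props.put("application.id", "rating-app");           // hypothetical app id
        props.put("bootstrap.servers", "localhost:9092");
        props.put("state.dir", "C:\\temp\\state");           // matches the C:\temp layout above
        props.put("application.server", host + ":" + port);  // endpoint this instance answers on
        return props;
    }
}
```

These properties would then be passed to the KafkaStreams constructor; allMetadata() should be queried only after start() has been called.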

Does Apache Kafka provide an asynchronous subscription callback API?

白昼怎懂夜的黑 · Submitted on 2019-12-10 13:20:06
Question: My project is looking at Apache Kafka as a potential replacement for an aging JMS-based messaging approach. To make this transition as smooth as possible, it would be ideal if the replacement queuing system (Kafka) had an asynchronous subscription mechanism, similar to our current project's JMS mechanism of using MessageListener and MessageConsumer to subscribe to topics and receive asynchronous notifications. I don't care so much if Kafka doesn't strictly conform to the JMS API, but …
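Kafka's consumer API is pull-based rather than callback-based, but a JMS-style listener can be emulated by running the poll loop on a dedicated thread and handing each record to a callback. A minimal sketch of that pattern, with a BlockingQueue standing in for KafkaConsumer.poll() so it runs without a broker:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class CallbackPoller {
    // The callback plays the role of MessageListener.onMessage; the thread
    // plays the role of the broker push that JMS provides out of the box.
    private final BlockingQueue<String> source;   // stand-in for KafkaConsumer
    private final Consumer<String> listener;
    private volatile boolean running = true;
    private final Thread thread;

    public CallbackPoller(BlockingQueue<String> source, Consumer<String> listener) {
        this.source = source;
        this.listener = listener;
        this.thread = new Thread(this::pollLoop, "poller");
    }

    private void pollLoop() {
        while (running) {
            try {
                // Analogous to consumer.poll(100) in the real API.
                String record = source.poll(100, TimeUnit.MILLISECONDS);
                if (record != null) listener.accept(record);  // async notification
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public void start() { thread.start(); }
    public void stop() throws InterruptedException { running = false; thread.join(); }
}
```

With the real KafkaConsumer the loop body is the same shape: poll, dispatch each ConsumerRecord to the callback, commit, repeat. Note that KafkaConsumer itself is not thread-safe, so all consumer calls stay on the polling thread.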

Kafka Offset after retention period

梦想的初衷 · Submitted on 2019-12-10 13:01:06
Question: I have a Kafka topic with 1 partition. If it had 100 messages in it, the offsets would run from 0 to 99. According to the Kafka retention policy, all of the messages will be wiped out after the specified period. Suppose I send 100 new messages to the topic once all have been wiped out (after the retention period). Where would the offsets of the new messages start from: 100 or 0? In other words, will the new offsets be 100-199 or 0-99?

Answer 1: Kafka honors the log …
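The short answer is 100-199: offsets in a partition are monotonically increasing, and retention only advances the log start offset; it never rewinds the next offset to 0. A toy model of one partition's bookkeeping makes the behavior concrete:

```java
public class OffsetAfterRetention {
    // Toy model of one partition. Kafka assigns each record a monotonically
    // increasing offset; retention moves the log start offset forward but
    // never resets the offset that the next record will receive.
    private long logStartOffset = 0;  // first offset still readable
    private long nextOffset = 0;      // offset the next record will get

    public long append() { return nextOffset++; }

    // Simulates retention expiring every record currently in the log.
    public void expireAll() { logStartOffset = nextOffset; }

    public long logStartOffset() { return logStartOffset; }
    public long nextOffset() { return nextOffset; }
}
```

After 100 appends (offsets 0-99) and a full retention wipe, the next append gets offset 100, so the second batch of 100 messages occupies 100-199.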

How to find the offset range for a topic-partition in Kafka 0.10?

故事扮演 · Submitted on 2019-12-10 10:37:47
Question: I'm using Kafka 0.10.0. Before processing, I want to know the number of records in a partition. In version 0.9.0.1, I used to find the difference between the latest and earliest offsets for a partition using the code below. In the new version, it gets stuck when calling the consumer#position method.

package org.apache.kafka.example.utils; import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util …
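Whichever consumer version is used, the record count per partition is just the difference between the latest and earliest offsets (in 0.10.0, seekToBeginning/seekToEnd are lazy and position() blocks until the consumer can actually fetch, which is the usual place this hangs; from 0.10.1 on, beginningOffsets()/endOffsets() return the same numbers directly). A sketch of the final computation, with plain maps keyed by partition id standing in for TopicPartition so it runs without kafka-clients:

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetRanges {
    // Given earliest and latest offsets per partition (from
    // seekToBeginning()/position() and seekToEnd()/position(), or from
    // beginningOffsets()/endOffsets() on 0.10.1+), the record count is
    // simply the difference per partition.
    public static Map<Integer, Long> counts(Map<Integer, Long> earliest,
                                            Map<Integer, Long> latest) {
        Map<Integer, Long> result = new HashMap<>();
        for (Map.Entry<Integer, Long> e : latest.entrySet()) {
            long begin = earliest.getOrDefault(e.getKey(), 0L);
            result.put(e.getKey(), e.getValue() - begin);
        }
        return result;
    }
}
```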

Kafka CommitFailedException consumer exception

十年热恋 · Submitted on 2019-12-10 03:19:11
Question: After creating multiple consumers (using the Kafka 0.9 Java API) and starting each thread, I'm getting the following exception:

Consumer has failed with exception: org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed due to group rebalance
class com.messagehub.consumer.Consumer is shutting down.
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed due to group rebalance
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator …
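A CommitFailedException blaming a group rebalance typically means a consumer spent too long between poll() calls, so the group evicted it before the commit landed. The usual remedies are to process fewer records per poll and give the group a longer timeout. A sketch of the relevant consumer properties, with plain string keys and illustrative values (the group id is hypothetical; max.poll.records arrived in 0.10, so on 0.9 the main knob is session.timeout.ms):

```java
import java.util.Properties;

public class RebalanceTuning {
    // Consumer settings commonly tuned when commits fail on rebalance:
    // cap the work done per poll() and widen the window the group allows
    // before evicting a slow member. Values here are illustrative.
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "messagehub-consumers"); // hypothetical group id
        props.put("enable.auto.commit", "false");      // commit manually after processing
        props.put("max.poll.records", "100");          // cap work per poll() (0.10+)
        props.put("session.timeout.ms", "30000");      // time allowed before eviction
        return props;
    }
}
```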

Kafka consumer offsets out of range with no configured reset policy for partitions

情到浓时终转凉″ · Submitted on 2019-12-10 01:24:07
Question: I'm receiving an exception when starting a Kafka consumer:

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions {test-0=29898318}

I'm using Kafka version 0.9.0.0 with Java 7.

Answer 1: You are trying to access offset 29898318 in topic test, partition 0, which is not available right now. There could be two reasons for this: your topic's partition 0 may not have that many messages, or your message at offset 29898318 might have …
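As the message says, the consumer has "no configured reset policy": auto.offset.reset tells it what to do when its stored offset falls outside the partition's current range. A sketch of the relevant properties, using plain string keys and a hypothetical group id:

```java
import java.util.Properties;

public class ResetPolicyConfig {
    // With auto.offset.reset unset to "none", an out-of-range offset is
    // fatal. "earliest" restarts from the oldest available record;
    // "latest" skips ahead to the newest.
    public static Properties withReset(String policy) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-consumer");  // hypothetical group id
        props.put("auto.offset.reset", policy);  // "earliest", "latest", or "none"
        return props;
    }
}
```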

How to configure Kafka topics so an interconnected entity schema can be consumed in the form of events by databases like RDMS and graph

Deadly · Submitted on 2019-12-09 13:51:32
Question: I have a case where Information objects contain Element objects. If I store an Information object, it will try to find preexisting Element objects based on a unique value field, and otherwise insert them. Information objects and Element objects can't be deleted for now. Adding a parent requires two preexisting Element objects. I was planning to use three topics, CreateElement, CreateInformation, and AddParentOfElement, for the events Created Element Event, Created Information Event, and Added …

Kafka consumer stuck in (Re-)joining group

♀尐吖头ヾ · Submitted on 2019-12-09 05:57:37
Question: What is the default behavior of the Kafka (version 0.10) consumer when it tries to rejoin the consumer group? I am using a single consumer for a consumer group, but it seems to be stuck rejoining. Every 10 minutes it prints the following lines in the consumer logs:

2016-08-11 13:54:53,803 INFO o.a.k.c.c.i.ConsumerCoordinator [pool-5-thread-1] Revoking previously assigned partitions [] for group image-consumer-group
2016-08-11 13:54:53,803 INFO o.a.k.c.c.i.AbstractCoordinator [pool-5-thread-1] …

Kafka console consumer ERROR “Offset commit failed on partition”

自闭症网瘾萝莉.ら · Submitted on 2019-12-09 04:41:08
Question: I am using kafka-console-consumer to probe a Kafka topic. Intermittently, I get this error message, followed by two warnings:

[2018-05-01 18:14:38,888] ERROR [Consumer clientId=consumer-1, groupId=console-consumer-56648] Offset commit failed on partition my-topic-0 at offset 444: The coordinator is not aware of this member. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2018-05-01 18:14:38,888] WARN [Consumer clientId=consumer-1, groupId=console-consumer-56648] …

Error reading field 'topics': java.nio.BufferUnderflowException in Kafka

本秂侑毒 · Submitted on 2019-12-09 01:16:49
Question: […] 9.0 client to consume messages from two brokers which are running on a remote system. My producer is working fine and is able to send messages to the broker, but my consumer is not able to consume these messages. The consumer and producer are running on my local system, and the two brokers are on AWS. Whenever I try to run the consumer, the following error appears in the broker logs:

ERROR Closing socket for /122.172.17.81 because of error (kafka.network.Processor) org.apache.kafka.common.protocol.types …