kafka-consumer-api

How to fix not receiving Kafka messages in Python while receiving the same messages in the shell?

亡梦爱人 submitted on 2019-12-02 11:28:19
I want to consume messages arriving on a Kafka topic. I am using Debezium, which tails the MongoDB oplog and pushes the changes into Kafka. From my Python code I can connect to Kafka and list the topics; however, when I try to consume the messages, I get nothing back, whereas the same topic consumed from the shell delivers messages and performs perfectly.

from kafka import KafkaConsumer

topic = "dbserver1.inventory.customers"
# consumer = KafkaConsumer(topic, bootstrap_servers='localhost:9092',
#                          auto_offset_reset='earliest', auto_commit_enable=True)
consumer = KafkaConsumer(topic)
print(…

How to specify a Java generic class dynamically

无人久伴 submitted on 2019-12-02 07:17:18
If I have a method that returns a generic class, how can I specify the type parameters of that generic class dynamically? For example:

try {
    Class c = Class.forName(keytype);
    Class d = Class.forName(valuetype);
    KafkaConsumer<c, d> consumer = new KafkaConsumer<c, d>(PropertiesUtil.getPropsObj(configPath));
    return consumer;
} catch (ClassNotFoundException e) {
    e.printStackTrace();
}

But the code above does not compile. How can I achieve this? Generic syntax matters at compile time only; none of the types in a generic class or method are available at runtime. This is called type erasure…
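The answer is truncated above. As a minimal sketch of the usual workaround (my assumption about where the answer was heading, not text from the post): because type parameters are erased at runtime, you cannot instantiate KafkaConsumer<c, d> from Class objects; instead, type the consumer with Object and hand the key/value classes to Kafka as deserializer configuration. The class and method names below are illustrative.

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DynamicConsumerFactory {

    // keytype/valuetype name deserializer classes here, not generic parameters;
    // the generic parameters themselves are gone after compilation (type erasure).
    public static KafkaConsumer<Object, Object> create(Properties props,
                                                       String keyDeserializer,
                                                       String valueDeserializer) {
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer);
        return new KafkaConsumer<>(props);
    }
}

A caller that knows the concrete types at compile time can cast the result, e.g. (KafkaConsumer<String, String>) create(...), accepting the unchecked-cast warning.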

How to check which partition a key is assigned to in Kafka?

谁说我不能喝 submitted on 2019-12-02 05:52:28
I am trying to debug an issue for which I need to prove that each distinct key goes to exactly one partition as long as the cluster is not rebalancing. So I was wondering: for a given topic, is there a way to determine which partition a key is sent to? As explained here, and also in the source code, you need the byte[] keyBytes (assuming it isn't null); then, using org.apache.kafka.common.utils.Utils, you can run the following:

Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;

For strings or JSON, the key is UTF-8 encoded, and the Utils class has helper functions for that. For Avro, such as Confluent…
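Put together as a small, self-contained program (the key and partition count below are placeholders; numPartitions must match the real partition count of your topic):

import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.utils.Utils;

public class KeyPartitionCheck {
    public static void main(String[] args) {
        String key = "customer-42";   // placeholder key
        int numPartitions = 6;        // placeholder; use the topic's actual partition count
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        // Same murmur2 hashing the default partitioner applies to a non-null key
        int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        System.out.println("key '" + key + "' -> partition " + partition);
    }
}

If this number stays stable for a fixed key and fixed partition count, that key always lands on the same partition.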

Kafka Consumers throwing java.lang.OutOfMemoryError: Direct buffer memory

让人想犯罪 __ submitted on 2019-12-02 01:52:36
I am using a single-node Kafka broker (0.10.2) and a single-node ZooKeeper (3.4.9). My consumer runs on a server with a single core and 1.5 GB of RAM. Whenever I run a process with 5 or more threads, my consumer's threads get killed after throwing these exceptions.

Exception 1

java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
    at org.apache.kafka.common.network.NetworkReceive.readFrom…
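The excerpt ends before any answer appears. One common direction for a host with only 1.5 GB of RAM (an assumption on my part, not advice from the truncated post) is to bound the receive buffers the consumer allocates per fetch, since the stack trace fails while allocating a network receive buffer. All values below are illustrative placeholders.

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LowMemoryConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "low-memory-group");        // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        // Cap the total bytes one fetch response may carry (available since 0.10.1)
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 8 * 1024 * 1024);
        // Cap bytes fetched per partition, which sizes the receive buffers
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1024 * 1024);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe and poll as usual; additionally, size each process's heap
            // to the host (e.g. -Xmx256m) rather than the JVM default
        }
    }
}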

NoBrokersAvailable: NoBrokersAvailable-Kafka Error

不打扰是莪最后的温柔 submitted on 2019-12-01 22:23:39
I have just started learning Kafka and am trying basic operations on it. I am stuck on a point concerning the 'brokers': my Kafka is running, but I get an error when I try to create a partition assignment.

from kafka import TopicPartition  # (the error occurs below)

consumer = KafkaConsumer(bootstrap_servers='localhost:1234')
consumer.assign([TopicPartition('foobar', 2)])
msg = next(consumer)

Traceback (most recent call last):
  File "", line 1, in
  File "/usr/local/lib/python2.7/dist-packages/kafka/consumer/group.py", line 284, in __init__
    self._client = KafkaClient(metrics=self._metrics, **self.config)
  File "/usr/local/lib/python2.7…

Reading the same message several times from Kafka

给你一囗甜甜゛ submitted on 2019-12-01 20:03:10
I use the Spring Kafka API to implement a Kafka consumer with manual offset management:

@KafkaListener(topics = "some_topic")
public void onMessage(@Payload Message message, Acknowledgment acknowledgment) {
    if (someCondition) {
        acknowledgment.acknowledge();
    }
}

Here, I want the consumer to commit the offset only if someCondition holds; otherwise, the consumer should sleep for some time and read the same message again. Kafka configuration:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String>…
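The configuration is cut off above. A sketch of a container factory with manual acknowledgment enabled (standard spring-kafka wiring; class locations such as ContainerProperties.AckMode vary across spring-kafka versions, and the property values are placeholders, not the original poster's):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties.AckMode;

@Configuration
public class ManualAckConfigSketch {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "manual-ack-group");        // placeholder
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);           // commit only via acknowledge()
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(props));
        // MANUAL ack mode is what hands an Acknowledgment to the @KafkaListener method
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        return factory;
    }
}

Note that skipping acknowledge() alone does not rewind the consumer: the in-memory position still advances, so re-reading the same record additionally requires seeking back to its offset (or restarting the container).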

Is Kafka consumer 0.9 backward compatible?

一曲冷凌霜 submitted on 2019-12-01 17:09:50
Is the upcoming Kafka consumer 0.9.x going to be compatible with 0.8 brokers? In other words, is it possible to switch only to the new consumer implementation without touching anything else? According to the documentation of Kafka 0.9.0, you cannot use the new consumer for reading data from 0.8.x brokers. The reason is the following: 0.9.0.0 has an inter-broker protocol change from previous versions. No. In general it's recommended to upgrade brokers before clients, since brokers target backward compatibility. The 0.9 broker will work with both the 0.8 and 0.9 consumer APIs, but not the…

Kafka __consumer_offsets growing in size

纵饮孤独 submitted on 2019-12-01 14:25:12
We are using Kafka as a strictly ordered queue, and hence a single-topic/single-partition/single-consumer-group combination is in use. I should be able to use multiple partitions later. My consumer is a Spring Boot app listener that produces to and consumes from the same topic(s), so the consumer group is fixed and there is always a single consumer. Kafka version: 0.10.1.1. In this scenario, the log file for topic-0 and a few __consumer_offsets-XX partitions grow. In fact, __consumer_offsets-XX grows very large, even though it is supposed to be cleaned periodically every 60 minutes (by default). The consumer…
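The excerpt stops before the resolution. For context, __consumer_offsets is a compacted topic: every offset commit appends a record, and disk space is reclaimed only when the log cleaner compacts closed segments. The broker settings that govern this look roughly as follows (the values shown are illustrative 0.10.x defaults, not a recommendation quoted from the post):

# server.properties (broker side)
log.cleaner.enable=true                 # the cleaner must be running for __consumer_offsets to shrink
offsets.retention.minutes=1440          # how long offsets of dead consumer groups are retained
offsets.topic.segment.bytes=104857600   # segment size; only closed (non-active) segments get compacted

Because only closed segments are compacted, a low-traffic offsets partition can appear to grow for a long time before any cleaning happens.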