kafka-consumer-api

Can multiple Kafka consumers read same message from the partition

Posted by 拥有回忆 on 2019-12-20 16:24:51
Question: We are planning to write a Kafka consumer (Java) which reads a Kafka topic to perform the action contained in each message. As the consumers run independently, will each message be processed by only one consumer at a time? Or will all the consumers process the same message, since each has its own offset in the partition? Please help me understand. Answer 1: It depends on the group ID. Suppose you have a topic with 12 partitions. If you have 2 Kafka consumers with the same group ID, they will each read 6 of the partitions.
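
A minimal sketch (not part of the answer) that makes the group-ID behaviour concrete: run two copies of the class below with the same group.id and the 12 partitions are split between them (6 each); give each copy a different group.id and both receive every message. The topic name, bootstrap server and group.id are placeholders, and the code assumes a recent kafka-clients (2.x), where poll takes a Duration.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class GroupDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("group.id", "my-group");                   // same id => partitions are shared
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));   // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                record.partition(), record.offset(), record.value());
                    }
                }
            }
        }
    }

Each instance prints only the partitions assigned to it, which is an easy way to see the 6/6 split the answer describes.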

Kafka consumer - what's the relation of consumer processes and threads with topic partitions

Posted by 生来就可爱ヽ(ⅴ<●) on 2019-12-20 11:31:50
Question: I have been working with Kafka lately and have a bit of confusion regarding the consumers under a consumer group. The center of the confusion is whether to implement consumers as processes or threads. For this question, assume I am using the high-level consumer. Let's consider a scenario that I have experimented with. In my topic there are 2 partitions (for simplicity, let's assume the replication factor is just 1). I created a consumer ( ConsumerConnector ) process consumer1 with group group1,
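
The question is cut off above; for context, here is a rough sketch of the old high-level consumer ( ConsumerConnector ) it refers to, assuming the 0.8.x kafka core API (topic name and ZooKeeper address are placeholders). A single consumer process asks createMessageStreams for N KafkaStreams and runs each stream in its own thread; with 2 partitions and 2 streams, each thread is fed by one partition.

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class HighLevelConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181");   // placeholder ZooKeeper
            props.put("group.id", "group1");

            ConsumerConnector connector =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // Ask for 2 streams for the topic; with 2 partitions, each stream is fed by one partition.
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                    connector.createMessageStreams(Collections.singletonMap("my-topic", 2));

            ExecutorService pool = Executors.newFixedThreadPool(2);
            for (final KafkaStream<byte[], byte[]> stream : streams.get("my-topic")) {
                pool.submit(new Runnable() {
                    public void run() {
                        ConsumerIterator<byte[], byte[]> it = stream.iterator();
                        while (it.hasNext()) {
                            System.out.println(new String(it.next().message()));
                        }
                    }
                });
            }
        }
    }

Whether the two streams live in one process (as here) or in two separate processes with the same group.id, the group as a whole still shares the 2 partitions across its consumers.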

How to fix not receiving kafka messages in python but receiving the same messages in shell?

Posted by Deadly on 2019-12-20 05:50:52
Question: I want to consume messages arriving on a Kafka topic. I am using Debezium, which reads the MongoDB oplog for changes and puts them onto the Kafka topic. I am able to connect to Kafka from my Python code and list the Kafka topics. However, when I want to consume the messages, it's all blank, whereas the same topic, when consumed from the shell, gives messages and performs perfectly. from kafka import KafkaConsumer topic = "dbserver1.inventory.customers" # consumer = KafkaConsumer(topic, bootstrap_servers='localhost

Kafka : Alter number of partitions for a specific topic using java

Posted by 限于喜欢 on 2019-12-20 05:16:24
Question: I am new to Kafka and working with the new KafkaProducer and KafkaConsumer, version 0.9.0.1. Is there any way in Java to alter/update the number of partitions of a specific topic after it has been created? I am not using ZooKeeper to create topics; my KafkaProducer creates topics automatically when a publish request arrives. I can also provide more details if these are not enough. Answer 1: Yes, it's possible. You have to access the AdminUtils Scala class in kafka_2.11-0.9.0.1.jar to add partitions.
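
A hedged sketch of what calling AdminUtils from Java might look like, assuming the kafka_2.11-0.9.0.1 AdminUtils/ZkUtils signatures (the topic name and ZooKeeper address are placeholders, and the exact parameter list may differ between 0.9.x releases):

    import kafka.admin.AdminUtils;
    import kafka.utils.ZkUtils;

    public class AddPartitionsSketch {
        public static void main(String[] args) {
            // Session timeout, connection timeout, and "ZooKeeper security disabled" (false).
            ZkUtils zkUtils = ZkUtils.apply("localhost:2181", 30000, 30000, false);
            try {
                // Grow "my-topic" to 4 partitions; the empty string lets Kafka pick the
                // replica assignment, and the final flag checks brokers are available
                // before applying the change.
                AdminUtils.addPartitions(zkUtils, "my-topic", 4, "", true);
            } finally {
                zkUtils.close();
            }
        }
    }

Note that this manipulates topic metadata in ZooKeeper directly, so the Kafka core jar (not just kafka-clients) has to be on the classpath.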

Kafka integration in unity3d throwing Win32Exception error

Posted by 谁都会走 on 2019-12-20 01:58:24
Question: I am trying to run a Kafka code sample in a Unity environment, and for this reason I created a consumer client (code given below). using System.Collections; using System.Collections.Generic; using UnityEngine; using Confluent.Kafka; using Confluent.Kafka.Serialization; using System.Text; public class KafkaConsumer : MonoBehaviour { // Use this for initialization void Start () { /* * The consumer application will then pick the messages from the same topic and write them to console output. *

Is kafka consumer 0.9 backward compatible?

Posted by 谁都会走 on 2019-12-19 16:34:10
Question: Is the upcoming Kafka consumer 0.9.x going to be compatible with 0.8 brokers? In other words, is it possible to switch only to the new consumer implementation, without touching anything else? Answer 1: According to the documentation of Kafka 0.9.0, you cannot use the new consumer for reading data from 0.8.x brokers. The reason is the following: 0.9.0.0 has an inter-broker protocol change from previous versions. Answer 2: No. In general, it's recommended to upgrade brokers before clients, since brokers

kafka.consumer.SimpleConsumer: Reconnect due to socket error: java.nio.channels.ClosedChannelException

Posted by 百般思念 on 2019-12-19 12:49:10
Question: I am running a simple consumer for Kafka such as this: int timeout = 80000; int bufferSize = 64*1024; consumer = new SimpleConsumer(host, port, timeout, bufferSize, clientName); This runs fine for a couple of hours, but later on I get the exception kafka.consumer.SimpleConsumer: Reconnect due to socket error: java.nio.channels.ClosedChannelException and the consumer stops ... Has anyone faced this problem before? Answer 1: A slightly different question, but perhaps with the same root cause and solution
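
The answer above is truncated. One commonly used pattern, sketched here under the assumption of the old kafka.javaapi.consumer.SimpleConsumer fetch API (host, topic, partition and offsets are placeholders), is to treat a failed fetch as a signal to close and rebuild the SimpleConsumer instead of letting the consuming thread die:

    import kafka.api.FetchRequest;
    import kafka.api.FetchRequestBuilder;
    import kafka.javaapi.FetchResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class ReconnectingFetcher {
        public static void main(String[] args) {
            String host = "localhost", clientName = "my-client", topic = "my-topic";
            int port = 9092, partition = 0, timeout = 80000, bufferSize = 64 * 1024;
            long offset = 0L;

            SimpleConsumer consumer = new SimpleConsumer(host, port, timeout, bufferSize, clientName);
            while (true) {
                try {
                    FetchRequest req = new FetchRequestBuilder()
                            .clientId(clientName)
                            .addFetch(topic, partition, offset, bufferSize)
                            .build();
                    FetchResponse resp = consumer.fetch(req);
                    if (resp.hasError()) {
                        break;  // real code would inspect the error code (e.g. offset out of range)
                    }
                    // ... iterate resp.messageSet(topic, partition) and advance `offset` here ...
                } catch (Exception e) {
                    // ClosedChannelException and other socket errors land here:
                    // drop the broken connection and build a fresh consumer.
                    consumer.close();
                    consumer = new SimpleConsumer(host, port, timeout, bufferSize, clientName);
                }
            }
            consumer.close();
        }
    }

SimpleConsumer was deprecated and eventually removed in later Kafka releases, so for new code the usual advice is to move to the org.apache.kafka.clients.consumer.KafkaConsumer API instead.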

Getting the last message sent to a kafka topic

Posted by 醉酒当歌 on 2019-12-19 09:28:31
Question: I'm new to Kafka and working on a prototype to connect a proprietary streaming service to Kafka. I'm looking to get the key of the last message sent on a topic, as our in-house stream consumer needs to log on with the ID of the last message it received when connecting. Is it possible to do this using either the KafkaProducer or a KafkaConsumer? I've attempted to do the following using a consumer, but when also running the console consumer I see messages replayed. // Poll so we know we're
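
A sketch of one way to read just the last record of each partition with the new consumer (not the poster's code; it assumes a kafka-clients version, roughly 2.x, where endOffsets takes a Collection and poll takes a Duration; topic and server are placeholders): assign the partitions manually, seek to the end offset minus one, and poll once.

    import java.time.Duration;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.PartitionInfo;
    import org.apache.kafka.common.TopicPartition;

    public class LastMessageKey {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                List<TopicPartition> partitions = new ArrayList<>();
                for (PartitionInfo p : consumer.partitionsFor("my-topic")) {   // placeholder topic
                    partitions.add(new TopicPartition("my-topic", p.partition()));
                }
                consumer.assign(partitions);   // manual assignment: no group rebalance, nothing committed

                // The end offset is the offset of the *next* record, so the last record sits at end - 1.
                Map<TopicPartition, Long> ends = consumer.endOffsets(partitions);
                for (TopicPartition tp : partitions) {
                    long end = ends.get(tp);
                    if (end > 0) {
                        consumer.seek(tp, end - 1);
                    }
                }

                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(2));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("partition=%d offset=%d key=%s%n",
                            r.partition(), r.offset(), r.key());
                }
            }
        }
    }

With more than one partition you still have to decide which per-partition last record counts as "the last message", for example by comparing record timestamps.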

Kafka __consumer_offsets growing in size

Posted by 柔情痞子 on 2019-12-19 05:07:20
Question: We are using Kafka as a strictly ordered queue, and hence a single topic / single partition / single consumer group combination is in use. I should be able to use multiple partitions later in the future. My consumer is a Spring Boot app listener that produces to and consumes from the same topic(s), so the consumer group is fixed and there is always a single consumer. Kafka version: 0.10.1.1. In such a scenario, the log file for topic-0 and a few __consumer_offsets_XX partitions grow. In fact, __consumer_offsets_XX grows very