kafka-consumer-api

Can multiple Kafka consumers read the same message from a partition?

流过昼夜 submitted on 2019-12-03 04:27:12
We are planning to write a Kafka consumer (Java) which reads a Kafka queue to perform the action contained in each message. As the consumers run independently, will each message be processed by only one consumer at a time? Or will all the consumers process the same message, since each has its own offset in the partition? Please help me understand. It depends on the Group ID. Suppose you have a topic with 12 partitions. If you have 2 Kafka consumers with the same Group ID, they will each read 6 partitions, meaning they will read different sets of partitions = different sets of messages. If you have 4 Kafka
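The split described in the answer is plain integer arithmetic. A minimal sketch, not Kafka API code (the class and method names are mine), of how many partitions each consumer in a group ends up with under an even range-style assignment:

```java
// Illustrative arithmetic only: how a topic's partitions are divided
// among the consumers of one consumer group.
public class PartitionSplit {
    // Each consumer gets at least partitions / consumers partitions;
    // the first (partitions % consumers) consumers get one extra.
    public static int partitionsForConsumer(int partitions, int consumers, int consumerIndex) {
        int base = partitions / consumers;
        int extra = partitions % consumers;
        return base + (consumerIndex < extra ? 1 : 0);
    }
}
```

With 12 partitions and 2 consumers each gets 6; with 4 consumers each gets 3; with more consumers than partitions, the surplus consumers sit idle.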

How to create topics in Apache Kafka?

孤街醉人 submitted on 2019-12-03 04:19:15
Question: What is the best way to create topics in Kafka? How many replicas/partitions should be defined when we create topics? In the new producer API, when I try to publish a message to a non-existent topic, it fails the first time and then publishes successfully. I would like to know the relationships between replicas, partitions, and the number of cluster nodes. Do we need to create topics prior to publishing messages? Answer 1: When you are starting your Kafka broker you can define a set of properties in conf/server

Kafka console consumer ERROR “Offset commit failed on partition”

对着背影说爱祢 submitted on 2019-12-03 02:27:41
I am using kafka-console-consumer to probe a Kafka topic. Intermittently, I get this error message, followed by two warnings: [2018-05-01 18:14:38,888] ERROR [Consumer clientId=consumer-1, groupId=console-consumer-56648] Offset commit failed on partition my-topic-0 at offset 444: The coordinator is not aware of this member. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator) [2018-05-01 18:14:38,888] WARN [Consumer clientId=consumer-1, groupId=console-consumer-56648] Asynchronous auto-commit of offsets {my-topic-0=OffsetAndMetadata{offset=444, metadata=''}} failed: Commit

Kafka consumer offset max value?

大憨熊 submitted on 2019-12-03 02:15:25
I was googling and reading the Kafka documentation, but I couldn't find the max value of a consumer offset or whether there is offset wraparound after the max value. I understand the offset is a signed Int64 value, so the max value is 0x7FFFFFFFFFFFFFFF. If there is wraparound, how does Kafka handle this situation? According to this post, the offset is not reset: We don't roll back offset at this moment. Since the offset is a long, it can last for a really long time. If you write 1TB a day, you can keep going for about 4 million days. Plus, you can always use more partitions (each partition has its own offset).

What is the difference between Kafka earliest and latest offset values?

烂漫一生 submitted on 2019-12-03 01:55:00
producer sends messages 1, 2, 3, 4
consumer receives messages 1, 2, 3, 4
consumer crashes/disconnects
producer sends messages 5, 6, 7
consumer comes back up and should receive messages starting from 5 instead of 7
For this kind of result, which offset value do I have to use, and what other changes/configurations are needed? When a consumer joins a consumer group it will fetch the last committed offset, so it will resume reading from 5, 6, 7 if, before crashing, it committed the latest offset (i.e. 4). The earliest and latest values for the auto.offset.reset property are used when a consumer

Error UNKNOWN_MEMBER_ID occurred while committing offsets for group xxx

牧云@^-^@ submitted on 2019-12-03 01:02:52
With the Kafka client Java library, consuming logs worked for some time, but with the following errors it no longer works: 2016-07-15 19:37:54.609 INFO 4342 --- [main] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator 2147483647 dead. 2016-07-15 19:37:54.933 ERROR 4342 --- [main] o.a.k.c.c.internals.ConsumerCoordinator : Error UNKNOWN_MEMBER_ID occurred while committing offsets for group logstash 2016-07-15 19:37:54.933 WARN 4342 --- [main] o.a.k.c.c.internals.ConsumerCoordinator : Auto offset commit failed: Commit cannot be completed due to group rebalance 2016-07-15 19

What is the difference in Kafka between a Consumer Group Coordinator and a Consumer Group Leader?

怎甘沉沦 submitted on 2019-12-02 23:16:18
I see references to Kafka Consumer Group Coordinators and Consumer Group Leaders... What is the difference? What is the benefit of separating group management into two different sets of responsibilities? Yogesh Gupta: The consumer group coordinator is one of the brokers, while the group leader is one of the consumers in a consumer group. The group coordinator is simply the broker which receives heartbeats (or message polls) from all consumers of a consumer group. Every consumer group has a group coordinator. If a consumer stops sending heartbeats, the coordinator will

How does Kafka store offsets for each topic?

≯℡__Kan透↙ submitted on 2019-12-02 19:23:57
While polling Kafka, I have subscribed to multiple topics using the subscribe() function. Now I want to set the offset from which to read each topic, without resubscribing after every seek() and poll() on a topic. Will calling seek() iteratively over each of the topic names before polling for data achieve this? How exactly are the offsets stored in Kafka? I have one partition per topic and just one consumer reading all topics. GuangshengZuo: How does Kafka store offsets for each topic? Kafka has moved offset storage from ZooKeeper to the Kafka brokers. The reason is

Kafka 0.10 Java consumer not reading message from topic

此生再无相见时 submitted on 2019-12-02 17:07:49
Question: I have a simple Java producer like below:

public class Producer {
    private final static String TOPIC = "my-example-topi8";
    private final static String BOOTSTRAP_SERVERS = "localhost:8092";

    public static void main(String[] args) throws Exception {
        Producer<String, byte[]> producer = createProducer();
        for (int i = 0; i < 3000; i++) {
            String msg = "Test Message-" + i;
            final ProducerRecord<String, byte[]> record =
                new ProducerRecord<String, byte[]>(TOPIC, "key" + i, msg.getBytes());
            producer.send(record

JBoss gives org.apache.kafka.common.KafkaException: auth.conf cannot be read

一个人想着一个人 submitted on 2019-12-02 16:35:14
Question: When I deploy the WAR of my simple Kafka project (which works fine as a JAR) in WildFly v10, I get a ZooKeeper connection exception [1]. This occurs when the Kafka listener starts to connect with ZooKeeper. [1] 15:21:58,531 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 82) MSC000001: Failed to start service jboss.deployment.unit."ratha.war".component.KafkaServiceBean.START: org.jboss.msc.service.StartException in service jboss.deployment.unit."ratha.war".component