kafka-consumer-api

Kafka consumer fetching metadata for topics failed

杀马特。学长 韩版系。学妹 submitted on 2019-11-30 12:26:34
Question: I am attempting to write a Java client for a third party's Kafka and ZooKeeper servers. I am able to list and describe topics, but when I attempt to read any, a ClosedChannelException is raised. I reproduce it here with the command-line client:

$ bin/kafka-console-consumer.sh --zookeeper 255.255.255.255:2181 --topic eventbustopic
[2015-06-02 16:23:04,375] WARN Fetching topic metadata with correlation id 0 for topics [Set(eventbustopic)] from broker [id:1,host:SOME_HOST,port:9092] failed
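As a first check that the broker's advertised address (SOME_HOST above) is actually reachable from the client, a minimal sketch that fetches the topic's partition metadata directly with the 0.9+ Java consumer; the host, port, and group.id are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MetadataCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The broker's advertised hostname must be resolvable from the client;
        // if it is not, metadata fetches fail much like the warning above.
        props.put("bootstrap.servers", "SOME_HOST:9092");
        props.put("group.id", "metadata-check");
        try (KafkaConsumer<String, String> consumer =
                 new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer())) {
            for (PartitionInfo p : consumer.partitionsFor("eventbustopic")) {
                System.out.println(p);
            }
        }
    }
}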

Kafka - Delayed Queue implementation using high level consumer

最后都变了- submitted on 2019-11-30 11:57:29
Question: I want to implement a delayed consumer using the high-level consumer API. The main idea: produce messages by key (each message contains a creation timestamp), which ensures that each partition holds messages ordered by produce time; set auto.commit.enable=false (commit explicitly after each message is processed); consume a message; check the message timestamp and whether enough time has passed; process the message (this operation will never fail); commit one offset.

while (it.hasNext()) { val msg = it.next().message() /
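Rendered into the high-level consumer's Java API, that loop might look like the following sketch; extractTimestamp, process, the topic name, and MIN_DELAY_MS are assumptions for illustration:

import java.util.Collections;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class DelayedConsumer {
    private static final long MIN_DELAY_MS = 60_000; // assumed delay window

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "delayed-consumer");        // placeholder
        props.put("auto.commit.enable", "false");         // commit manually after each message

        ConsumerConnector connector = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        KafkaStream<byte[], byte[]> stream = connector
            .createMessageStreams(Collections.singletonMap("mytopic", 1))
            .get("mytopic").get(0);

        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
            byte[] payload = it.next().message();
            long createdAt = extractTimestamp(payload); // hypothetical: reads the creation time the producer embedded
            long wait = createdAt + MIN_DELAY_MS - System.currentTimeMillis();
            if (wait > 0) {
                // Messages within a partition are ordered by creation time,
                // so blocking here also delays every later message correctly.
                Thread.sleep(wait);
            }
            process(payload);          // per the question, processing never fails
            connector.commitOffsets(); // note: commits the connector's consumed offsets, not exactly "1 offset"
        }
    }

    private static long extractTimestamp(byte[] payload) { /* hypothetical deserialization */ return 0L; }
    private static void process(byte[] payload) { /* application logic */ }
}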

Kafka Connect JDBC sink connector not working

生来就可爱ヽ(ⅴ<●) submitted on 2019-11-30 10:16:58
I am trying to use the Kafka Connect JDBC sink connector to insert data into Oracle, but it throws an error. I have tried all the possible schema configurations; below are my configuration files and the resulting errors. Please suggest if I am missing anything. Case 1 - first configuration: internal.value.converter.schemas.enable=false. I get:

[2017-08-28 16:16:26,119] INFO Sink task WorkerSinkTask{id=oracle_sink-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:233)
[2017-08-28 16:16:26,606] INFO Discovered coordinator dfw
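For reference, the JDBC sink requires records that carry a schema, because it must map fields to table columns. With the JSON converter that typically means settings along these lines; this is a sketch, not the poster's exact files:

value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
# With schemas.enable=true, each JSON message must embed its schema, e.g.:
# {"schema":{"type":"struct","fields":[{"type":"string","field":"name","optional":false}],"optional":false,"name":"test"},"payload":{"name":"example"}}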

Is key required as part of sending messages to Kafka?

耗尽温柔 submitted on 2019-11-30 10:08:19
Question:

KeyedMessage<String, byte[]> keyedMessage = new KeyedMessage<String, byte[]>(request.getRequestTopicName(), SerializationUtils.serialize(message));
producer.send(keyedMessage);

Currently I am sending messages without any key as part of keyed messages; will this still work with delete.retention.ms? Do I need to send a key as part of the message? Is it good to make the key part of the message?

Answer 1: Keys are mostly useful/necessary if you require strong ordering for a key and are developing
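For illustration, a sketch contrasting an unkeyed and a keyed send with the same old producer API; the producer, topic, and SerializationUtils usage come from the question, while the key value and imports are assumptions:

import java.io.Serializable;
import org.apache.commons.lang3.SerializationUtils;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;

class KeyExamples {
    static void send(Producer<String, byte[]> producer, String topic, Serializable message) {
        // Without a key: the old producer spreads messages across partitions,
        // so there is no per-key ordering; log compaction (cleanup.policy=compact,
        // to which delete.retention.ms applies) also needs keys to work.
        producer.send(new KeyedMessage<String, byte[]>(topic, SerializationUtils.serialize(message)));

        // With a key: every message with the same key goes to the same partition,
        // preserving order for that key. "some-key" is a placeholder.
        producer.send(new KeyedMessage<String, byte[]>(topic, "some-key", SerializationUtils.serialize(message)));
    }
}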

Why does a Kafka consumer take a long time to start consuming?

喜欢而已 submitted on 2019-11-30 09:10:51
We start a Kafka consumer listening on a topic which may not yet be created (topic auto-creation is enabled, though). Not long thereafter, a producer publishes messages on that topic. However, it takes some time for the consumer to notice this: 5 minutes, to be exact. At that point the consumer revokes its partitions and rejoins the consumer group, and Kafka re-stabilizes the group. Looking at the timestamps of the consumer logs vs. the Kafka logs, this process is initiated on the consumer side. I suppose this is expected behavior, but I would like to understand it. Is this actually a re-balancing
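The five-minute figure matches the consumer's default metadata refresh interval, metadata.max.age.ms = 300000 ms. If that refresh is indeed what triggers the rejoin here, lowering it should shorten the wait; a minimal sketch, with placeholder addresses and group:

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FastTopicDiscovery {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        // Default is 300000 ms (5 minutes); refresh metadata every 30 s instead,
        // so a newly auto-created topic is noticed much sooner.
        props.put("metadata.max.age.ms", "30000");
        KafkaConsumer<String, String> consumer =
            new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer());
    }
}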

Does Kafka support request response messaging

做~自己de王妃 submitted on 2019-11-30 08:11:12
I am investigating Kafka 0.9 as a hobby project and have completed a few "Hello World" type examples. I have got to thinking about real-world Kafka applications based on request-response messaging in general, and more specifically about how to link a Kafka request message to its response message. I was thinking along the lines of using a generated UUID as the request message key and employing this request UUID as the key of the associated response message, much the same mechanism as WebSphere MQ's message correlation id. My end-to-end process would be: 1) The Kafka client generates a random UUID and sends a
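A minimal sketch of that first step with the new Java producer; the "requests" and "responses" topic names, addresses, and plain-string payload are assumptions for illustration:

import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RequestClient {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder

        String correlationId = UUID.randomUUID().toString();
        try (KafkaProducer<String, String> producer =
                 new KafkaProducer<>(props, new StringSerializer(), new StringSerializer())) {
            // Send the request keyed by the generated UUID; the responder is
            // expected to echo this key on the response topic.
            producer.send(new ProducerRecord<>("requests", correlationId, "request payload"));
        }
        // Not shown: consume the "responses" topic and match records whose key
        // equals correlationId, analogous to WebSphere MQ's correlation id.
    }
}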

difference between groupid and consumerid in Kafka consumer

我怕爱的太早我们不能终老 submitted on 2019-11-30 06:48:43
I am new to Kafka. I noticed that the consumer configuration has two ids: group.id (mandatory) and consumer.id (not mandatory). Please explain why there are two ids and the difference between them. Consumer groups are a Kafka abstraction that enables supporting both point-to-point and publish/subscribe messaging. A consumer can join a consumer group (let us say group_1) by setting its group.id to group_1. Consumer groups are also a way of supporting parallel consumption of data, i.e. different consumers of the same consumer group consume data in parallel from different partitions. In addition to
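A minimal sketch of the group.id behaviour described above; servers, group, and topic names are placeholders. Run two copies with the same group.id and the topic's partitions are split between them; run a copy with a different group.id and it independently receives every message:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupMember {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "group_1");                 // joins consumer group group_1
        KafkaConsumer<String, String> consumer =
            new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer());
        consumer.subscribe(Collections.singletonList("mytopic")); // placeholder topic
        while (true) {
            // Each member of group_1 only sees records from its assigned partitions.
            consumer.poll(1000).forEach(r -> System.out.println(r.value()));
        }
    }
}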

Kafka bootstrap-servers vs zookeeper in kafka-console-consumer

你离开我真会死。 submitted on 2019-11-30 06:47:06
Question: I'm trying to test-run a single-node Kafka setup with 3 brokers and ZooKeeper, and I wish to test using the console tools. I run the producer as such:

kafka-console-producer --broker-list localhost:9092,localhost:9093,localhost:9094 --topic testTopic

Then I run the consumer as such:

kafka-console-consumer --zookeeper localhost:2181 --topic testTopic --from-beginning

I can enter messages in the producer and see them in the consumer, as expected. However, when I run the updated version of the consumer
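Presumably the "updated version" refers to the new consumer, which takes --bootstrap-server and talks to the brokers directly instead of going through ZooKeeper. A sketch, reusing the broker addresses above (on some older versions --new-consumer must also be passed):

kafka-console-consumer --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --topic testTopic --from-beginning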

kafka consumer to dynamically detect topics added

岁酱吖の submitted on 2019-11-30 05:42:21
Question: I'm using KafkaConsumer to consume messages from Kafka server topics. It works fine for topics created before the consumer code starts, but the problem is that it does not work for topics created dynamically (that is, after the consumer code has started), even though the API says it supports dynamic topic creation. Here is the link for reference. Kafka version used: 0.9.0.1 https://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html Here is the JAVA
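One thing worth noting: with the 0.9 Java consumer, dynamic topic discovery applies when subscribing by regex pattern rather than by a fixed topic list, and matching topics are only picked up at each metadata refresh (metadata.max.age.ms, 5 minutes by default). A minimal sketch; the pattern, group, and servers are placeholders:

import java.util.Collection;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DynamicTopics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "dynamic-group");           // placeholder
        KafkaConsumer<String, String> consumer =
            new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer());
        // Any topic matching this regex, including ones created later, is
        // subscribed to once the next metadata refresh sees it.
        consumer.subscribe(Pattern.compile("mytopic.*"), new ConsumerRebalanceListener() {
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {}
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {}
        });
        while (true) {
            consumer.poll(1000).forEach(r -> System.out.println(r.topic() + ": " + r.value()));
        }
    }
}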

Kafka consumer list

别说谁变了你拦得住时间么 submitted on 2019-11-30 05:40:34
I need to find a way to ask Kafka for a list of topics. I know I can do that using the kafka-topics.sh script included in the bin/ directory. Once I have this list, I need all the consumers per topic. I could not find a script in that directory, nor a class in the kafka-consumer-api library, that allows me to do it. The reason behind this is that I need to figure out the difference between the topic's offset and the consumers' offsets. Is there a way to achieve this? Or do I need to implement this functionality in each of my consumers? Use kafka-consumer-groups.sh. For example: bin/kafka
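For reference, a sketch of how kafka-consumer-groups.sh is typically used; the group name and addresses are placeholders, and on some versions --new-consumer must accompany --bootstrap-server. The --describe output lists, per partition, the current consumer offset, the log-end offset, and the lag between them, which is exactly the difference the question asks about:

bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
# For old ZooKeeper-based consumers:
bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --describe --group my-group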