kafka-consumer-api

Kafka batch listener incorrectly deserializing messages

断了今生、忘了曾经 submitted on 2019-12-30 11:18:18
Question: I am using batch listening with the following configuration, but my messages are deserialized incorrectly:

    @KafkaListener(
        id = "${kafka.buyers.product-sales-pricing.id}",
        topics = "${kafka.buyers.product-sales-pricing.topic}",
        groupId = "${kafka.buyers.group-id}",
        concurrency = "${kafka.buyers.concurrency}"
    )
    public void listen(
        @Payload List<String> messages,
        @Header( KafkaHeaders.RECEIVED_PARTITION_ID ) List<Integer> partitions,
        @Header( KafkaHeaders.OFFSET ) List<Long> offsets ) throws IOException {}
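A listener that takes List parameters only receives a proper batch if the container factory is in batch mode; without it, payload conversion can go wrong. The question does not show the factory, so the following is a minimal sketch, assuming Spring Kafka with String keys and values; the bootstrap address and bean wiring are illustrative:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.annotation.EnableKafka;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

    @Configuration
    @EnableKafka
    public class BatchConsumerConfig {

        @Bean
        public ConsumerFactory<String, String> consumerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            return new DefaultKafkaConsumerFactory<>(props);
        }

        @Bean
        public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<String, String> factory =
                    new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory());
            // Deliver records to the listener as a List per poll instead of one at a time.
            factory.setBatchListener(true);
            return factory;
        }
    }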

How to make a Kafka consumer read from the last consumed offset rather than from the beginning

北战南征 submitted on 2019-12-30 06:13:29
Question: I am new to Kafka and trying to understand whether there is a way to read messages from the last consumed offset, but not from the beginning. I am writing out an example case so that my intent is clear.

E.g.:
1) I produced 5 messages at 7:00 PM and the console consumer consumed them.
2) I stopped the consumer at 7:10 PM.
3) I produced 10 messages at 7:20 PM. No consumer had read those messages.
4) Now, I started the console consumer at 7:30 PM, without --from-beginning.
5) Now, it will read the messages
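Resuming from the last consumed position works when the consumer always uses the same group.id, because committed offsets are stored per group; auto.offset.reset only applies when the group has no committed offset yet. A minimal sketch, assuming a recent kafka-clients version; the address, group id, and topic are illustrative:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ResumeFromCommitted {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // illustrative
            props.put("group.id", "my-group");                // fixed group id => offsets are committed per group
            props.put("enable.auto.commit", "true");          // commit consumed offsets automatically
            props.put("auto.offset.reset", "latest");         // only used when the group has NO committed offset
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic")); // illustrative
                while (true) {
                    // On restart, the group resumes from its last committed offset,
                    // so messages produced while the consumer was down are still read.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                }
            }
        }
    }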

Kafka consumer for multiple topics

六月ゝ 毕业季﹏ submitted on 2019-12-30 03:55:28
Question: I have a list of topics (for now it's 10) whose size may increase in the future. I know we can spawn multiple threads (one per topic) to consume from each topic, but in my case, if the number of topics increases, the number of threads consuming from the topics increases as well, which I do not want, since the topics are not going to receive data too frequently, so the threads will sit idle. Is there any way to have a single consumer consume from all topics? If yes, how can we achieve it? Also how
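A single KafkaConsumer can subscribe to any number of topics at once, so one thread suffices. A minimal sketch, assuming the Java client; the address, group id, and topic names are illustrative:

    import java.time.Duration;
    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class MultiTopicConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // illustrative
            props.put("group.id", "multi-topic-group");       // illustrative
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // One consumer, one thread, many topics: subscribe() accepts a collection.
                consumer.subscribe(Arrays.asList("topic-a", "topic-b", "topic-c"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("topic=%s offset=%d value=%s%n",
                                record.topic(), record.offset(), record.value());
                    }
                }
            }
        }
    }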

Kafka 0.9.0.1 Java Consumer stuck in awaitMetadataUpdate()

≯℡__Kan透↙ submitted on 2019-12-29 09:35:12
Question: I'm trying to get a simple Kafka consumer to work using the Java API v0.9.0.1. The Kafka server I'm using is a Docker container, also running version 0.9.0.1. Below is the consumer code:

    public class Consumer {
        public static void main(String[] args) throws IOException {
            KafkaConsumer<String, String> consumer;
            try (InputStream props = Resources.getResource("consumer.props").openStream()) {
                Properties properties = new Properties();
                properties.load(props);
                consumer = new KafkaConsumer<>
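A hang in awaitMetadataUpdate() typically means the client never receives usable cluster metadata; with a Dockerized broker, a frequent culprit is the broker advertising a host/port the client cannot reach (advertised.host.name / advertised.port in 0.9-era broker configs). The sketch below is not the asker's code: it inlines the relevant client settings so they are visible, with illustrative address, group, and topic, and uses the 0.9-era poll(long):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SimpleConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // bootstrap.servers must be reachable from where this JVM runs; the
            // broker must also advertise a host the client can resolve, or the
            // client blocks waiting for usable metadata.
            props.put("bootstrap.servers", "localhost:9092"); // illustrative
            props.put("group.id", "test-group");              // illustrative
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("test-topic")); // illustrative
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100); // 0.9-style poll(long)
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }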

What happens if a Kafka Consumer instance dies?

落爺英雄遲暮 submitted on 2019-12-25 15:59:08
Question: The Kafka broker has 3 partitions, and there are 3 Kafka consumer instances. Suddenly, one consumer instance dies. I know that if a Kafka consumer instance dies, the Kafka broker rebalances and another consumer instance is allocated to that partition. I wonder if it is correct to assume that the surviving instances keep consuming the partitions they originally consumed and are additionally assigned the dead instance's partitions. (And do I have to implement ConsumerRebalanceListener in client code?) If this is the
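Rebalancing itself is automatic; a ConsumerRebalanceListener is optional and only needed to react to partition movement, e.g. to commit offsets or flush state before partitions are taken away. A sketch of such a listener, assuming the Java client; the class name and logging are illustrative:

    import java.util.Collection;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    public class LoggingRebalanceListener implements ConsumerRebalanceListener {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // Called before partitions are taken away: commit offsets / clean up here if needed.
            System.out.println("Revoked: " + partitions);
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // After a rebalance this includes any partitions inherited from a dead instance.
            System.out.println("Assigned: " + partitions);
        }
    }

    // Usage: consumer.subscribe(topics, new LoggingRebalanceListener());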

Kafka Java producer and consumer with an ACL-enabled topic

♀尐吖头ヾ submitted on 2019-12-25 09:06:39
Question: I'm a bit confused by Kafka ACL configuration, where we configure authorization for producers and consumers. There are various examples showing producing/consuming messages using the command line. Do we need any extra configuration to produce/consume messages to/from a secured Kafka topic using the Java API? Answer 1: @Apollo: This question is quite vague. If you want to learn ACL/SSL it will take some time; the link below might help you get started. https://github.com/Symantec/kafka-security-0.9 Answer 2:
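In general, the Java clients need the same security settings the CLI examples pass in a properties file. A sketch of such client properties, assuming SASL_SSL with the PLAIN mechanism and a client version that supports sasl.jaas.config (0.10.2+; older clients use a JAAS file instead); hostnames, paths, and credentials are all illustrative and must match the broker's actual security setup:

    import java.util.Properties;

    public class SecuredClientProps {
        public static Properties secured() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker:9093");   // illustrative secured port
            props.put("security.protocol", "SASL_SSL");      // must match the broker listener
            props.put("sasl.mechanism", "PLAIN");            // must match the broker's enabled mechanism
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"alice\" password=\"alice-secret\";"); // illustrative credentials
            props.put("ssl.truststore.location", "/path/to/truststore.jks"); // illustrative path
            props.put("ssl.truststore.password", "changeit");
            // The authenticated principal (e.g. User:alice) still needs ACLs:
            // WRITE on the topic for producers, READ on the topic and on the
            // consumer group for consumers.
            return props;
        }
    }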

How is the kafka offset value computed?

淺唱寂寞╮ submitted on 2019-12-25 08:03:26
Question: Is the Kafka offset value unique per partition or per topic (considering the same group id)? Answer 1: It is unique per partition. It starts from zero and is a long data type. Answer 2: It is a signed long, unique per partition, and is incremented for every message added to the partition log. Source: https://stackoverflow.com/questions/40094936/how-is-the-kafka-offset-value-computed
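To make the per-partition numbering concrete, here is a fragment (a sketch, assuming an already subscribed KafkaConsumer<String, String> named consumer and a recent kafka-clients version) that prints each record's partition and offset:

    // Offsets are per-partition counters, so records in different
    // partitions can legitimately carry the same offset value.
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("partition=%d offset=%d%n", record.partition(), record.offset());
    }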

Remotely accessing Kafka running inside kubernetes

末鹿安然 submitted on 2019-12-25 01:42:37
Question: I have a single-node Kafka broker running inside a pod in a single-node Kubernetes environment. I am using this image for Kafka: https://hub.docker.com/r/wurstmeister/kafka (Kafka version = 1.1.0). The Kubernetes cluster is running inside a VM on a server. The VM has the following IP on the active interface ens32: 192.168.3.102.

Kafka.yaml:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      namespace: casb-deployment
      name: kafkaservice
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app:

Unable to consume Kafka messages within Spring Boot

强颜欢笑 submitted on 2019-12-25 01:35:12
Question: We have a Java application which consumes Kafka messages using org.apache.kafka.clients.consumer.KafkaConsumer. We have created a Spring Boot application with a spring-kafka dependency, but are unable to read the messages within the new project. We have checked the obvious parameters, including the hostname and port of the bootstrap servers (which the logs show are recognized), the group, the topic, and that Spring Boot, like the original consumer, uses StringDeserializer. Here is our configuration
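For comparison, here is a minimal working listener, a sketch assuming Spring Boot auto-configuration via spring.kafka.* properties; topic, group, and property values are illustrative. One thing worth checking in cases like this: if the Boot app uses a new group id with no committed offsets, the default auto-offset-reset of "latest" silently skips everything produced before the app started.

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class MessageListener {

        // With spring-boot-starter and spring-kafka on the classpath, Boot
        // auto-configures the consumer from properties such as:
        //   spring.kafka.bootstrap-servers=localhost:9092
        //   spring.kafka.consumer.group-id=my-group
        //   spring.kafka.consumer.auto-offset-reset=earliest
        // (values illustrative).
        @KafkaListener(topics = "my-topic", groupId = "my-group") // illustrative names
        public void onMessage(String message) {
            System.out.println("Received: " + message);
        }
    }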

Spring Kafka : Subscribe to a new Topic Pattern during Runtime

孤人 submitted on 2019-12-25 01:15:02
Question: I'm using the @KafkaListener annotation to consume topics in my application. I need to change the topic pattern at runtime in an already running consumer so that new topics matching the new pattern can be consumed. I tried the code below, but it still consumes the topics matching the old topic pattern. Here, I have set the "old-topic-pattern" at application start-up. Then, I'm updating the pattern to "new-topic-pattern" every 10 seconds using a Spring @Scheduled task. Class
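The regex is evaluated when the consumer subscribes: new topics matching the existing pattern are picked up automatically on metadata refresh (metadata.max.age.ms), but a changed pattern does not affect an already running container. One approach is to stop the old container and start a fresh one built around the new Pattern. A sketch, assuming spring-kafka 2.2+ and an existing ConsumerFactory<String, String>; the pattern, group id, and listener body are illustrative:

    import java.util.regex.Pattern;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
    import org.springframework.kafka.listener.ContainerProperties;
    import org.springframework.kafka.listener.MessageListener;

    public class PatternSwitcher {

        public ConcurrentMessageListenerContainer<String, String> switchPattern(
                ConsumerFactory<String, String> consumerFactory,
                ConcurrentMessageListenerContainer<String, String> oldContainer) {
            // The old subscription is fixed, so stop its container first.
            if (oldContainer != null) {
                oldContainer.stop();
            }
            // Build a new container around the new regex and start it.
            ContainerProperties props = new ContainerProperties(Pattern.compile("new-topic-pattern.*"));
            props.setGroupId("pattern-group"); // illustrative
            props.setMessageListener((MessageListener<String, String>) record ->
                    System.out.println("Received: " + record.value()));
            ConcurrentMessageListenerContainer<String, String> container =
                    new ConcurrentMessageListenerContainer<>(consumerFactory, props);
            container.start();
            return container;
        }
    }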