kafka-consumer-api

Don't print the kafka-console-consumer warnings

ⅰ亾dé卋堺 submitted on 2019-12-12 08:15:00
Question: I am trying to debug encrypted messages on a Kafka cluster. Obviously these messages are full of non-printable characters and are not usable on a console, so I wanted to save the output to a file like this:

kafka-console-consumer \
  --zookeeper 0.zookeeper.local,1.zookeeper.local \
  --max-messages 1 \
  --topic MYTOPIC > /tmp/message

I am unable to decrypt the resulting message, because the output contains, along with the ciphertext, warning messages such as: [2016-02-24 11:52:47,488] WARN …
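Those WARN lines come from the tool's logging, which goes to stderr rather than the message stream, so one common fix is to redirect stderr away from the file. A minimal sketch, using a stand-in command in place of kafka-console-consumer (the function and its output are illustrative only):

```shell
# Stand-in that mimics kafka-console-consumer: message bytes on stdout,
# log4j WARN lines on stderr (hypothetical output, for illustration).
emit() {
  echo "ciphertext-bytes"
  echo "[2016-02-24 11:52:47,488] WARN some warning" >&2
}

# Redirecting stderr to /dev/null keeps only the message body in the file:
emit 2>/dev/null > /tmp/message
cat /tmp/message
```

If the warnings turn out to be written to stdout instead, the alternative is to raise the log level in the distribution's tools log4j configuration (e.g. config/tools-log4j.properties) rather than redirecting.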

Why is meta data added to the output of this Kafka connector?

自闭症网瘾萝莉.ら submitted on 2019-12-12 04:29:01
Question: I have a Kafka connector with the following code for the poll() method in the SourceTask implementation:

@Override
public List<SourceRecord> poll() throws InterruptedException {
    SomeType item = mQueue.take();
    List<SourceRecord> records = new ArrayList<>();
    SourceRecord[] sourceRecords = new SourceRecord[]{
        new SourceRecord(null, null, "data", null,
                Schema.STRING_SCHEMA, "foo",
                Schema.STRING_SCHEMA, "bar")
    };
    Collections.addAll(records, sourceRecords);
    return records;
}

If I attach a consumer …
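The schema/payload envelope seen by a consumer usually comes from the worker's converter settings rather than from the SourceRecord itself. A hedged sketch of the relevant worker properties (the keys are standard Kafka Connect settings; whether they match this setup is an assumption):

```properties
# connect-standalone.properties (or connect-distributed.properties)
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# With schemas.enable=true (JsonConverter's default), every record is
# wrapped in a {"schema": ..., "payload": ...} envelope; disabling it
# makes the connector emit the bare value instead.
key.converter.schemas.enable=false
value.converter.schemas.enable=false
```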

Kafka consumer API failed to subscribe to topic

喜欢而已 submitted on 2019-12-12 04:23:34
Question: I am using the simple Kafka client API. As far as I know there are two ways to consume messages: subscribe to a topic, or assign partitions to the consumer. However, the first method does not work; the consumer's poll() hangs forever. It only works with assign.

// common config for consumer
Map<String, Object> config = new HashMap<>();
config.put("bootstrap.servers", bootstrap);
config.put("group.id", KafkaTestConstants.KAFKA_GROUP);
config.put("enable.auto.commit", "true");
config.put("auto.offset …
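Unlike assign(), subscribe() goes through the consumer-group machinery, so poll() has to reach the group coordinator before any records flow. A minimal consumer properties sketch of the settings involved (standard client keys; the broker address is a placeholder):

```properties
bootstrap.servers=localhost:9092
# subscribe() requires a group.id; assign() does not.
group.id=test-group
# Without this, a brand-new group starts at "latest" and sees nothing
# until new messages arrive, which can look like poll() hanging forever.
auto.offset.reset=earliest
```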

Kafka connect tutorial stopped working

末鹿安然 submitted on 2019-12-12 03:37:24
Question: I was following step #7 (Use Kafka Connect to import/export data) at this link: http://kafka.apache.org/documentation.html#quickstart It was working well until I deleted the 'test.txt' file, mainly because that is how log4j files work: after a certain time the file gets rotated, i.e. it is renamed and a new file with the same name starts getting written to. But after I deleted 'test.txt', the connector stopped working. I restarted the connector, broker, ZooKeeper etc., but the …
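In standalone mode the quickstart's file source connector remembers how far it has read in the worker's offset file, so deleting and recreating test.txt can leave a stale offset pointing past the end of the new file. A hedged sketch of the setting involved (the key is a real Connect standalone property; the path matches the quickstart default):

```properties
# connect-standalone.properties
# Stored source offsets live in this file; removing it (with the worker
# stopped) makes the connector re-read test.txt from the beginning.
offset.storage.file.filename=/tmp/connect.offsets
```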

Kafka consumer offsetsForTimes method returns offsets for only a few partitions, not all

穿精又带淫゛_ submitted on 2019-12-12 01:06:55
Question: I have one Kafka topic with 8 partitions, I am subscribing to the topic from a single consumer, and I have a unique consumer group for that consumer. Now I am trying to consume only the recent messages (in my case, those from 3 minutes before the current time) from all partitions. I used the offsetsForTimes method like below:

List<PartitionInfo> partitionInfos = consumer.partitionsFor(topic);
List<TopicPartition> topicPartions = partitionInfos.stream().......collect(Collectors.toList());
Long value = Instant.now().minus(120 …
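One behavior worth knowing here: offsetsForTimes() returns a null OffsetAndTimestamp for any partition that has no message at or after the given timestamp, so the result map has to be null-checked per partition. A stand-in sketch of that handling (a plain Map replaces the consumer call; the topic/partition names and offsets are illustrative):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.LinkedHashMap;
import java.util.Map;

public class OffsetsForTimesSketch {
    public static void main(String[] args) {
        // Timestamp for "3 minutes before now", as the question intends.
        long threeMinAgo =
                Instant.now().minus(3, ChronoUnit.MINUTES).toEpochMilli();

        // Stand-in for consumer.offsetsForTimes(timestampsToSearch):
        // partitions with no message at/after the timestamp map to null.
        Map<String, Long> offsets = new LinkedHashMap<>();
        offsets.put("mytopic-0", 42L);
        offsets.put("mytopic-1", null); // no recent data in this partition

        offsets.forEach((tp, offset) -> {
            if (offset == null) {
                System.out.println(tp + ": no offset, seek to end instead");
            } else {
                System.out.println(tp + ": seek to " + offset);
            }
        });
    }
}
```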

Invalid Keystore Format, BootStrap Broker Disconnected

送分小仙女□ submitted on 2019-12-11 23:42:02
Question: I am trying to develop a Kafka consumer in Spring Boot. I am able to set up the Kafka cluster in Kafka Tool and read messages from it manually. I am using the same configs in Spring Boot as well, but ended up with the errors below and this warning:

2019-06-10 13:45:40.036 WARN 8364 --- [ id3-0-C-1] org.apache.kafka.clients.NetworkClient : Bootstrap broker XXXXXX.DEVHADOOP.XXXX.COM:6768 disconnected
2019-06-10 13:45:40.038 WARN 8364 --- [ id1-0-C-1] org.apache.kafka.clients …
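"Invalid keystore format" from the Java SSL layer typically means the file at the configured path is not actually in the store format named in the config (e.g. a PEM file, or a copy corrupted in transfer). A hedged sketch of the Spring Boot properties involved (real spring-kafka property keys; paths and passwords are placeholders):

```properties
spring.kafka.bootstrap-servers=XXXXXX.DEVHADOOP.XXXX.COM:6768
spring.kafka.properties.security.protocol=SSL
# The file must really be in the format named here (JKS or PKCS12);
# a mismatch triggers "Invalid Keystore Format" and the broker
# connection is then reported as disconnected.
spring.kafka.ssl.trust-store-location=file:/path/to/truststore.jks
spring.kafka.ssl.trust-store-password=changeit
spring.kafka.ssl.trust-store-type=JKS
```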

Kafka Consumer - receiving messages inconsistently

不羁的心 submitted on 2019-12-11 22:09:44
Question: I can send and receive messages on the command line against a local Kafka installation. I can also send messages through Java code, and those messages show up in a Kafka command prompt. I also have Java code for the Kafka consumer. The code received messages yesterday; this morning, however, it doesn't receive any messages, and the code has not been changed. I am wondering whether the property configuration is quite right or not. Here is my configuration: The producer: bootstrap …

Distributing data socket among kafka cluster nodes

≯℡__Kan透↙ submitted on 2019-12-11 19:17:31
Question: I want to get data from a socket and put it into a Kafka topic, so that my Flink program can read the data from the topic and process it. I can do that on one node, but I want to have a Kafka cluster with at least three different nodes (different IP addresses) and poll the data from the socket, distributing it among the nodes. I do not know how to do this and how to change my code. My simple program is the following:

public class WordCount {
    public static void main(String[] args) throws Exception {
        kafka_test objKafka = new kafka …
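Distribution across brokers is something Kafka handles itself: if the topic has several partitions replicated over the cluster, a producer writing to it spreads records across those partitions, and the client only needs the broker list. A hedged sketch of the producer side (hostnames are placeholders; the topic layout is an assumption):

```properties
# Producer config: list several brokers. Records sent to a topic whose
# partitions are spread over the cluster are distributed automatically
# (by key hash, or round-robin when no key is set).
bootstrap.servers=node1:9092,node2:9092,node3:9092
```

The topic itself would be created with multiple partitions and a replication factor matching the node count (e.g. via the kafka-topics tool), so that each broker hosts a share of the data.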

Infinite retries with SeekToCurrentErrorHandler in kafka consumer

随声附和 submitted on 2019-12-11 18:04:53
Question: I've configured a Kafka consumer with SeekToCurrentErrorHandler in a Spring Boot application using spring-kafka. My consumer configuration is:

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafkaserver");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group-id");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, …
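Depending on the spring-kafka version, a SeekToCurrentErrorHandler constructed with no arguments can replay a failing record indefinitely; capping the retries is done through its constructor. A sketch assuming spring-kafka 2.3+ (where the BackOff-based constructor exists; the numbers are illustrative):

```java
// Sketch assuming spring-kafka 2.3+ and org.springframework.util.backoff.FixedBackOff:
// retry each failed record twice (three delivery attempts in total)
// with a 1-second back-off, then let the recoverer/log handle it.
@Bean
public SeekToCurrentErrorHandler errorHandler() {
    return new SeekToCurrentErrorHandler(new FixedBackOff(1000L, 2L));
}
```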

Kafka consumer is very slow to consume data and only consumes the first 500 records

寵の児 submitted on 2019-12-11 17:56:13
Question: I am trying to integrate MongoDB and Storm-Kafka. The Kafka producer produces data from MongoDB, but it fails to fetch all the records on the consumer side: it consumes only 500-600 records out of 1 million. There are no errors in the log file; the topology is still alive but is not processing further records. Kafka version: 0.10.*, Storm version: 1.2.1. Do I need to add any configs in the consumer?

conf.put(Config.TOPOLOGY_BACKPRESSURE_ENABLE, false);
conf.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 2048);
conf.put( …
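One failure mode that matches these symptoms, offered as a possibility rather than a diagnosis: if a downstream bolt never acks its tuples, the spout stops emitting once TOPOLOGY_MAX_SPOUT_PENDING unacked tuples are in flight, with no errors logged. A sketch of explicit acking in a bolt using Storm's standard IRichBolt API (process() and the collector field are illustrative names):

```java
// Fragment of an IRichBolt: every tuple must be acked or failed, or the
// spout stalls once the max-spout-pending limit of unacked tuples is hit.
@Override
public void execute(Tuple input) {
    try {
        process(input);        // hypothetical per-tuple processing step
        collector.ack(input);  // without this, pending tuples pile up
    } catch (Exception e) {
        collector.fail(input); // fail() lets the spout replay the tuple
    }
}
```

Bolts extending BaseBasicBolt ack automatically; the manual pattern above applies to BaseRichBolt-style bolts.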