Why won't my Java consumer read the data that I have created?


Question


I am trying to read data from a simple producer that I have made. For some reason, whenever I run the consumer it does not see/consume any of the data I have produced. Can anyone give me some guidance on what to do next?

I have included code of my producer and consumer below:

Producer:

import java.util.Properties;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

import io.confluent.kafka.serializers.KafkaAvroSerializer;
// trackingReport is the Avro-generated class for the schema (its package is not shown in the question)

public class AvroProducer {

public static void main(String[] args) {

    String bootstrapServers = "localhost:9092";
    String topic = "trackingReportsReceived";

    //create Producer properties
    Properties properties = new Properties();
    properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
    properties.setProperty("schema.registry.url", "http://localhost:8081");

    //create the producer
    KafkaProducer<String, trackingReport> producer = new KafkaProducer<>(properties);

    //creating my own event
    trackingReport event = trackingReport.newBuilder()
            .setRemoteEventUID(2)
            .setReceivedPacketUID(2)
            .setRemoteUnitAID(2)
            .setEventTime(2)
            .setEventLocationStampUID(3)
            .setEventLatitude(2)
            .setEventLongitude(2)
            .setEventOdometer(3)
            .setEventSpeed(3)
            .setEventCourse(3)
            .build();


    //create a producer record
    ProducerRecord<String, trackingReport> eventRecord = new ProducerRecord<>(topic, event);

    //send data - asynchronous
    producer.send(eventRecord, new Callback() {
        @Override
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            if (e == null) {
                    System.out.println("Success!");
                    System.out.println(recordMetadata.toString());
                } else {
                    e.printStackTrace();
                }
            }
        });


    //flush data
    producer.flush();
    //flush and close producer
    producer.close();
}
}

Consumer:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import io.confluent.kafka.serializers.KafkaAvroDeserializer;
// trackingReport is the same Avro-generated class used by the producer

public class AvroConsumer {

public static void main(String[] args) {

    final Logger logger = LoggerFactory.getLogger(AvroConsumer.class);

    String bootstrapServers = "localhost:9092";

    //create Consumer properties
    Properties properties = new Properties();
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, "consumer");
    properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

//        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
    properties.put("schema.registry.url", "http://localhost:8081");
    properties.put("specific.avro.reader", "true");

    //create the consumer
    KafkaConsumer<String, trackingReport> consumer = new KafkaConsumer<>(properties);
    String topic = "trackingReportsReceived";

    consumer.subscribe(Collections.singletonList(topic));

    System.out.println("Waiting for data...");

//        try {

        while (true) {
            ConsumerRecords<String, trackingReport> records = consumer.poll(100);
            for (ConsumerRecord<String, trackingReport> record : records) {
                trackingReport trackingrep = record.value();
                System.out.println(trackingrep);
            }
            consumer.commitSync();
        }

//        } catch (Exception e) {
//            logger.error("Exception occured while consuming messages...", e);
//        } finally {
//            consumer.close();
//        }


}
}

N.B. The producer works; however, the consumer does not.


Answer 1:


If you run the consumer only after the records have already been produced, it won't receive them: if no offsets have previously been committed for the group, the consumer starts at the end of the topic by default.
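For reference, this behaviour is controlled by the auto.offset.reset property; the question's consumer does set it to earliest, so the "starts at the end" case mainly applies when the group has already committed offsets (for example, via a console consumer using the same group id):

    // behaviour when the group has NO committed offsets:
    //   "latest"   -> start at the end of the topic, new records only (the default)
    //   "earliest" -> start from the first available offset
    properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");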

Maybe in your kafka-console-consumer.sh command you used the --from-beginning flag, which forces the consumer to start reading from the beginning of the topic.

You can explicitly call seekToBeginning() when your consumer starts, to move its position to the start of the topic.
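Because subscribe() assigns partitions lazily, a reliable place to call it is inside a ConsumerRebalanceListener. Here is a minimal sketch against the consumer from the question (note it rewinds on every rebalance, which may re-read data; it additionally needs java.util.Collection, org.apache.kafka.clients.consumer.ConsumerRebalanceListener and org.apache.kafka.common.TopicPartition imports):

    consumer.subscribe(Collections.singletonList(topic), new ConsumerRebalanceListener() {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // nothing to do before a rebalance in this sketch
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // move every newly assigned partition back to its first offset
            consumer.seekToBeginning(partitions);
        }
    });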




Answer 2:


With the console consumer script, did you use the same group id as the one in your Java consumer?

If you did, then to validate this, try a new consumer group id in your code.
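One quick way to guarantee a group that cannot have committed offsets yet (the timestamp suffix is just an illustrative choice):

    // a fresh, never-before-used group id has no committed offsets,
    // so auto.offset.reset="earliest" will apply and read from the start
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, "consumer-debug-" + System.currentTimeMillis());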

If that works, it would mean that the console consumer read all the data in the topic and committed the latest offsets for that group id. When you then launched the Java consumer with the same group id, it tried to read from those committed offsets, which were already at the end of the topic, so there were no messages left to read.

To validate this, you could also start the Java consumer first and only then start the producer; if you see the messages, it would confirm that the console consumer and the Java consumer had been sharing the same group id.



Source: https://stackoverflow.com/questions/53781639/why-wont-my-java-consumer-read-the-data-that-i-have-created
