Why does Kafka Consumer keep receiving the same messages (offset)

守給你的承諾 · Submitted 2020-01-04 06:09:46

Question


I have a SOAP web service that sends a Kafka request message and waits for a Kafka response message (e.g. consumer.poll(10000)).

Each time the web service is called it creates a new Kafka Producer and a new Kafka Consumer.

Every time I call the web service, the consumer receives the same messages (i.e. messages with the same offsets).

I am using Kafka 0.9 with auto-commit enabled and an auto-commit frequency of 100 ms.
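The setup described above can be sketched as a plain Properties object (a sketch only — the broker list, group id, and deserializer class names are taken from the config dump in the UPDATE below; note that the dump actually shows auto.commit.interval.ms = 5000, not the 100 ms stated here):

```java
import java.util.Properties;

public class ConsumerConfigSketch {

    // Builds the consumer configuration described in the question.
    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers",
                "machine1:6667,machine2:6667,machine3:6667,machine0:6667");
        props.put("group.id", "group-A.group");
        props.put("enable.auto.commit", "true");
        // 100 ms as stated in the question text; the UPDATE dump shows 5000.
        props.put("auto.commit.interval.ms", "100");
        props.put("auto.offset.reset", "latest");
        props.put("key.deserializer", "com.kafka.UUIDDerializer");
        props.put("value.deserializer", "com.kafka.MDCDeserializer");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildConfig();
        System.out.println(props.getProperty("enable.auto.commit"));
    }
}
```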

For each ConsumerRecord returned by the poll() method, I submit a handler to an executor so each record is processed in its own Callable, e.g.

ConsumerRecords<String, String> records = consumer.poll(200);

for (ConsumerRecord<String, String> record : records) {
    final Handler handler = new Handler(record); // was "consumerRecord", which is not defined here
    executor.submit(handler);
}

Why do I keep receiving the same messages over and over again?

UPDATE 0001

metric.reporters = []
metadata.max.age.ms = 300000
value.deserializer = class com.kafka.MDCDeserializer
group.id = group-A.group
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [machine1:6667, machine2:6667, machine3:6667, machine0:6667]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
enable.auto.commit = true
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
session.timeout.ms = 30000
metrics.num.samples = 2
client.id = 
ssl.endpoint.identification.algorithm = null
key.deserializer = class com.kafka.UUIDDerializer
ssl.protocol = TLS
check.crcs = true
request.timeout.ms = 40000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXTSASL
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = IbmX509
metrics.sample.window.ms = 30000
fetch.min.bytes = 1024
send.buffer.bytes = 131072
auto.offset.reset = latest

Answer 1:


Based on the code you are showing, I think your problem is that the new consumer is single-threaded: auto-commit only runs inside poll(). If you poll once and never poll again, auto.commit.interval.ms never fires and the offset is never committed, so the next consumer in the group starts from the same position. Try putting your code in a while loop and check whether the offset is committed once poll() is called again.
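A minimal sketch of the loop this answer suggests (assumptions: a running broker reachable via the props above, a topic named "responses", and the Handler class from the question — all placeholders, not taken from the original post):

```java
// Sketch only: with the 0.9 new consumer, auto-commit is performed inside
// poll(), so offsets from one poll are only committed on a later poll()
// (or by an explicit commitSync() before closing).
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("responses")); // hypothetical topic

    while (running) {
        ConsumerRecords<String, String> records = consumer.poll(200);
        for (ConsumerRecord<String, String> record : records) {
            executor.submit(new Handler(record));
        }
        // Subsequent iterations give auto-commit a chance to run.
    }

    // If you genuinely need one-shot request/response behaviour instead of
    // a loop, commit explicitly rather than relying on auto-commit:
    // consumer.commitSync();
}
```

Note that submitting handlers to an executor means records may still be in flight when their offsets are auto-committed, so a crash can lose messages; that is a separate trade-off from the duplicate-delivery problem asked about here.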



Source: https://stackoverflow.com/questions/36007141/why-does-kafka-consumer-keep-receiving-the-same-messages-offset
