org.apache.kafka.common.errors.RecordTooLargeException - Dropping messages larger than the max limit and pushing them into another Kafka topic

╄→гoц情女王★ submitted on 2020-03-04 18:18:13

Question


org.apache.kafka.common.errors.RecordTooLargeException: There are some messages at [Partition=Offset]: {binlog-0=170421} whose size is larger than the fetch size 1048576 and hence cannot be returned.

Hi, I'm getting the above exception and my Apache Beam data pipeline fails. I want the Kafka reader to ignore messages larger than the default fetch size and maybe push them into another topic for logging purposes.

import java.util.Properties;

Properties kafkaProps = new Properties();
kafkaProps.setProperty("errors.tolerance", "all");                              // tolerate all record errors
kafkaProps.setProperty("errors.deadletterqueue.topic.name", "binlogfail");      // route failures here
kafkaProps.setProperty("errors.deadletterqueue.topic.replication.factor", "1");

I tried the properties above but am still getting the RecordTooLargeException.

Kafka Connect sink tasks ignore tolerance limits

That link says the above properties apply only during conversion or serialization.

Is there some way to solve the problem I'm facing? Any help would be appreciated.


Answer 1:


I want the Kafka reader to ignore messages larger than the default fetch size

With Beam, I'm not sure you can capture that error and skip it. You would have to drop down to raw Kafka Consumer/Producer instances to handle that try-catch logic yourself (sketched below).
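
A minimal sketch of that raw-consumer approach, outside Beam. It assumes a pre-2.0 Kafka client, where poll() can throw RecordTooLargeException and the exception exposes the offending offsets via recordTooLargePartitions(); process() is a hypothetical stand-in for your pipeline logic. Note the oversized record itself can never be fetched, so it cannot be forwarded anywhere; the best you can do is log its coordinates and seek past it:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.RecordTooLargeException;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092"); // assumption: your cluster address
props.setProperty("group.id", "binlog-reader");           // assumption: any group id
props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("binlog"));
while (true) {
    try {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            process(record); // hypothetical: hand the record to your pipeline
        }
    } catch (RecordTooLargeException e) {
        // The record is too large to fetch at all, so log its location and skip past it.
        for (Map.Entry<TopicPartition, Long> entry : e.recordTooLargePartitions().entrySet()) {
            System.err.println("Skipping oversized record at " + entry.getKey()
                    + ", offset " + entry.getValue());
            consumer.seek(entry.getKey(), entry.getValue() + 1);
        }
    }
}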

& maybe push it into another topic for logging purposes.

That isn't possible without first changing the broker settings to allow larger messages, and then changing your client properties to match (see the settings below).
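
If you do go that route, the relevant knobs are message.max.bytes (broker) or max.message.bytes (per topic), plus the client settings below. A sketch, where 15728640 (~15 MB) is just an illustrative value:

Properties producerProps = new Properties();
producerProps.setProperty("max.request.size", "15728640");          // producer-side record size cap

Properties consumerProps = new Properties();
consumerProps.setProperty("max.partition.fetch.bytes", "15728640"); // default is 1048576, the limit in the error above
consumerProps.setProperty("fetch.max.bytes", "15728640");           // overall per-fetch cap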

The errors.* properties are for the Kafka Connect APIs, not for plain Consumer/Producer clients (which is what Beam uses); they belong in a connector config, as shown below.
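
For reference, a hypothetical standalone sink connector config where those properties would actually take effect, using the bundled FileStreamSinkConnector purely for illustration:

name=binlog-sink                       # hypothetical connector name
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
file=/tmp/binlog.out                   # required by this example connector
topics=binlog
errors.tolerance=all
errors.deadletterqueue.topic.name=binlogfail
errors.deadletterqueue.topic.replication.factor=1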

Related - How can I send large messages with Kafka (over 15MB)?



Source: https://stackoverflow.com/questions/60500751/org-apache-kafka-common-errors-recordtoolargeexception-droping-message-with-si
