Kafka Consumers throwing java.lang.OutOfMemoryError: Direct buffer memory

北荒 2021-01-19 18:34

I am using a single-node Kafka broker (0.10.2) and a single-node ZooKeeper broker (3.4.9). I have a consumer server (single core, 1.5 GB RAM). Whenever

1 Answer
  • 2021-01-19 19:18

    Kafka consumers handle data backlog through the following two parameters:

    max.poll.interval.ms
    The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.
    Default value is 300000.

    max.poll.records
    The maximum number of records returned in a single call to poll().
    Default value is 500.

    Failing to set these two parameters according to your requirements can cause the consumer to poll more data than it can handle with the available resources, leading to OutOfMemory errors or occasional failures to commit the consumer offset. Hence, it is always advisable to tune the max.poll.records and max.poll.interval.ms parameters.

    In your code, the case KafkaTopicConfigEntity.KAFKA_NODE_TYPE_ENUM.Priority.toString() is missing these two parameters, which could be the cause of the OutOfMemory problem during polling.
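    As a minimal sketch of setting these two parameters, the snippet below builds a consumer configuration with a reduced batch size. The broker address, group id, and the chosen value of 100 records per poll are illustrative assumptions, not values from your setup; on a 1.5 GB box you would tune max.poll.records down from the default of 500 until a full batch fits comfortably in the heap.

    ```java
    import java.util.Properties;

    public class ConsumerTuning {

        // Build consumer properties with explicit backlog-control settings.
        // The broker address and group id below are placeholders.
        public static Properties tunedConsumerProps() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker
            props.put("group.id", "example-group");           // assumed group id
            // Cap the number of records a single poll() can return, so one
            // batch cannot exceed what the available memory can hold.
            props.put("max.poll.records", "100"); // default is 500
            // Maximum delay allowed between poll() calls before the consumer
            // is considered failed and the group rebalances.
            props.put("max.poll.interval.ms", "300000"); // default is 300000
            return props;
        }

        public static void main(String[] args) {
            Properties props = tunedConsumerProps();
            // These props would be passed to new KafkaConsumer<>(props)
            // together with the key/value deserializer settings.
            System.out.println(props.getProperty("max.poll.records"));
        }
    }
    ```

    The same keys can be passed via ConsumerConfig constants (ConsumerConfig.MAX_POLL_RECORDS_CONFIG, ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG) if you prefer compile-time checked names.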
