Kubernetes message consumer scalability

Submitted by 我的梦境 on 2019-12-04 18:23:19

The Kubernetes Horizontal Pod Autoscaler has support for custom and external metrics. With more traditional AMQP-based messaging brokers (one queue / many competing consumers), you should be able to easily scale the consumers based on queue depth (for example: if queue depth is >= 10000 messages, scale up; if queue depth is <= 1000 messages, scale down). You could also scale based on the average client throughput (for example: if average throughput is >= 5000 msg/s, scale up) or average latency. The Horizontal Pod Autoscaler does the scaling up and down for you: it observes the metrics and decides when a pod should be started or shut down. The consumer application is not aware of this and doesn't need any special support for it. But you will need to collect these metrics and expose them so that Kubernetes can consume them, which is currently not completely trivial.
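As a rough illustration, a queue-depth-driven autoscaler could look like the sketch below. It assumes a metrics adapter (for example the Prometheus Adapter) already exposes an external metric; the metric name `queue_depth`, the `queue: orders` label, and the Deployment name `consumer` are all hypothetical placeholders for whatever your setup provides:

```yaml
# Sketch of an HPA scaling a consumer Deployment on an external
# queue-depth metric. The metric name, label selector, and thresholds
# are illustrative - they depend on your metrics adapter configuration.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: consumer-scaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: consumer
  minReplicas: 1
  maxReplicas: 20
  metrics:
  - type: External
    external:
      metric:
        name: queue_depth
        selector:
          matchLabels:
            queue: orders
      target:
        type: AverageValue
        averageValue: "1000"
```

With `AverageValue`, the HPA divides the metric by the current replica count, so it keeps adding consumers until each pod's share of the queue depth drops to roughly the target.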

With Kafka, this will be a bit harder, since Kafka implements competing consumers very differently from more traditional messaging brokers. Kafka topics are split into partitions, and each partition can be consumed by at most one consumer from a given consumer group. So whatever autoscaling you do, it will not be able to handle situations such as:

  • A small number of partitions for a given topic (you will never have more active consumers than partitions)
  • Asymmetric partition load (some partitions being very busy while others are empty)
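The first limitation can be sketched in a few lines. This is not Kafka's actual assignor implementation, just an illustrative round-robin assignment showing that consumers beyond the partition count receive nothing and sit idle:

```python
# Sketch: why extra consumers beyond the partition count sit idle.
# This mimics assigning each partition to exactly one consumer in a
# group; names and the round-robin strategy are illustrative only.

def assign(partitions, consumers):
    """Assign each partition to exactly one consumer, round-robin."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

partitions = [0, 1, 2]                            # topic with 3 partitions
consumers = [f"consumer-{i}" for i in range(5)]   # 5 consumers in one group

result = assign(partitions, consumers)
idle = [c for c, ps in result.items() if not ps]
print(idle)  # consumer-3 and consumer-4 get no partitions
```

Scaling the Deployment from 5 to 10 replicas here would change nothing: the surplus pods would all be idle.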

Kafka also doesn't have anything like queue depth. But you can, for example, use the consumer lag (which shows how far the consumer is behind the producer for a given partition) to drive the scaling.
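A minimal sketch of turning lag into a scaling signal, with hard-coded offsets: in practice you would read the log-end offset and the group's committed offset from Kafka (via an admin client or an exporter such as a lag exporter), and the thresholds below are illustrative, not recommendations:

```python
# Sketch: deriving a scaling decision from per-partition consumer lag.
# Offsets and thresholds are hard-coded placeholders.

def partition_lag(log_end_offset, committed_offset):
    """Lag = how far the consumer trails the producer on one partition."""
    return max(0, log_end_offset - committed_offset)

def desired_action(total_lag, scale_up_at=10000, scale_down_at=1000):
    if total_lag >= scale_up_at:
        return "scale-up"
    if total_lag <= scale_down_at:
        return "scale-down"
    return "hold"

# (log-end offset, committed offset) per partition - illustrative values
offsets = {0: (52000, 45000), 1: (30000, 29500), 2: (8000, 8000)}
total = sum(partition_lag(end, committed) for end, committed in offsets.values())
print(total, desired_action(total))  # 7500 hold
```

Note that total lag inherits the limitations above: if all the lag sits on one partition, adding consumers will not help, since only one consumer in the group can drain that partition.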
