Kafka server configuration - listeners vs. advertised.listeners


Question


To get Kafka running, you need to set some properties in the config/server.properties file. There are two settings I don't understand.

Can somebody explain the difference between the listeners and the advertised.listeners properties?

The documentation says:

listeners: The address the socket server listens on.

and

advertised.listeners: Hostname and port the broker will advertise to producers and consumers.

When do I have to use which setting?


Answer 1:


Since I cannot comment yet, I will post this as an "answer", adding on to M.Situation's answer.

Within the same document he links, there is this note about which listener a Kafka client uses (https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic):

As stated previously, clients never see listener names and will make metadata requests exactly as before. The difference is that the list of endpoints they get back is restricted to the listener name of the endpoint where they made the request.

This is important: the URL you use in your bootstrap.servers config determines which advertised listener's URL the client will get back, provided that URL is mapped in advertised.listeners (I don't know what the behavior is if the listener does not exist).

Also note this:

The exception is ZooKeeper-based consumers. These consumers retrieve the broker registration information directly from ZooKeeper and will choose the first listener with PLAINTEXT as the security protocol (the only security protocol they support).

As an example broker config (for all brokers in the cluster):

advertised.listeners=EXTERNAL://XXXXX.compute-1.amazonaws.com:9990,INTERNAL://ip-XXXXX.ec2.internal:9993

inter.broker.listener.name=INTERNAL

listener.security.protocol.map=EXTERNAL:SSL,INTERNAL:PLAINTEXT

If the client uses XXXXX.compute-1.amazonaws.com:9990 to connect, the metadata fetch will go to that broker. However, the URL returned for the Group Coordinator or partition leader could be 123.compute-1.amazonaws.com:9990 (a different machine!). This means the match is done on the listener name, as described in KIP-103, irrespective of the actual URL (node).

Since the protocol map for EXTERNAL is SSL, this would force you to use an SSL keystore to connect.

If, on the other hand, you are within AWS, say, you can use ip-XXXXX.ec2.internal:9993, and the corresponding connection will be plaintext as per the protocol map.

This is especially useful in IaaS environments: in my case the brokers and consumers live on AWS, whereas my producer lives on a client site, so they need different security protocols and listeners.
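To make that concrete, here is a minimal sketch of what the two client configurations might look like, assuming the example broker config above (the truststore path and password are placeholders):

# Producer on the client site: connects via the EXTERNAL listener, so SSL applies
bootstrap.servers=XXXXX.compute-1.amazonaws.com:9990
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit

# Consumer inside AWS: connects via the INTERNAL listener, plaintext per the protocol map
bootstrap.servers=ip-XXXXX.ec2.internal:9993
security.protocol=PLAINTEXT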

EDIT: Adding inbound rules is also much easier now that you have different ports for different clients (brokers, producers, consumers).




Answer 2:


listeners is what the broker will use to create server sockets.

advertised.listeners is what clients will use to connect to the brokers.

The two settings can be different if you have a "complex" network setup (with things like public and private subnets and routing in between).
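For example (a minimal sketch; the addresses below are placeholders), a broker in a private subnet might bind to its private IP while advertising a public hostname that routes back to it:

# What the broker binds to (server socket)
listeners=PLAINTEXT://10.0.1.5:9092

# What is published to clients in metadata responses
advertised.listeners=PLAINTEXT://kafka1.example.com:9092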




Answer 3:


From this link: https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic

During the 0.9.0.0 release cycle, support for multiple listeners per broker was introduced. Each listener is associated with a security protocol, ip/host and port. When combined with the advertised listeners mechanism, there is a fair amount of flexibility with one limitation: at most one listener per security protocol in each of the two configs (listeners and advertised.listeners).

In some environments, one may want to differentiate between external clients, internal clients and replication traffic independently of the security protocol for cost, performance and security reasons. A few examples that illustrate this:

  • Replication traffic is assigned to a separate network interface so that it does not interfere with client traffic.
  • External traffic goes through a proxy/load-balancer (security, flexibility) while internal traffic hits the brokers directly (performance, cost).
  • Different security settings for external versus internal traffic even though the security protocol is the same (e.g. different set of enabled SASL mechanisms, authentication servers, different keystores, etc.)

As such, we propose that Kafka brokers should be able to define multiple listeners for the same security protocol for binding (i.e. listeners) and sharing (i.e. advertised.listeners) so that internal, external and replication traffic can be separated if required.

So,

listeners - Comma-separated list of URIs we will listen on and their protocols. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists:

  • PLAINTEXT://myhost:9092,TRACE://:9091
  • PLAINTEXT://0.0.0.0:9092, TRACE://localhost:9093

advertised.listeners - Listeners to publish to ZooKeeper for clients to use, if different than the listeners above. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used.
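Putting the two settings together with named listeners, a broker that separates external, internal, and replication traffic might look roughly like this (a sketch only; the listener names, hostnames, and ports are made up for illustration):

listeners=EXTERNAL://0.0.0.0:9092,INTERNAL://0.0.0.0:9093,REPLICATION://0.0.0.0:9094
advertised.listeners=EXTERNAL://broker1.public.example.com:9092,INTERNAL://broker1.internal.example.com:9093,REPLICATION://broker1.internal.example.com:9094
listener.security.protocol.map=EXTERNAL:SSL,INTERNAL:PLAINTEXT,REPLICATION:PLAINTEXT
inter.broker.listener.name=REPLICATION

With a map like this, external clients negotiate SSL, while internal clients and broker-to-broker replication stay on plaintext inside the network.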



Source: https://stackoverflow.com/questions/42998859/kafka-server-configuration-listeners-vs-advertised-listeners
