Leader Not Available Kafka in Console Producer

野趣味 2020-12-07 07:47

I am trying to use Kafka.
All configurations are done properly, but when I try to produce a message from the console I keep getting the following error:

WARN Error while fetching metadata with correlation id 39 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
24 Answers
  • 2020-12-07 08:09

    I know this was posted a long time back, but I would like to share how I solved it.
    Since this was my office laptop (a VPN and proxy were configured),
    I checked the NO_PROXY environment variable:

    > echo %NO_PROXY%
    

    It returned an empty value, so I set NO_PROXY to localhost and 127.0.0.1:

    > set NO_PROXY=127.0.0.1,localhost  
    

    If you want to append to the existing value, then:

    > set NO_PROXY=%NO_PROXY%,127.0.0.1,localhost  
    

    After this, I restarted ZooKeeper and Kafka.
    It worked like a charm.
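
    To make NO_PROXY persist across sessions (a sketch, assuming a Windows machine, since the %NO_PROXY% syntax above is cmd), setx writes it to the user environment; note it only takes effect in consoles opened afterwards:

    > setx NO_PROXY "127.0.0.1,localhost"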

  • 2020-12-07 08:10

    What solved it for me was setting the listeners like so:

    advertised.listeners = PLAINTEXT://my.public.ip:9092
    listeners = PLAINTEXT://0.0.0.0:9092
    

    This makes the Kafka broker listen on all interfaces, while advertised.listeners tells clients which address to use when connecting.
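
    As a quick check (a sketch: my.public.ip is the placeholder from the config above, and "test" is an assumed topic name), the console producer should now be able to connect from a remote machine:

    kafka-console-producer.sh --broker-list my.public.ip:9092 --topic test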

  • 2020-12-07 08:10

    For anyone trying to run Kafka on Kubernetes and running into this error, this is what finally solved it for me:

    You have to either:

    1. Add hostname to the pod spec so that Kafka can find itself.

    or

    2. If you are using hostPort, then you need hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet.

    The reason is that Kafka needs to talk to itself, and it uses the 'advertised' listener/hostname to find itself rather than localhost. Even if you have a Service that points the advertised host name at the pod, the name is not visible from within the pod itself. I do not really know why that is the case, but at least there is a workaround. The full manifests are below.

    apiVersion: apps/v1  # extensions/v1beta1 Deployments were removed in Kubernetes 1.16
    kind: Deployment
    metadata:
      name: zookeeper-cluster1
      namespace: default
      labels:
        app: zookeeper-cluster1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: zookeeper-cluster1
      template:
        metadata:
          labels:
            name: zookeeper-cluster1
            app: zookeeper-cluster1
        spec:
          hostname: zookeeper-cluster1
          containers:
          - name: zookeeper-cluster1
            image: wurstmeister/zookeeper:latest
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
    
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper-cluster1
      namespace: default
      labels:
        app: zookeeper-cluster1
    spec:
      type: NodePort
      selector:
        app: zookeeper-cluster1
      ports:
      - name: zookeeper-cluster1
        protocol: TCP
        port: 2181
        targetPort: 2181
      - name: zookeeper-follower-cluster1
        protocol: TCP
        port: 2888
        targetPort: 2888
      - name: zookeeper-leader-cluster1
        protocol: TCP
        port: 3888
        targetPort: 3888
    
    ---
    
    apiVersion: apps/v1  # extensions/v1beta1 Deployments were removed in Kubernetes 1.16
    kind: Deployment
    metadata:
      name: kafka-cluster
      namespace: default
      labels:
        app: kafka-cluster
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: kafka-cluster
      template:
        metadata:
          labels:
            name: kafka-cluster
            app: kafka-cluster
        spec:
          hostname: kafka-cluster
          containers:
          - name: kafka-cluster
            image: wurstmeister/kafka:latest
            imagePullPolicy: IfNotPresent
            env:
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://kafka-cluster:9092
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-cluster1:2181
            ports:
            - containerPort: 9092
    
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: kafka-cluster
      namespace: default
      labels:
        app: kafka-cluster
    spec:
      type: NodePort
      selector:
        app: kafka-cluster
      ports:
      - name: kafka-cluster
        protocol: TCP
        port: 9092
        targetPort: 9092
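
    To verify the fix from inside the cluster (a sketch; it assumes the wurstmeister/kafka image keeps the Kafka CLI scripts on the PATH and a broker recent enough for --bootstrap-server), list topics through the advertised name:

    kubectl exec -it <kafka-pod-name> -- kafka-topics.sh --bootstrap-server kafka-cluster:9092 --list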
    
  • 2020-12-07 08:10

    For me, the problem was that I had not specified a broker id for the Kafka instance, so it would sometimes get a new id from ZooKeeper when restarting in a Docker environment. If your broker id is greater than 1000 (the auto-generated range), just specify the environment variable KAFKA_BROKER_ID.

    Use this to see brokers, topics, and partitions:

    brew install kafkacat
    kafkacat -b [kafka_ip]:[kafka_port] -L
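
    A minimal way to pin the id (a sketch, assuming the wurstmeister/kafka image, whose startup script maps KAFKA_* environment variables into server.properties; the ZooKeeper and listener addresses are placeholders):

    docker run -d \
      -e KAFKA_BROKER_ID=1 \
      -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
      -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
      -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
      wurstmeister/kafka:latest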
    
  • 2020-12-07 08:12

    Another possibility for this warning (in 0.10.2.1) is that you are trying to poll on a topic that has just been created, and the leader for this topic-partition is not yet available; you are in the middle of a leadership election.

    Waiting a second between topic creation and polling is a workaround.
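
    In a script this can be as simple as sleeping briefly after creating the topic (a sketch; the topic name and bootstrap address are placeholders, and on 0.10.x kafka-topics.sh takes --zookeeper instead of --bootstrap-server):

    kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test --partitions 1 --replication-factor 1
    sleep 1
    kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning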

  • 2020-12-07 08:13

    It could be related to the advertised.host.name setting in your server.properties.

    What could be happening is that your producer tries to find out who the leader is for a given partition, looks up that broker's advertised.host.name and advertised.port, and tries to connect. If these settings are not configured correctly, the producer may conclude that the leader is unavailable.
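
    For example (a sketch; the hostname is a placeholder, and these are the legacy property names, superseded by listeners/advertised.listeners on newer brokers):

    advertised.host.name=broker1.example.com
    advertised.port=9092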
