apache-zookeeper

Connecting and Persisting to HBase

I just tried to connect to HBase, which is part of the Cloudera VM, using a Java client (192.168.56.102 is the inet IP of the VM). I use VirtualBox with a host-only network setting, so I can access the web UI of the HBase master at http://192.168.56.102:60010/master.jsp. My Java client (which worked fine on the VM itself) also establishes a connection to 192.168.56.102:2181, but when it calls getMaster I get "connection refused". See the log: 11/09/14 11:19:30 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.56.102:2181 sessionTimeout=180000 watcher=hconnection 11/09/14 11:19:30 INFO
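In host-only VM setups like this, a frequent cause is that the HBase master registers the VM's internal hostname in ZooKeeper, which the host machine cannot resolve: the client reaches ZooKeeper fine but the subsequent getMaster call fails. Below is a minimal client sketch against the newer HBase ConnectionFactory API, using the addresses from the question; everything else is illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RemoteHBaseCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // The client only needs the ZooKeeper quorum; it then looks up the
        // master and region server addresses that were registered in ZooKeeper.
        // Those registered hostnames must be resolvable from this machine.
        conf.set("hbase.zookeeper.quorum", "192.168.56.102");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            for (TableName table : admin.listTableNames()) {
                System.out.println("Found table: " + table.getNameAsString());
            }
        }
    }
}
```

If the admin call still fails, checking which hostname the master znode advertises (e.g. with zkCli.sh) and mapping that name in the client machine's /etc/hosts is a common workaround.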

Kafka: How to connect kafka-console-consumer to fetch remote broker topic content?

I have set up a Kafka ZooKeeper and 3 brokers on one machine on EC2 with ports 9092..9094 and am trying to consume the topic content from another machine. The ports 2181 (ZooKeeper), 9092, 9093 and 9094 (brokers) are open to the consumer machine. I can even run bin/kafka-topics.sh --describe --zookeeper 172.X.X.X:2181 --topic remotetopic, which gives me: Topic:remotetopic PartitionCount:1 ReplicationFactor:3 Configs: Topic: remotetopic Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1 However, when I do bin/kafka-console-consumer.sh --zookeeper 172.X.X.X:2181 --from-beginning --topic
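Describing the topic through ZooKeeper only proves that ZooKeeper is reachable; the console consumer then connects to whatever addresses the brokers advertised, and those must also resolve from the consumer machine (advertised.host.name / advertised.listeners on the EC2 brokers). As a cross-check, a minimal Java consumer that talks to the brokers directly might look like the sketch below; the broker address placeholders, group id and topic mirror the question, the rest is assumed:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class RemoteTopicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Replace with the brokers' public addresses; they must match what the
        // brokers advertise, otherwise the first metadata response sends the
        // client to an unreachable host.
        props.put("bootstrap.servers", "172.X.X.X:9092,172.X.X.X:9093,172.X.X.X:9094");
        props.put("group.id", "remote-test");
        props.put("auto.offset.reset", "earliest"); // roughly equivalent to --from-beginning
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("remotetopic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}
```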

Best way to start a ZooKeeper server from a Java program

I have two questions for which I couldn't find any popular/widely accepted solutions: What is the easiest way to start a ZooKeeper server from a Java program? And is it possible to add servers to a ZooKeeper cluster without having to manually go to each machine and update its config file with the new node's id and ip:port entry? Can someone please help? Thanks!

igorbel: If you want to start a new ZooKeeper server process from your Java code, you would do it the same way you would start any other external process from Java, e.g. using a ProcessBuilder. There is nothing special here in case of
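A rough sketch of the ProcessBuilder approach from the answer, assuming a ZooKeeper installation under /opt/zookeeper (the path is a placeholder):

```java
import java.io.File;
import java.io.IOException;

public class ZkLauncher {
    public static void main(String[] args) throws IOException {
        // Start the standard ZooKeeper launch script as an external process
        // and keep it in the foreground so its lifetime is tied to ours.
        ProcessBuilder pb = new ProcessBuilder("/opt/zookeeper/bin/zkServer.sh", "start-foreground");
        pb.directory(new File("/opt/zookeeper"));
        pb.redirectErrorStream(true);
        pb.inheritIO(); // forward the server's stdout/stderr to this JVM's console

        Process zk = pb.start();
        // Make sure the child process is stopped when this JVM exits.
        Runtime.getRuntime().addShutdownHook(new Thread(zk::destroy));
    }
}
```

For an in-process alternative, ZooKeeper also ships server entry points (ZooKeeperServerMain / QuorumPeerMain) that can be invoked directly, which is essentially what test helpers such as Curator's TestingServer build on.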

Kafka cached zkVersion not equal to that in zookeeper, broker not recovering

I have a Kafka cluster with 3 brokers. Lately I have started facing issues with brokers dropping out of the cluster and producers/consumers throwing "leader not available" errors. On examining the logs I see the following sequence of events:

// Lots of replica fetcher threads starting/stopping
[2017-10-09 14:48:50,600] INFO [ReplicaFetcherManager on broker 6] Removed fetcher for partitions
[2017-10-09 14:48:50,608] INFO [ReplicaFetcherThread-0-7], Shutting down (kafka.server.ReplicaFetcherThread)
[2017-10-09 14:48:50,918] INFO [ReplicaFetcherThread-0-7], Stopped (kafka.server.ReplicaFetcherThread)
[2017

Solr Cloud Document Routing

Currently I have a ZooKeeper multi-Solr-server, single-shard setup. Unique ids are generated automatically by Solr. I now have a ZooKeeper multi-Solr-server, multi-shard requirement, and I need to be able to route updates to a specific shard. After reading http://searchhub.org/2013/06/13/solr-cloud-document-routing/ I am concerned that I cannot let Solr generate random unique ids if I want to route updates to a specific shard. Can anyone confirm this for me and perhaps give an explanation
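For what it's worth, the usual answer with the default compositeId router is indeed to stop relying on Solr-generated UUIDs and to assign ids of the form routeKey!uniqueId, so every document sharing the prefix hashes to the same shard. A SolrJ sketch using a newer CloudSolrClient (collection, field and ZooKeeper host names are made up):

```java
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class RoutedIndexer {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Collections.singletonList("zkhost:2181"), Optional.empty()).build()) {
            client.setDefaultCollection("mycollection");

            SolrInputDocument doc = new SolrInputDocument();
            // "tenantA" is the route key: all ids with this prefix are hashed
            // to the same shard by the compositeId router.
            doc.addField("id", "tenantA!order-12345");
            doc.addField("title_s", "example document");

            client.add(doc);
            client.commit();
        }
    }
}
```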

ZooKeeper issue when setting up Kafka

To install Kafka, I downloaded the Kafka tar archive. To start the server I tried this command: bin/zookeeper-server-start.sh config/zookeeper.properties The following error occurred on entering the above command: INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) [2014-08-21 11:53:55,748] FATAL Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain) org.apache.zookeeper.server.quorum
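That FATAL from QuorumPeerConfig generally means the properties file could not be found or parsed, for example when the command is run from somewhere other than the Kafka root directory. For reference, the zookeeper.properties shipped with Kafka is only a few lines; the values below reflect the usual defaults and may differ in your download:

```
# config/zookeeper.properties -- defaults shipped with Kafka
# Directory where ZooKeeper stores its snapshots; must be writable.
dataDir=/tmp/zookeeper
# Port that clients (including the Kafka brokers) connect to.
clientPort=2181
# 0 disables the per-IP connection limit.
maxClientCnxns=0
```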

SolrCloud ZooKeeper Configuration updates

How do I update an existing configuration file of SolrCloud in ZooKeeper? I am using a Solr 4 beta version with ZooKeeper 3.3.6. I have updated a configuration file and restarted the Solr instance, which uploads the configuration file to ZooKeeper. But when I check the configuration file from the SolrCloud admin console, I don't see the updates. I cannot tell whether this is an issue with the SolrCloud admin console or whether I failed to upload the config file to ZooKeeper. Can someone who is familiar with ZooKeeper tell me how to update an existing configuration file
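Restarting a Solr node typically re-uploads the config only when it is started with the bootstrap options; the more predictable route is to push the config directory to ZooKeeper explicitly and then reload the collection. In the Solr 4 era that was done with the zkcli.sh upconfig command; with a newer SolrJ (roughly 5.x-8.x) the equivalent looks like the sketch below, where the ZooKeeper host, config set and collection names are placeholders:

```java
import java.nio.file.Paths;
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.solr.common.cloud.ZkConfigManager;

public class UploadConfigSet {
    public static void main(String[] args) throws Exception {
        // Push the local conf directory to ZooKeeper under the named config set.
        try (SolrZkClient zkClient = new SolrZkClient("zkhost:2181", 30000)) {
            new ZkConfigManager(zkClient).uploadConfigDir(Paths.get("/path/to/conf"), "myconfig");
        }

        // Reload the collection so the running cores pick up the new config.
        try (CloudSolrClient solr = new CloudSolrClient.Builder(
                Collections.singletonList("zkhost:2181"), Optional.empty()).build()) {
            CollectionAdminRequest.reloadCollection("mycollection").process(solr);
        }
    }
}
```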

What's the difference between ZooKeeper and any distributed Key-Value stores?

I am new to ZooKeeper and distributed systems, and am learning them on my own. From what I understand so far, it seems that ZooKeeper is simply a key-value store whose keys are paths and values are strings, which is no different from, say, Redis. (And apparently we can use slash-separated paths as keys in Redis as well.) So my question is: what is the essential difference between ZooKeeper and other distributed KV stores? Why does ZooKeeper use so-called "paths" as keys instead of simple strings?

kuujo: You're comparing the high-level data model of ZooKeeper to other key-value stores, but that
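A short sketch of the primitives that answer is alluding to: ephemeral znodes tied to a client session and one-shot watches on a hierarchical namespace, which are the building blocks for coordination recipes (group membership, locks, leader election) and which a plain key-value store does not give you out of the box. Host and paths below are illustrative:

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkPrimitivesDemo {
    public static void main(String[] args) throws Exception {
        // A real client would wait for the SyncConnected event before issuing requests.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> {});

        // Persistent parent znode for the group.
        if (zk.exists("/workers", false) == null) {
            zk.create("/workers", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }

        // Ephemeral znode: deleted automatically when this client's session ends,
        // which is what membership and leader-election recipes rely on.
        zk.create("/workers/worker-1", "192.168.0.5".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // One-shot watch: ZooKeeper notifies us when the children of /workers
        // change, instead of the client polling a key as in a plain KV store.
        zk.getChildren("/workers", event ->
                System.out.println("group membership changed: " + event));

        Thread.sleep(60_000); // keep the session (and the ephemeral node) alive for a while
        zk.close();
    }
}
```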

org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'

I am developing a Spring Boot + Apache Kafka + Apache ZooKeeper example. I've installed and set up Apache ZooKeeper and Apache Kafka on my local Windows machine. I took a reference from this link: https://www.tutorialspoint.com/spring_boot/spring_boot_apache_kafka.htm and executed the code as-is. Setup: https://medium.com/@shaaslam/installing-apache-kafka-on-windows-495f6f2fd3c8 Error: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config
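That ApplicationContextException from internalKafkaListenerEndpointRegistry is usually a downstream symptom: the @KafkaListener container fails to start, most often because no broker is reachable at spring.kafka.bootstrap-servers or no consumer group id is configured. A minimal listener plus the relevant properties (names assumed, not taken from the tutorial) would look like:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// src/main/resources/application.properties (assumed values):
//   spring.kafka.bootstrap-servers=localhost:9092
//   spring.kafka.consumer.group-id=demo-group
//   spring.kafka.consumer.auto-offset-reset=earliest

@Component
public class DemoKafkaListener {

    // The listener container behind internalKafkaListenerEndpointRegistry only
    // starts cleanly if the broker above is running and a group id is set.
    @KafkaListener(topics = "demo-topic", groupId = "demo-group")
    public void onMessage(String message) {
        System.out.println("Received: " + message);
    }
}
```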

UnknownHostException kafka

I am trying to set up a Kafka cluster (the first node in the cluster, actually). I have a single-node ZooKeeper cluster set up and am setting up Kafka on a separate node. Both are running CentOS 6.4 with IPv6, which is a bit of a PITA. I verified that the machines can talk to each other using netcat. When I start up Kafka, I get the following exception (which causes Kafka to shut down). EDIT: I got Kafka starting; I had to set the host.name property in the server.config file. I was able to create a test topic and send messages just fine from the Kafka server. However, I get the same error
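For reference, the property that edit refers to lives in config/server.properties; on 0.8.x brokers, host.name controls the interface the socket server binds to and advertised.host.name is what gets registered in ZooKeeper for clients and other brokers to resolve. A sketch with placeholder values:

```
# config/server.properties (placeholder values)
broker.id=0
# Interface/hostname the broker binds to; pinning it avoids surprises from
# IPv6 or reverse-DNS lookups of the machine's default hostname.
host.name=kafka1.example.com
# Hostname registered in ZooKeeper and handed out to clients and other brokers.
advertised.host.name=kafka1.example.com
port=9092
zookeeper.connect=zk1.example.com:2181
```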