apache-zookeeper

org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /f

自古美人都是妖i submitted on 2019-12-04 20:18:30
I am working with zookeeper 3.4.6 and I'm using an ACL in order to authenticate with the zookeeper server. I have my own implementation, ZooKeeperSupport, a helper for creating, removing, and verifying znodes. I am trying to create a znode using an ACL, but it fails, throwing InvalidACLException in this part of the code: zooKeeperSupport.create("/f", DATA_F); I'm basing this on the zookeeper-acl-sample project, but I want to use digest auth because it uses a user and password. BasicMockZookeeperSecurity public class BasicMockZookeeperSecurity { @Resource (name = "zooKeeperSupportFactory") protected
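
The usual cause of InvalidACLException on create() is passing an empty or null ACL list, or using Ids.CREATOR_ALL_ACL before the session has authenticated. Below is a minimal sketch with the plain ZooKeeper 3.4 API (it does not use the question's ZooKeeperSupport wrapper; the connection string and user:password are placeholders) showing two ways to create a znode protected by a digest ACL:

```java
import java.util.Collections;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;
import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;

public class DigestAclExample {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });

        // Authenticate this session with the digest scheme (user:password).
        zk.addAuthInfo("digest", "user:password".getBytes());

        // Option 1: let the server derive the ACL from the session's auth info.
        zk.create("/f", "data".getBytes(), Ids.CREATOR_ALL_ACL, CreateMode.PERSISTENT);

        // Option 2: build an explicit digest ACL (user:BASE64(SHA1(user:password))).
        String digest = DigestAuthenticationProvider.generateDigest("user:password");
        ACL acl = new ACL(ZooDefs.Perms.ALL, new Id("digest", digest));
        zk.create("/g", "data".getBytes(), Collections.singletonList(acl), CreateMode.PERSISTENT);

        zk.close();
    }
}
```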

Solr 5.3 & Zookeeper Security Authentication & Authorization

眉间皱痕 提交于 2019-12-04 19:59:55
There are a few topics and articles on Solr authentication & authorization, but I cannot get it to work (the way I like). I followed these tutorials / information sources: https://cwiki.apache.org/confluence/display/solr/Authentication+and+Authorization+Plugins and https://lucidworks.com/blog/2015/08/17/securing-solr-basic-auth-permission-rules/ Then I created this security.json and I confirmed it is active in Zookeeper: { "authentication":{ "class":"solr.BasicAuthPlugin", "credentials":{ "solr":"...", "admin":"...", "monitor":"...", "data_import":"..."}, "":{"v":8}}, "authorization":{ "class"
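
Once the BasicAuthPlugin is active, every request, including those from SolrJ, must carry credentials, otherwise requests covered by the authorization rules are rejected. A minimal sketch, assuming a SolrJ version that exposes SolrRequest#setBasicAuthCredentials; the collection URL and credentials are placeholders, not the values elided in the security.json above:

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SolrBasicAuthExample {
    public static void main(String[] args) throws Exception {
        // Collection URL and credentials are placeholders.
        SolrClient solr = new HttpSolrClient("http://localhost:8983/solr/mycollection");

        QueryRequest req = new QueryRequest(new SolrQuery("*:*"));
        req.setBasicAuthCredentials("solr", "password");   // a user defined in security.json

        QueryResponse rsp = req.process(solr);
        System.out.println("Found " + rsp.getResults().getNumFound() + " docs");

        solr.close();
    }
}
```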

When and why does Curator throw ConnectionLossException?

牧云@^-^@ submitted on 2019-12-04 18:27:22
I use Curator 1.2.4 and I keep getting ConnectionLossException when I monitor one znode for changes to its children. I implemented a watcher like this: public class CuratorChildWatcherImpl implements CuratorWatcher { private CuratorFramework client; public CuratorChildWatcherImpl(CuratorFramework client) { this.client = client; } @Override public void process(WatchedEvent event) throws Exception { List<String> children=client.getChildren().usingWatcher(this).forPath(event.getPath()); // Do other stuff with the children znodes. } } Every 11 seconds the code throws
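
ConnectionLossException generally means the ZooKeeper session dropped (or the retry policy was exhausted) before the getChildren() call completed, so the first things to check are the session/connection timeouts and the retry policy. The sketch below is written against the newer org.apache.curator API rather than Curator 1.2.4's com.netflix.curator packages, and the connection string, timeouts, and path are placeholders; it shows a client with an exponential-backoff retry policy and a ConnectionStateListener for reacting to reconnects:

```java
import java.util.List;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.state.ConnectionState;
import org.apache.curator.framework.state.ConnectionStateListener;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorConnectionExample {
    public static void main(String[] args) throws Exception {
        // Connection string, timeouts, and path are placeholders.
        CuratorFramework client = CuratorFrameworkFactory.builder()
                .connectString("localhost:2181")
                .sessionTimeoutMs(60000)
                .connectionTimeoutMs(15000)
                .retryPolicy(new ExponentialBackoffRetry(1000, 5))
                .build();

        client.getConnectionStateListenable().addListener(new ConnectionStateListener() {
            @Override
            public void stateChanged(CuratorFramework c, ConnectionState newState) {
                // Re-register watches here after the session is re-established.
                System.out.println("Connection state: " + newState);
            }
        });

        client.start();
        List<String> children = client.getChildren().forPath("/some/znode");
        System.out.println(children);
        client.close();
    }
}
```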

ZooKeeper keeps getting EndOfStreamException, causing a crash

久未见 submitted on 2019-12-04 17:06:29
Question: My ZooKeeper is controlling a few different queues for different jobs, holding the relevant job data in each node until the computer is ready to process it. If I stop the overall service, so that no jobs can be started, ZooKeeper runs just fine after a restart. However, some of these jobs seem to cause ZooKeeper to crash with the following message in the ZooKeeper log: WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@349] - caught end of stream exception EndOfStreamException:
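
The end-of-stream warning is logged by the server when a client drops its TCP connection without closing the session cleanly; by itself this warning is usually benign. As a hypothetical illustration (connection string and znode path are placeholders, not the question's setup), a job worker can avoid triggering the warning by always closing its handle:

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class QueueWorker {
    public static void main(String[] args) throws Exception {
        // Connection string and znode path are placeholders.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                // React to connection / node events here.
            }
        });
        try {
            byte[] jobData = zk.getData("/queues/job-1", false, null);
            // ... process the job ...
        } finally {
            // Closing the handle ends the session cleanly, so the server does not
            // log "caught end of stream exception" for this client.
            zk.close();
        }
    }
}
```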

Servicemix 4, DOSGi, and Zookeeper

孤街浪徒 submitted on 2019-12-04 15:48:27
This is cross-posted from the FuseSource forum and the ServiceMix forum. I can't get DOSGi working in FUSE. I'm trying to get CXF's DOSGi 1.1-SNAPSHOT with ZooKeeper discovery onto FUSE 4.1.0.2. I'm also using ZooKeeper 3.2.1. Everything works perfectly on Felix 2.0.0. I just follow the instructions on the DOSGi Discovery page and then install the Discovery Demo bundles. For DOSGi, I just use cxf-dosgi-ri-singlebundle-distribution-1.1-SNAPSHOT.jar for DSW and cxf-dosgi-ri-discovery-singlebundle-distribution-1.1-SNAPSHOT.jar for ZooKeeper discovery. Then when I start the sample bundles with

Zookeeper cluster on AWS

旧巷老猫 submitted on 2019-12-04 14:49:10
Question: I am trying to set up a ZooKeeper cluster on 3 AWS EC2 machines, but I keep getting the same error: 2016-10-19 16:30:23,177 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@382] - Cannot open channel to 3 at election address /xxx.31.34.102:3888 java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) at java.net.AbstractPlainSocketImpl
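
The SocketTimeoutException on port 3888 means this peer cannot reach the other node's election port, which on EC2 usually comes down to the security group not allowing inbound 2888/3888 between the instances, or the peer binding to an address it does not own. A commonly suggested workaround (an assumption here, not taken from the question) is to keep the other peers' private IPs in zoo.cfg but use 0.0.0.0 for the entry describing the local machine, roughly like this on the node whose myid is 2:

```
# conf/zoo.cfg on the machine whose myid is 2; IPs below are placeholders
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/zookeeper
clientPort=2181
server.1=<private-ip-of-node-1>:2888:3888
server.2=0.0.0.0:2888:3888
server.3=<private-ip-of-node-3>:2888:3888
```

The same pattern is repeated on the other two machines, each substituting 0.0.0.0 for its own entry, and ports 2888 and 3888 must be open between the instances in the security group.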

Zookeeper not starting, nohup error

与世无争的帅哥 submitted on 2019-12-04 11:49:34
I downloaded zookeeper-3.4.5.tar.gz and, after extracting it, wrote conf/zoo.cfg as tickTime=2000 dataDir=/var/zookeeper clientPort=2181 Then I tried to start ZooKeeper with bin/zkServer.sh start and it gives mohit@mohit:~/zookeeper-3.4.5/bin$ sudo sh zkServer.sh start [sudo] password for mohit: JMX enabled by default Using config: /home/mohit/zookeeper-3.4.5/bin/../conf/zoo.cfg Starting zookeeper ... STARTED But $ echo ruok | nc localhost 2181 is not giving any output. I checked zookeeper.out and it shows mohit@mohit:~/zookeeper-3.4.5/bin$ cat zookeeper.out nohup: failed to run command ‘java’: No
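
The message nohup: failed to run command ‘java’ means the user launching zkServer.sh (here root, via sudo) cannot find a java binary on its PATH. A sketch of the usual diagnosis and fix, assuming an OpenJDK install; the JDK path below is a placeholder:

```sh
# Check whether java is visible to the user that starts ZooKeeper
which java
sudo which java

# If not, point JAVA_HOME/PATH at your JDK (zkServer.sh picks up JAVA_HOME via zkEnv.sh)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # placeholder path
export PATH=$JAVA_HOME/bin:$PATH

bin/zkServer.sh start
echo ruok | nc localhost 2181    # should answer "imok"
```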

HBase on Hortonworks HDP Sandbox: Can't get master address from ZooKeeper

北城余情 submitted on 2019-12-04 11:24:01
I downloaded HDP 2.1 from Hortonworks for VirtualBox. I got the following error when running a simple command in the HBase shell: create 't1', {NAME => 'f1', VERSIONS => 5} Hortonworks “ERROR: Can't get master address from ZooKeeper; znode data == null” What do I need to do to get HBase working in this sandbox environment? In the Hortonworks sandbox you have to start HBase manually. Try running the following commands (as the root user): su hbase - -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start master; sleep 20" su hbase - -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf
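
Once the master daemon is up, you can confirm that its znode exists by connecting with zkCli.sh. On the HDP sandbox the znode parent is typically /hbase-unsecure rather than /hbase (an assumption; check zookeeper.znode.parent in hbase-site.xml), and the zkCli.sh location varies by HDP version:

```sh
# Path to zkCli.sh varies by HDP version; /hbase-unsecure is assumed here
/usr/lib/zookeeper/bin/zkCli.sh -server localhost:2181
ls /hbase-unsecure
get /hbase-unsecure/master
```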

Most efficient way to create a path in zookeeper where root elements of the path may or may not exist?

荒凉一梦 submitted on 2019-12-04 10:36:34
Question: Imagine a path "/root/child1/child2/child3". Imagine that in ZooKeeper part of this may already exist, say "/root/child1". There is no equivalent of "mkdir -p" in ZooKeeper; also, ZooKeeper.multi() will fail if any one operation fails, so a "make path" can't really be baked into a multi call. Additionally, some other client could be trying to make the same path... This is what I have come up with for creating a path. I wonder if it is even worth checking to see if a part exists or not, to
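
With no atomic "mkdir -p", the usual pattern is to walk the path one component at a time and swallow NodeExistsException, which also makes the call safe when another client is creating the same path concurrently (Curator users can instead use create().creatingParentsIfNeeded()). A minimal sketch with the plain ZooKeeper API, assuming an absolute path and an open ACL:

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class MkdirP {
    /** Creates every component of an absolute path, ignoring nodes that already exist. */
    public static void mkdirs(ZooKeeper zk, String path)
            throws KeeperException, InterruptedException {
        StringBuilder current = new StringBuilder();
        for (String part : path.substring(1).split("/")) {
            current.append('/').append(part);
            try {
                zk.create(current.toString(), new byte[0],
                          Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            } catch (KeeperException.NodeExistsException ignored) {
                // Another client (or an earlier run) already created this component.
            }
        }
    }
}
```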

List all kafka topics

白昼怎懂夜的黑 submitted on 2019-12-04 08:47:25
Question: I'm using Kafka 0.10 without ZooKeeper. I want to get the list of Kafka topics. This command is not working since we're not using ZooKeeper: bin/kafka-topics.sh --list --zookeeper localhost:2181. How can I get the same output without ZooKeeper? Answer 1: Kafka uses ZooKeeper, so you need to first start a ZooKeeper server if you don't already have one. If you do not want to install and run a separate ZooKeeper server, you can use the convenience script packaged with Kafka to get a quick-and-dirty single
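
As the answer notes, a Kafka 0.10 broker still requires ZooKeeper, but a client does not need to talk to ZooKeeper to list topics: it can ask the brokers directly through the consumer API. A minimal sketch using KafkaConsumer#listTopics() (the broker address is a placeholder):

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ListTopics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        // listTopics() fetches metadata from the brokers, not from ZooKeeper.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            Map<String, List<PartitionInfo>> topics = consumer.listTopics();
            topics.keySet().forEach(System.out::println);
        }
    }
}
```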