apache-zookeeper

Zookeeper/SASL Checksum failed

Submitted by 左心房为你撑大大i on 2019-12-12 19:19:18
Question: How do I fix the problem that generates this error: WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@1040] - Client failed to SASL authenticate: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)] javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)] at com.sun.security.sasl
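
A "Checksum failed" at the GSS-API level usually points to a key mismatch between the server's keytab and the KDC (e.g. a stale keytab after a key version bump), or a service principal whose hostname does not match what the client requested. As a hedged sketch of a typical server-side SASL setup for comparison, with placeholder paths, hostname, and realm:

```
# zoo.cfg
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

# jaas.conf (passed to the server via -Djava.security.auth.login.config=/path/to/jaas.conf)
Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/zookeeper/zookeeper.keytab"
    storeKey=true
    useTicketCache=false
    principal="zookeeper/host.example.com@EXAMPLE.COM";
};
```

If the principal, keytab, and KDC entry all agree and the error persists, re-exporting the keytab so its key version number matches the KDC is a common next step.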

SolrCloud is detecting non-existing nodes

Submitted by 前提是你 on 2019-12-12 04:46:36
Question: I am having an interesting situation with SolrCloud. Basically, I don't know why, but a Solr instance, which is not normally part of the cloud, is displayed on the SolrCloud page and is also visible under the live_nodes path in ZooKeeper. Here are the details of the situation: I have one Solr instance, running as a standalone application on a virtual machine, located on a remote machine. We will call it virtual1 from now on. This is the script for running it: java -server -XX:+UnlockExperimentalVMOptions -XX:

Zookeeper running on two nodes

Submitted by 半腔热情 on 2019-12-12 04:37:23
Question: I have a situation where ZooKeeper is configured for 2 nodes, but at times it starts running on both nodes simultaneously. Why might this be happening? Answer 1: To make an ensemble with master-slave architecture you need an odd number of ZooKeeper servers, i.e. {1, 3, 5, 7, ...}. An ensemble of 3 can handle one server crash; similarly, an ensemble of 5 can handle 2 crashes, and so on. When you try to create an ensemble of 2 servers, ZooKeeper actually cannot understand this as an ensemble
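
The quorum arithmetic behind this answer can be sketched in a few lines: a ZooKeeper ensemble needs a strict majority of its servers alive, so a 2-node ensemble tolerates zero failures and is no more available than a single node (class and method names here are illustrative, not from the source):

```java
public class QuorumMath {
    // Quorum size for an ensemble of n servers: a strict majority, n/2 + 1.
    static int quorum(int n) {
        return n / 2 + 1;
    }

    // Number of server failures tolerated while a majority survives.
    static int tolerated(int n) {
        return n - quorum(n);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 5; n++) {
            System.out.println(n + " servers: quorum=" + quorum(n)
                    + ", tolerates " + tolerated(n) + " failure(s)");
        }
    }
}
```

Note that 2 servers require both alive (quorum = 2, tolerates 0), which is why even-sized ensembles add machines without adding fault tolerance.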

Can't create Drill storage plugin for Oracle

Submitted by 浪尽此生 on 2019-12-12 04:17:29
Question: I want to create a storage plugin in Drill for Oracle JDBC. I copied ojdbc7.jar to the apache-drill-1.3.0/jars/3rdparty path and added drill.exec.sys.store.provider.local.path = "/mypath" to drill-override.conf. When I try to create a new storage plugin with the configuration below: { "type": "jdbc", "enabled": true, "driver": "oracle.jdbc.OracleDriver", "url":"jdbc:oracle:thin:user/pass@x.x.x.x:1521/orcll" } I get an "unable to create/update storage" error. I am using Red Hat 7 & Drill version 1.3. in
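
One thing worth checking in this kind of failure is the Oracle thin-driver URL syntax: `jdbc:oracle:thin:user/pass@host:port:SID` (colon) addresses a SID, while `jdbc:oracle:thin:user/pass@//host:port/service` (slash, with `//`) addresses a service name; mixing the two forms is a common cause of connection errors that Drill surfaces only as "unable to create/update storage". A hedged sketch of the plugin configuration using the SID form, with `orcl` as a placeholder SID:

```
{
  "type": "jdbc",
  "driver": "oracle.jdbc.OracleDriver",
  "url": "jdbc:oracle:thin:user/pass@x.x.x.x:1521:orcl",
  "enabled": true
}
```

Testing the same URL with a small standalone JDBC program outside Drill can confirm whether the URL or the plugin registration is at fault.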

Giraph ZooKeeper port problems

Submitted by [亡魂溺海] on 2019-12-11 20:55:25
Question: I am trying to run the SimpleShortestPathsVertex (aka SimpleShortestPathComputation) example described in the Giraph Quick Start. I am running this on a Hortonworks Sandbox instance (HDP 2.1) using VirtualBox, and I packaged giraph.jar using the hadoop_2.0.0 profile. When I try to run the example using hadoop jar giraph.jar org.apache.giraph.GiraphRunner org.apache.giraph.examples.SimpleShortestPathsVertex -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat -vip /user/hue

Solr cloud distributed search on collections

Submitted by 笑着哭i on 2019-12-11 19:53:28
Question: Currently I have a ZooKeeper instance controlling replication on 3 physical servers. It is the Solr-integrated ZooKeeper: 1 shard, 1 collection. I have a new requirement in which I will need a new static Solr instance (1 new collection, no replication), with the same schema as the previous collection. A copy of this instance will also be placed on the 3 physical servers mentioned above. A caveat is that I need to perform distributed searches across the 2 collections and have the results blended. Thanks to
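
For collections that share a schema, SolrCloud can blend results across them with the `collection` request parameter on an ordinary query, which routes the distributed search over every collection listed. A sketch of such a request, with host and collection names as placeholders:

```
http://server1:8983/solr/collection1/select?q=*:*&collection=collection1,collection2
```

Scores are computed per shard, so blended ranking across collections is only meaningful when the schemas and analysis chains genuinely match.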

How to create a Kafka topic from Java for KAFKA-2.1.1-1.2.1.1?

Submitted by Deadly on 2019-12-11 16:56:59
Question: I am working on a Java interface which would take user input of a topic name, replication factor, and partition count to create a Kafka topic in KAFKA-2.1.1-1.2.1.1. This is code I have used from other sources, but it seems to be for a previous version of Kafka: import kafka.admin.AdminOperationException; import org.I0Itec.zkclient.ZkClient; import org.I0Itec.zkclient.ZkConnection; import java.util.Properties; import java.util.concurrent.TimeUnit; import kafka.admin.AdminUtils; import kafka.utils
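
The `AdminUtils`/`ZkClient` route in that snippet talks to ZooKeeper directly and was the pre-0.11 approach; in Kafka 2.x the usual replacement is the broker-side `AdminClient` from the kafka-clients library. A hedged sketch, where the bootstrap address, topic name, and counts are placeholders to be filled from user input:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicCreator {
    // Build the topic description from user-supplied values.
    static NewTopic buildTopic(String name, int partitions, short replication) {
        return new NewTopic(name, partitions, replication);
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address; replace with your cluster's bootstrap servers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = buildTopic("my-topic", 3, (short) 2);
            // createTopics returns futures; all().get() blocks until the brokers respond.
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```

Unlike the ZooKeeper-based path, this talks only to the brokers, so it keeps working as clusters move away from direct ZooKeeper access.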

Testing “fail-over” on Kafka

Submitted by 只谈情不闲聊 on 2019-12-11 16:43:01
Question: Set-up 1: OS: Windows 10. ZooKeeper: 3 ZooKeeper instances downloaded from Apache (tested with v3.5.6 and v3.4.14): (1) apache-zookeeper-3.5.6-bin_1 (2) apache-zookeeper-3.5.6-bin_2 (copy of 1) (3) apache-zookeeper-3.5.6-bin_3 (copy of 1). zoo.cfg: tickTime=2000 initLimit=10 syncLimit=5 dataDir=/tmp/zookeeper_3.4.14_1 clientPort=2181 admin.serverPort=10081 server.1=localhost:2881:3881 server.2=localhost:2882:3882 server.3=localhost:2883:3883 4lw.commands.whitelist=* zoo.cfg: ... dataDir=/tmp
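
When three copies of the same distribution run on one machine, each instance needs its own dataDir, clientPort, and admin.serverPort, and each dataDir must contain a myid file whose content matches that instance's server.N entry. A sketch of how the three configs would differ, with placeholder directory names:

```
# instance 1: dataDir=/tmp/zookeeper_1  clientPort=2181  admin.serverPort=10081  myid file contains: 1
# instance 2: dataDir=/tmp/zookeeper_2  clientPort=2182  admin.serverPort=10082  myid file contains: 2
# instance 3: dataDir=/tmp/zookeeper_3  clientPort=2183  admin.serverPort=10083  myid file contains: 3
```

The server.N lines (quorum and election ports 2881-2883/3881-3883) stay identical across all three zoo.cfg files; only the per-instance values above change.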

Zookeeper EndOfStreamException happened in Heron Cluster

Submitted by 孤街醉人 on 2019-12-11 15:57:40
Question: There is a problem that bothers me. EndOfStreamException always happens in ZooKeeper after topologies are submitted. Although it does not affect the normal operation of the cluster, I still hope to solve the problem because it may affect other parts of Heron's functionality. The ZooKeeper version is 3.4.10 and it was deployed in standalone mode on one host of my cluster. The contents of zoo.cfg are as follows: tickTime=10000 initLimit=100 syncLimit=50 dataDir=/home/yitian/zookeeper/data dataLogDir=/home

PubSub Kafka Connect node connection end of file exception

Submitted by 半腔热情 on 2019-12-11 15:17:49
Question: While running the PubSub Kafka connector using the command: .\bin\windows\connect-standalone.bat .\etc\kafka\WorkerConfig.properties .\etc\kafka\configSink.properties .\etc\kafka\configSource.properties I get this error: Sending metadata request {topics=[test]} to node -1 could not scan file META-INF/MANIFEST.MF in url file:/C:/confluent-3.3.0/bin/../share/java/kafka-serde-tools/commons-compress-1.8.1.jar with scanner SubTypesScanner could not scan file META-INF/MANIFEST.MF in url file:/C: