apache-zookeeper

Marathon (Mesos) - Stuck in “Loading applications”

Submitted by 北城余情 on 2021-02-10 20:40:44

Question: I am building a Mesos cluster from scratch (using Vagrant, which is not relevant for this issue).
OS: Ubuntu 16.04 (xenial)
Setup:
  Master -> runs ZooKeeper, Mesos-master, Marathon, and Chronos
  Slave  -> runs Mesos-slave
This is my provisioning script for the master node: https://github.com/zeitgeist2018/infrastructure/blob/fix-marathon/provision/scripts/install-master.sh. I have managed to register the slave with Mesos, install the Marathon and Chronos frameworks, and run scheduled jobs in Chronos
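The excerpt cuts off before the failure itself, but a Marathon UI stuck at "Loading applications" usually means Marathon cannot reach the Mesos master through ZooKeeper. A minimal sketch of the flag files the Debian/Ubuntu Marathon package reads at startup (the layout and the <master-ip> placeholder are assumptions, not taken from the provisioning script):

    # /etc/marathon/conf/master
    zk://<master-ip>:2181/mesos
    # /etc/marathon/conf/zk
    zk://<master-ip>:2181/marathon

Each file in /etc/marathon/conf is named after a Marathon command-line flag and holds its value; both URLs must point at the same ZooKeeper the Mesos master registered with.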

Kafka on Multiple Servers

Submitted by 好久不见. on 2021-02-08 11:41:48

Question: I followed this link to install Kafka + Zookeeper. It all works well, yet I am setting up Kafka + Zookeeper on 2 servers. I have set up kafka/config/server.properties to have:
  Server 1: broker.id = 0
  Server 1: zookeeper.connect = localhost:2181,99.99.99.91:2181
  Server 2: broker.id = 1
  Server 2: zookeeper.connect = localhost:2181,99.99.99.92:2181
I am wondering the following: When I publish a topic, does it go to both instances, or just the server it's loaded on? In order to use multiple
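For the two brokers to form one cluster, both must point zookeeper.connect at the same ZooKeeper ensemble; listing a different single node on each server, as above, effectively creates two separate clusters. A minimal sketch of the corrected properties, assuming a two-node ensemble on the IPs from the question (note that a two-node ZooKeeper ensemble tolerates no failures; production setups use an odd count such as three):

    # Server 1: kafka/config/server.properties
    broker.id=0
    zookeeper.connect=99.99.99.91:2181,99.99.99.92:2181

    # Server 2: kafka/config/server.properties
    broker.id=1
    zookeeper.connect=99.99.99.91:2181,99.99.99.92:2181

With both brokers registered in the same cluster, a topic is stored on both servers only if it is created with replication-factor 2; with replication-factor 1, each partition lives on a single broker.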

Can I recursively create a path in Zookeeper?

Submitted by 你说的曾经没有我的故事 on 2021-02-07 18:20:24

Question: I'm pulling ZooKeeper into a project for some concurrency management, and the first thing I tried was something that, to me, was quite obvious (using the zkpython binding):
  zh = zookeeper.init('localhost:2181')
  zookeeper.create(zh, '/path/to/a/node', '', [ZOO_OPEN_ACL_UNSAFE])
And I got back a NoNodeException for my trouble. After reflecting on this and reviewing the docs (such as they are), I've been unable to find a way to do the equivalent of a mkdir -p where ZooKeeper will create the
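ZooKeeper's create is not recursive: it raises NoNodeException when the parent znode does not exist, so each intermediate node has to be created first. A minimal sketch with zkpython, assuming a connected synchronous session (the ensure_path helper name is mine, not part of the binding):

    import zookeeper

    # world-readable/writable ACL, as commonly defined for zkpython
    ZOO_OPEN_ACL_UNSAFE = {"perms": 0x1f, "scheme": "world", "id": "anyone"}

    def ensure_path(zh, path):
        """Create every missing node along `path`, like `mkdir -p`."""
        current = ''
        for segment in (s for s in path.split('/') if s):
            current += '/' + segment
            try:
                zookeeper.create(zh, current, '', [ZOO_OPEN_ACL_UNSAFE], 0)
            except zookeeper.NodeExistsException:
                pass  # another client (or an earlier run) already created it

    zh = zookeeper.init('localhost:2181')
    ensure_path(zh, '/path/to/a/node')

Catching NodeExistsException instead of checking exists() first keeps the helper safe under concurrent callers.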

zookeeper.log file not created inside logs directory

Submitted by 元气小坏坏 on 2021-02-07 13:40:23

Question: I'm unable to create zookeeper.log under the directory I specified in log4j.properties. I'm not sure what's going wrong; can someone please point me at what I should be looking into to solve this issue? Please find the log4j.properties file below:
  zookeeper.root.logger= INFO, ROLLINGFILE
  zookeeper.console.threshold= INFO
  zookeeper.log.dir= /usr/local/zookeeper-3.4.5-cdh5.3.1/logs
  zookeeper.log.file= zookeeper.log
  zookeeper.log.threshold= INFO
  zookeeper.log.maxfilesize= 256MB
  zookeeper
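The properties above look right, which suggests (an assumption, since the excerpt is truncated) that they are being overridden at startup: in ZooKeeper 3.4.x, zkServer.sh passes -Dzookeeper.log.dir and -Dzookeeper.root.logger from the ZOO_LOG_DIR and ZOO_LOG4J_PROP environment variables, which default to "." and "INFO,CONSOLE" in zkEnv.sh, so the ROLLINGFILE appender never activates. One fix is to export both before starting the server:

    export ZOO_LOG_DIR=/usr/local/zookeeper-3.4.5-cdh5.3.1/logs
    export ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
    bin/zkServer.sh start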

Kafka not starting up if zookeeper.set.acl is set to true

Submitted by 佐手、 on 2021-02-07 10:50:36

Question: I have a setup of kerberized Zookeeper and kerberized Kafka which works fine with zookeeper.set.acl set to false. When I try to start Kafka with the parameter set to true, I get this in the zookeeper logs:
  Nov 12 13:36:26 <zk host> docker:zookeeper_corelinux_<zk host>[1195]: [2019-11-12 13:36:26,625] INFO Client attempting to establish new session at /<kafka ip>:54272 (org.apache.zookeeper.server.ZooKeeperServer)
  Nov 12 13:36:26 <zk host> docker:zookeeper_corelinux_<zk host>[1195]: [2019-11
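With zookeeper.set.acl=true, the broker writes its znodes with ACLs tied to its own identity, so it must authenticate to ZooKeeper over SASL; if no SASL client context is available, startup fails even though the plain connection succeeds. A minimal sketch of the JAAS Client section for a Kerberos setup (the file path, keytab, and principal are placeholders, not taken from the question):

    // e.g. /etc/kafka/kafka_server_jaas.conf, passed to the broker via
    // -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
    Client {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        storeKey=true
        keyTab="/etc/security/keytabs/kafka.service.keytab"
        principal="kafka/broker1.example.com@EXAMPLE.COM";
    };

The section must be named Client because that is the login context ZooKeeper's client library looks up by default.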

Could not connect to ZooKeeper using Solr in localhost

Submitted by 帅比萌擦擦* on 2021-01-28 11:45:27

Question: I'm using Solr 6 and I'm trying to populate it. Here's the main Scala I put in place:
  object testChildDocToSolr {
    def main(args: Array[String]): Unit = {
      setProperty("hadoop.home.dir", "c:\\winutils\\")
      val sparkSession = SparkSession.builder()
        .appName("spark-solr-tester")
        .master("local")
        .config("spark.ui.enabled", "false")
        .config("spark.default.parallelism", "1")
        .getOrCreate()
      val sc = sparkSession.sparkContext
      val collectionName = "testChildDocument"
      val testDf = sparkSession.read
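The excerpt ends before the Solr connection options, but with spark-solr a common cause of "Could not connect to ZooKeeper" on localhost is pointing the zkhost option at Solr's HTTP port rather than at ZooKeeper itself. When Solr runs in cloud mode with its embedded ZooKeeper, that ZooKeeper listens on the Solr port plus 1000, so assuming the default port the option should be:

    # spark-solr write option (assumed defaults)
    zkhost = localhost:9983   # embedded ZooKeeper for Solr on 8983 started with -c
    # not localhost:8983, which is Solr's own HTTP port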

Kafka does not start (blank output)

Submitted by 故事扮演 on 2021-01-28 07:43:51

Question: I'm working to install Kafka and Zookeeper. I have already started Zookeeper and it is currently running. I set everything up as in https://dzone.com/articles/running-apache-kafka-on-windows-os. When I finally run in my cmd:
  .\bin\windows\kafka-server-start.bat .\config\server.properties
there is no output; it just shows the next command prompt. Please help me out.

Answer 1: Finally I find someone with the same issue I had! Zookeeper running, but kafka not doing anything at all
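The answer above is truncated, but before digging into Kafka itself it is worth confirming that ZooKeeper is actually reachable from the same machine. A minimal sketch using ZooKeeper's ruok four-letter command (host and port are the defaults; on ZooKeeper 3.5+ the command must be whitelisted via 4lw.commands.whitelist):

    import socket

    def zk_is_ok(host="localhost", port=2181, timeout=5.0):
        """Send ZooKeeper's 'ruok' probe; a healthy server answers 'imok'."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(b"ruok")
                return sock.recv(4) == b"imok"
        except OSError:
            return False

    if __name__ == "__main__":
        print("ZooKeeper reachable:", zk_is_ok())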

Expiration of zookeeper persistent node

Submitted by 纵饮孤独 on 2021-01-28 06:53:23

Question: I am not able to find information regarding automatic expiration of persistent nodes in ZooKeeper. Does a persistent node only expire when the ZooKeeper server is shut down, or can it expire before that? If yes, what are the possible reasons? Here I am asking about automatic expiration, not about manually deleting the node.

Answer 1: Persistent ZK nodes are saved on disk, preserved across service restarts, and deleted only by request. Ephemeral nodes are deleted automatically on client disconnection.
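The distinction is set at creation time through the flags argument. A minimal sketch with the zkpython binding, as in the recursive-create question earlier in this section (connection string and paths are illustrative):

    import zookeeper

    ZOO_OPEN_ACL_UNSAFE = {"perms": 0x1f, "scheme": "world", "id": "anyone"}

    zh = zookeeper.init('localhost:2181')
    # Persistent node: survives client disconnects and server restarts,
    # and is removed only by an explicit delete.
    zookeeper.create(zh, '/durable', '', [ZOO_OPEN_ACL_UNSAFE], 0)
    # Ephemeral node: tied to this session; removed automatically when
    # the session closes or times out.
    zookeeper.create(zh, '/transient', '', [ZOO_OPEN_ACL_UNSAFE], zookeeper.EPHEMERAL)
    zookeeper.close(zh)  # '/transient' disappears; '/durable' remains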