apache-zookeeper

ClassNotFoundException for Zookeeper while building Storm

余生颓废 submitted on 2019-12-02 00:53:14
I'm new to Java and Storm so please forgive any obvious mistakes. I'm trying to run Storm with a Flume connector, but it crashes with the following error:
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:297)
at java.lang.Thread.run(Thread.java:744)
Caused

Kafka topic no longer exists after restart

☆樱花仙子☆ submitted on 2019-12-02 00:27:47
I created a topic in my local Kafka cluster with 3 servers/brokers by running the following from my Kafka installation directory:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 2 --topic user-activity-tracking-pipeline
Everything worked fine, as I was able to produce and consume messages from my topic. After restarting my machine, I started the bundled ZooKeeper from the Kafka installation directory by running the following in the terminal:
bin/zookeeper-server-start.sh config/zookeeper.properties
Started the 3 servers belonging to the cluster by executing the
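
A common culprit here is that the bundled configs default ZooKeeper's dataDir (/tmp/zookeeper) and Kafka's log.dirs (/tmp/kafka-logs) to /tmp, which many systems clear on reboot. As a quick check, the sketch below lists the topic znodes that survived the restart; the kazoo client library and the localhost:2181 ensemble are assumptions, not part of the question.

# Minimal sketch: see whether topic metadata survived in ZooKeeper.
from kazoo.client import KazooClient

zk = KazooClient(hosts="localhost:2181")
zk.start()
try:
    if zk.exists("/brokers/topics"):
        print("Topics known to ZooKeeper:", zk.get_children("/brokers/topics"))
    else:
        print("/brokers/topics is missing - metadata did not survive the restart")
finally:
    zk.stop()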

pySpark Kafka Direct Streaming update Zookeeper / Kafka Offset

社会主义新天地 submitted on 2019-12-02 00:22:03
Currently I'm working with Kafka / ZooKeeper and pySpark (1.6.0). I have successfully created a Kafka consumer which uses KafkaUtils.createDirectStream(). There is no problem with the streaming itself, but I noticed that my Kafka topics are not updated to the current offset after I have consumed some messages. Since we need the topics updated to have monitoring in place here, this is somewhat odd. In the documentation of Spark I found this comment:
offsetRanges = []

def storeOffsetRanges(rdd):
    global offsetRanges
    offsetRanges = rdd.offsetRanges()
    return rdd

def printOffsetRanges
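
For context: the direct stream deliberately bypasses ZooKeeper, so nothing updates the old consumer offset paths that ZooKeeper-based monitoring reads; you have to publish the collected offsetRanges yourself. A minimal sketch of that, assuming the kazoo library, a localhost ensemble, and a made-up consumer group name:

# Hedged sketch, not part of the Spark API: write each range's untilOffset
# to the legacy consumer path that ZooKeeper-based monitors read.
from kazoo.client import KazooClient

def save_offsets_to_zk(offset_ranges, group="my-group"):
    zk = KazooClient(hosts="localhost:2181")
    zk.start()
    try:
        for o in offset_ranges:
            path = "/consumers/%s/offsets/%s/%d" % (group, o.topic, o.partition)
            zk.ensure_path(path)
            zk.set(path, str(o.untilOffset).encode("utf-8"))
    finally:
        zk.stop()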

Best way to copy data across 2 ZooKeeper clusters?

依然范特西╮ submitted on 2019-12-01 21:16:46
Question: I have some ZooKeeper clusters running, and my goal is to combine the data from these zk clusters onto one single ZooKeeper cluster, so copying the whole data and log dir of one zk cluster to another zk cluster is not a feasible approach for me. Also, I might need to rebase an entire dir's path; for example, I might need to copy the data for /service1 on zk cluster 1 onto /c1/service1 on zk cluster 2. Currently, I am doing this work by writing some zk client code to read the entire dir tree with data
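
A minimal sketch of that read-and-recreate approach, with the /service1 to /c1/service1 rebase taken from the question; the kazoo client library and the host strings are illustrative assumptions:

# Walk the source tree and recreate each znode under a new base path
# on the target cluster.
from kazoo.client import KazooClient

def copy_tree(src_zk, dst_zk, src_path, dst_path):
    data, _stat = src_zk.get(src_path)
    dst_zk.ensure_path(dst_path)
    dst_zk.set(dst_path, data)
    for child in src_zk.get_children(src_path):
        copy_tree(src_zk, dst_zk,
                  src_path.rstrip("/") + "/" + child,
                  dst_path.rstrip("/") + "/" + child)

src = KazooClient(hosts="cluster1:2181")
dst = KazooClient(hosts="cluster2:2181")
src.start()
dst.start()
copy_tree(src, dst, "/service1", "/c1/service1")
src.stop()
dst.stop()

Note that this sketch ignores ACLs and ephemeral nodes, and is not atomic: znodes written while the copy runs can be missed.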

Resolving the Mesos Leading Master

别来无恙 submitted on 2019-12-01 19:42:34
Question: We're using Mesos to run jobs on a cluster. We're using haproxy to point, e.g., mesos.seanmcl.com to a Mesos master. If that master happens not to be the leader, the UI will redirect the browser, after a delay, to the leader so you can see the running jobs. For various reasons (UI speed, avoiding ports blocked by a firewall), I'd really like to programmatically discover the host with the leader. I cannot figure out how to do this. I grepped around in the ZooKeeper files for Mesos, but only
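
One approach that avoids the UI entirely: Mesos masters register ephemeral sequential znodes under the ZooKeeper path they were started with, and the leader is the contender with the lowest sequence number. A hedged sketch assuming the kazoo library, a /mesos znode path, illustrative ensemble hosts, and a Mesos version new enough to publish json.info_* nodes:

# Find the leading master: lowest-sequence json.info_* znode under /mesos.
import json
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()
try:
    nodes = [n for n in zk.get_children("/mesos") if n.startswith("json.info_")]
    leader = min(nodes, key=lambda n: int(n.rsplit("_", 1)[1]))
    info = json.loads(zk.get("/mesos/" + leader)[0].decode("utf-8"))
    print(info["address"]["hostname"], info["address"]["port"])
finally:
    zk.stop()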

Pull from a Cassandra database whenever there are new rows or updates?

廉价感情. submitted on 2019-12-01 19:07:37
I am working on a system in which I need to store Avro schemas in a Cassandra database. So in Cassandra we will be storing something like this:

SchemaId  AvroSchema
1         some schema
2         another schema

Now suppose that as soon as I insert another row into the above table, it looks like this:

SchemaId  AvroSchema
1         some schema
2         another schema
3         another new schema

As soon as I insert a new row into the above table, I need to tell my Java program to go and pull the new schema id and the corresponding schema. What is the right way to solve this kind of problem? I know one way is to have
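
Classic Cassandra has no built-in change notifications (CDC arrived only in later versions), so one simple fallback is to poll and diff against the ids already seen. A sketch assuming the cassandra-driver package and hypothetical keyspace/table names my_keyspace.schemas; the 10-second interval is likewise an assumption:

# Poll the schema table and report rows not seen before.
import time
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("my_keyspace")
seen = set()

while True:
    for row in session.execute("SELECT schemaid, avroschema FROM schemas"):
        if row.schemaid not in seen:
            seen.add(row.schemaid)
            print("new schema", row.schemaid, row.avroschema)
    time.sleep(10)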

Error HBase-ZooKeeper: Too many connections

可紊 submitted on 2019-12-01 18:03:58
Question: I am using an HBase-Hadoop combination for my application, along with DataNucleus as the ORM. When I try to access HBase from several threads at the same time, it throws exceptions such as:
Exception in thread "Thread-26" javax.jdo.JDODataStoreException
org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK
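
Before changing anything, it can help to confirm the diagnosis: ZooKeeper's four-letter-word commands report the open connections per client, and on ensembles of that era they are enabled by default (newer releases may require whitelisting them via 4lw.commands.whitelist). A stdlib-only sketch:

# Send the 'cons' four-letter command to a ZooKeeper server and print
# the response: one line per open client connection.
import socket

def four_letter(cmd, host="localhost", port=2181):
    s = socket.create_connection((host, port))
    try:
        s.sendall(cmd)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        return b"".join(chunks).decode("utf-8", "replace")
    finally:
        s.close()

print(four_letter(b"cons"))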

ZooKeeper issue when setting up Kafka

房东的猫 submitted on 2019-12-01 17:58:23
To install Kafka, I downloaded the Kafka tar archive. To start the server I tried this command:
bin/zookeeper-server-start.sh config/zookeeper.properties
The following error occurred on entering the above command:
INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2014-08-21 11:53:55,748] FATAL Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing config/zookeeper.properties
at org.apache.zookeeper.server.quorum
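
This ConfigException usually means the relative path does not resolve from the current working directory, or that a required key such as dataDir or clientPort is absent. A small sanity check, run from the Kafka installation directory; the path mirrors the command in the question, everything else is illustrative:

# Verify the properties file resolves from here and list the key settings.
import os

path = "config/zookeeper.properties"
print("exists:", os.path.exists(path), "cwd:", os.getcwd())

if os.path.exists(path):
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                props[key.strip()] = value.strip()
    for required in ("dataDir", "clientPort"):
        print(required, "=", props.get(required, "<missing>"))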

No JAAS configuration section named 'Server' was found in '/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf'

﹥>﹥吖頭↗ submitted on 2019-12-01 17:37:32
Question: When I run the ZooKeeper from the package in kafka_2.12-2.3.0 I am getting the following error:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf"
$ ./bin/zookeeper-server-start.sh config/zookeeper.properties
and the zookeeper_jaas.conf is:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};
and the zookeeper.properties file is server
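
The message itself points at the mismatch: the ZooKeeper server looks for a login context named Server, while the file shown only declares KafkaServer. A quick sketch that lists the context names a JAAS file declares, using the path from the question's KAFKA_OPTS:

# List the login-context names declared in a JAAS config file.
import re

with open("/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf") as f:
    sections = re.findall(r"^\s*([A-Za-z_][\w.]*)\s*\{", f.read(), re.MULTILINE)

print("Declared sections:", sections)   # e.g. ['KafkaServer']
print("'Server' present:", "Server" in sections)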

Twitter Storm example running in local mode cannot delete file

馋奶兔 submitted on 2019-12-01 17:25:59
I am running the storm-starter project ( https://github.com/nathanmarz/storm-starter ) and it throws the following error after running for a little while:
23135 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
java.io.IOException: Unable to delete file: C:\Users\[user directory]\AppData\Local\Temp\a0894222-6a8a-4f80-8655-3ad6a0c10021\version-2\log.1
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:1390)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1044)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:977