apache-zookeeper

Twitter storm example running in local mode cannot delete file

夙愿已清 submitted on 2019-12-01 17:08:43
Question: I am running the storm-starter project (https://github.com/nathanmarz/storm-starter) and it throws the following error after running for a little while:

23135 [main] ERROR org.apache.zookeeper.server.NIOServerCnxn - Thread Thread[main,5,main] died
java.io.IOException: Unable to delete file: C:\Users\[user directory]\AppData\Local\Temp\a0894222-6a8a-4f80-8655-3ad6a0c10021\version-2\log.1
    at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:1390)
    at org.apache.commons.io.FileUtils
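On Windows this deletion typically fails because another thread still holds the transaction log open, so a single forceDelete throws. A minimal sketch of a retry-delete helper that rides out such transient locks (the helper and its parameters are my own illustration, not part of storm or commons-io):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RetryDelete {
    // Try to delete a file several times, backing off between attempts,
    // to survive a transient lock (e.g. a log file not yet closed).
    static boolean deleteWithRetry(Path p, int attempts, long waitMs)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            try {
                Files.deleteIfExists(p);
                return true;
            } catch (IOException e) {
                Thread.sleep(waitMs); // still held open; back off and retry
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("zk-log-", ".tmp");
        System.out.println(deleteWithRetry(tmp, 5, 100));
    }
}
```

The same idea (retry or defer cleanup until the ZooKeeper server has fully shut down) is what workarounds for this storm-starter error amount to.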

org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry

空扰寡人 submitted on 2019-12-01 14:24:06
I am developing a Spring Boot + Apache Kafka + Apache ZooKeeper example. I've installed and set up Apache ZooKeeper and Apache Kafka on my local Windows machine. I followed the tutorial at https://www.tutorialspoint.com/spring_boot/spring_boot_apache_kafka.htm and executed the code as is (setup guide: https://medium.com/@shaaslam/installing-apache-kafka-on-windows-495f6f2fd3c8). Error:

org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.errors.TimeoutException:
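A TimeoutException from the listener endpoint registry usually means the application cannot reach the Kafka broker at startup: either the broker (default port 9092) or ZooKeeper (default port 2181) is not actually running, or the configured bootstrap address is wrong. A minimal application.properties sketch, assuming a single local broker (the group id is a placeholder):

```properties
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=group-id
spring.kafka.consumer.auto-offset-reset=earliest
```

If these point at the right broker and the error persists, verify with a plain console consumer that the broker is reachable before debugging the Spring side.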

Integrating Hbase with Hive: Register Hbase table

自闭症网瘾萝莉.ら submitted on 2019-12-01 13:19:44
I am using Hortonworks Sandbox 2.0, which contains the following component versions: Apache Hadoop 2.2.0, Apache Hive 0.12.0, Apache HBase 0.96.0, Apache ZooKeeper 3.4.5 ...and I am trying to register my HBase table in Hive using the following query: CREATE TABLE IF NOT EXISTS Document_Table_Hive (key STRING, author STRING, category STRING) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,metadata:author,categories:category') TBLPROPERTIES ('hbase.table.name' = 'Document'); This does

Consuming from Kafka failed Iterator is in failed state

假如想象 submitted on 2019-12-01 09:35:45
I am getting an exception while consuming messages from Kafka: org.springframework.messaging.MessagingException: Consuming from Kafka failed; nested exception is java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Iterator is in failed state I have one consumer in the application context with one outbound adapter. Consumer configuration in the application context: <int-kafka:consumer-context id="consumerContext" consumer-timeout="4000" zookeeper-connect="zookeeperConnect"> <int-kafka:consumer-configurations> <int-kafka:consumer-configuration group-id="GR1" value-decoder=
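"Iterator is in failed state" typically surfaces when the underlying Kafka consumer stream breaks, for example because a decoder throws or the stream is read after the consumer timeout expires. A hedged sketch of a complete consumer-context, using the element and attribute names visible in the question's own snippet (the decoder bean ids, topic name, and stream count are hypothetical, and the exact schema depends on the spring-integration-kafka version):

```xml
<int-kafka:consumer-context id="consumerContext"
        consumer-timeout="4000"
        zookeeper-connect="zookeeperConnect">
    <int-kafka:consumer-configurations>
        <int-kafka:consumer-configuration group-id="GR1"
                value-decoder="valueDecoder"
                key-decoder="keyDecoder"
                max-messages="5">
            <int-kafka:topic id="myTopic" streams="4"/>
        </int-kafka:consumer-configuration>
    </int-kafka:consumer-configurations>
</int-kafka:consumer-context>
```

Checking that the configured decoders match the bytes actually on the topic is a common first step for this error.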

How to escape forward slash in java so that to use it in path

自作多情 submitted on 2019-12-01 08:31:09
I am trying to escape the forward slash in a String that will be used in a path, in Java. For example, the string is "Test/World". I want to use this string in a path, while making sure that "Test/World" comes through as-is in the path. Sorry if this is a duplicate, but I couldn't find a satisfactory solution. My purpose is to use the string to create nodes in ZooKeeper. For example, if I use this string to create a node in ZooKeeper, I should get "Test/World" as a single node, not two separate ones. ZooKeeper treats "/" as the path separator, which in some cases I don't want. /zookeeper

Zookeeper ensemble not coming up

回眸只為那壹抹淺笑 submitted on 2019-12-01 06:27:49
I am trying to configure an ensemble of 3 nodes following the documentation. All of them run Ubuntu Linux. On all three nodes the configuration file, zoo.cfg under $ZOOKEEPER_HOME/conf, looks like this:

tickTime=2000
dataDir=/home/zkuser/zookeeper_data
clientPort=2181
initLimit=5
syncLimit=2
server.1=ip.of.zk1:2888:3888
server.2=ip.of.zk2:2888:3888
server.3=ip.of.zk3:2888:3888

I've also placed the respective "myid" files under the /home/zkuser/zookeeper_data/ directory. Each myid file contains the server's id: 1 on node ip.of.zk1, and so on. When I start the zk server using bin/zkServer.sh
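A frequent cause of an ensemble failing to form is a mismatch between the ids in the server.N lines and the contents of each host's myid file (or a myid placed in the wrong dataDir). A small self-contained sanity check that extracts the declared ids from a zoo.cfg, sketched with the question's own configuration inlined (the parsing helper is mine, not a ZooKeeper tool):

```java
import java.util.*;
import java.util.regex.*;

public class EnsembleCheck {
    // Collect the ids declared as "server.N=host:peerPort:electionPort".
    static Set<Integer> declaredIds(List<String> cfgLines) {
        Set<Integer> ids = new TreeSet<>();
        Pattern p = Pattern.compile("^server\\.(\\d+)=");
        for (String line : cfgLines) {
            Matcher m = p.matcher(line.trim());
            if (m.find()) ids.add(Integer.parseInt(m.group(1)));
        }
        return ids;
    }

    public static void main(String[] args) {
        List<String> cfg = Arrays.asList(
            "tickTime=2000",
            "dataDir=/home/zkuser/zookeeper_data",
            "clientPort=2181",
            "server.1=ip.of.zk1:2888:3888",
            "server.2=ip.of.zk2:2888:3888",
            "server.3=ip.of.zk3:2888:3888");
        Set<Integer> ids = declaredIds(cfg);
        System.out.println(ids);
        // The myid on each host must hold exactly one of these ids,
        // unique per host, in the dataDir named above.
        System.out.println(ids.contains(1));
    }
}
```

Beyond the ids, also confirm ports 2888 and 3888 are open between the nodes; firewalls blocking the election port are the other common culprit.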

Is a ZooKeeper snapshot file enough to restore state?

耗尽温柔 submitted on 2019-12-01 06:27:26
I am learning about ZooKeeper and looking at options to back up data stored in ZooKeeper. ZooKeeper writes two data files, snapshot and transaction log. It is often mentioned that snapshots are "fuzzy" and need a transaction log to be replayed over them to get an up to date state. In the case of Observers, no transaction log is persisted to disk. If I were to take the snapshot written by an observer (or leader/follower without the transaction log), and placed it into a new standalone ZooKeeper, would ZooKeeper's state be guaranteed to be the same as it was when the snapshot was written to disk
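The reason the transaction log normally matters is that a fuzzy snapshot is written while transactions are still being applied, so restoring consistent state means taking the snapshot and replaying every logged transaction with a zxid above the snapshot's recorded last zxid; replaying a transaction the snapshot already absorbed is harmless because each one sets an absolute value. A toy model of that replay (the data structures are mine, not ZooKeeper's actual file formats):

```java
import java.util.*;

public class FuzzyReplay {
    static final class Txn {
        final long zxid; final String path; final String value;
        Txn(long zxid, String path, String value) {
            this.zxid = zxid; this.path = path; this.value = value;
        }
    }

    // Restore = snapshot state + replay of every txn whose zxid is above
    // the last zxid the snapshot is known to cover. Re-applying an
    // already-absorbed txn is idempotent, which is why this is safe.
    static Map<String, String> restore(Map<String, String> snapshot,
                                       long lastSnapshotZxid, List<Txn> log) {
        Map<String, String> state = new TreeMap<>(snapshot);
        for (Txn t : log) {
            if (t.zxid > lastSnapshotZxid) state.put(t.path, t.value);
        }
        return state;
    }

    public static void main(String[] args) {
        List<Txn> log = Arrays.asList(
            new Txn(1, "/a", "v1"),
            new Txn(2, "/b", "v2"),
            new Txn(3, "/a", "v3"));
        // Fuzzy snapshot taken while txn 2 was in flight: it happened to
        // capture /b=v2 already, but its recorded last zxid is still 1.
        Map<String, String> snapshot = new TreeMap<>();
        snapshot.put("/a", "v1");
        snapshot.put("/b", "v2");
        System.out.println(restore(snapshot, 1, log));
    }
}
```

Without the log there is nothing to replay, so a bare snapshot from a node that skipped logging may be missing transactions that arrived while it was being written.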

I need my Spring Boot WebApplication to restart in JUnit

╄→尐↘猪︶ㄣ submitted on 2019-12-01 05:35:38
Without going into excruciating detail, I am having an issue when I run my JUnit tests all at once. If I run them class by class, everything is great! Otherwise I have trouble, because I cannot restart my WebApplication between JUnit test classes. This leaves ZooKeeper server clients in my WebApplication that hang around after I shut down and restart the ZooKeeper server between classes. Those ZooKeeper clients can take a while to resync with the server, and this causes unpredictable behavior... Is there a way to have my SpringBootServletInitializer class

Hbase managed zookeeper suddenly trying to connect to localhost instead of zookeeper quorum

时光怂恿深爱的人放手 submitted on 2019-12-01 05:25:04
I was running some tests with table mappers and reducers on large-scale problems. After a certain point my reducers started failing when the job was 80% done. From what I can tell from the syslogs, the problem is that one of my ZooKeeper clients is attempting to connect to localhost as opposed to the other ZooKeepers in the quorum. Oddly, it seems to do just fine connecting to the other nodes while mapping is going on; it's the reduce phase that has the problem. Here are selected portions of the syslog which might be relevant to figuring out what's going on: 2014-06-27 09:44:01,599 INFO [main]
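When an HBase client falls back to localhost:2181, the usual cause is that hbase.zookeeper.quorum is missing from the configuration on the classpath of the failing tasks, so the client uses its default. A minimal hbase-site.xml fragment that pins the quorum explicitly (the host names here are placeholders for illustration):

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```

For MapReduce jobs, the same properties can also be set on the job's Configuration object before submission, which guarantees the reduce tasks see them even if their task classpath lacks hbase-site.xml.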