apache-storm

Monitoring Kafka Spout with the KafkaOffsetMonitor tool

别来无恙 submitted on 2019-12-12 10:12:04
Question: I am using the KafkaSpout that came with the storm-0.9.2 distribution for my project, and I want to monitor its throughput. I tried using KafkaOffsetMonitor, but it does not show any consumers reading from my topic. I suspect this is because I have specified a root path in Zookeeper for the spout to store the consumer offsets. How will KafkaOffsetMonitor know where to look for data about my KafkaSpout instance? Can someone explain exactly where Zookeeper stores
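The preview cuts off, but the key detail is the offset path. Below is a minimal sketch of the storm-kafka SpoutConfig (host, topic, zkRoot, and id values are hypothetical, not from the question) showing where the spout writes its offsets; KafkaOffsetMonitor, by contrast, looks at the standard consumer-group paths (under /consumers) by default, which would explain why the spout never shows up:

```java
import backtype.storm.spout.SchemeAsMultiScheme;
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

public class OffsetPathExample {
    public static KafkaSpout buildSpout() {
        BrokerHosts hosts = new ZkHosts("zkhost:2181");
        // zkRoot ("/kafkastorm") and id ("my-spout") are hypothetical values;
        // the spout writes its offsets to <zkRoot>/<id>/<partition> in
        // Zookeeper, e.g. /kafkastorm/my-spout/partition_0 -- not under the
        // /consumers path where KafkaOffsetMonitor looks by default.
        SpoutConfig spoutConfig = new SpoutConfig(hosts, "my-topic", "/kafkastorm", "my-spout");
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
        return new KafkaSpout(spoutConfig);
    }
}
```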

How to write logs to a file using Log4j and Storm Framework?

谁说胖子不能爱 submitted on 2019-12-12 08:03:56
Question: I am having a bit of an issue logging to a file using Log4j in Storm. Before submitting my topology, i.e. in my main method, I wrote some log statements and configured the logger using: PropertyConfigurator.configure(myLog4jProperties) Now when I run my topology using my executable jar in Eclipse, it works fine and the log files are created as expected. Or, when I run my executable jar using "java -jar MyJarFile someOtherOptions", I can see Log4j being configured and the files are
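The preview is truncated, but the classic catch with this setup is that main() runs in the client JVM, while spouts and bolts run in worker JVMs on the supervisor nodes, where Storm 0.9.x manages logging itself (through slf4j/logback), so a PropertyConfigurator call in main() never executes on the workers. A minimal sketch of one workaround, assuming a log4j.properties file at a hypothetical path that exists on every worker node: configure Log4j in prepare(), which does run inside the worker JVM (mixing Log4j with Storm's own logback setup may additionally require dependency exclusions):

```java
import java.util.Map;
import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class FileLoggingBolt extends BaseRichBolt {
    private static final Logger LOG = Logger.getLogger(FileLoggingBolt.class);
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        // prepare() executes in the worker JVM, unlike main(), so this
        // configuration takes effect where the bolt actually runs.
        // "/path/on/worker/log4j.properties" is a hypothetical path that
        // must exist on every supervisor node.
        PropertyConfigurator.configure("/path/on/worker/log4j.properties");
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        LOG.info("Processing: " + input);
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) { }
}
```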

What is the use of Tuple.getStringByField("ABC") in Storm?

孤街醉人 submitted on 2019-12-12 06:03:30
Question: I am not able to understand the use of Tuple.getStringByField("ABC") in Apache Storm. The following is the code: public void execute(Tuple input) { try { if (input.getSourceStreamId().equals("signals")) { str = input.getStringByField("action"); if ("refresh".equals(str)) {....} } }... What exactly is input.getStringByField("action") doing here? Thank you. Answer 1: In Storm, both spouts and bolts emit tuples, but the question is what each tuple contains. Each spout and bolt can use the below
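The answer is cut off where it would show the field declaration. A minimal sketch (the stream name "signals" and field name "action" come from the question; the class itself is hypothetical) of how declared output fields pair up with getStringByField:

```java
import java.util.Map;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

// Hypothetical upstream spout: it declares the field name "action" on a
// custom stream "signals", then emits values positionally.
public class SignalSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        // position 0 of the emitted Values corresponds to the field "action"
        collector.emit("signals", new Values("refresh"));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declareStream("signals", new Fields("action"));
    }
}
```

Downstream, input.getStringByField("action") looks up the value by its declared field name instead of by position; with the declaration above it returns the same value as input.getString(0).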

Using the storm hdfs connector to write data into HDFS

☆樱花仙子☆ submitted on 2019-12-12 05:47:53
Question: The source code for the "storm-hdfs connector", which can be used to write data into HDFS, is on GitHub: https://github.com/ptgoetz/storm-hdfs There is a particular topology, "HdfsFileTopology", used to write '|'-delimited data into HDFS. Link: https://github.com/ptgoetz/storm-hdfs/blob/master/src/test/java/org/apache/storm/hdfs/bolt/HdfsFileTopology.java I have questions about this part of the code: Yaml yaml = new Yaml(); InputStream in = new FileInputStream(args[1]); Map<String, Object>
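That snippet is the standard SnakeYAML loading idiom: the topology takes a YAML file as its second program argument and reads it into a Map that gets merged into the topology configuration. A self-contained sketch of the same pattern (class and method names are mine, not from HdfsFileTopology):

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;
import backtype.storm.Config;

public class YamlConfigExample {
    @SuppressWarnings("unchecked")
    public static Config loadTopologyConfig(String yamlPath) throws Exception {
        Yaml yaml = new Yaml();
        try (InputStream in = new FileInputStream(yamlPath)) {
            // yaml.load(...) parses the file into plain Java collections;
            // a top-level YAML mapping becomes a Map<String, Object>.
            Map<String, Object> yamlConf = (Map<String, Object>) yaml.load(in);
            // Config extends HashMap, so the file's settings can be merged
            // straight into the topology configuration.
            Config config = new Config();
            config.putAll(yamlConf);
            return config;
        }
    }
}
```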

Apache storm : Could not load main class org.apache.storm.starter.ExclamationTopology

天涯浪子 submitted on 2019-12-12 05:14:58
Question: Firstly, I have already referred to quite a few similar questions but still haven't been able to fix this. I have installed Nimbus and the supervisor properly, and there were no errors during "make install"; even "maven clean install" and compile had no errors at all. My 0mq is set up properly with jzmq. I started Nimbus with ./storm nimbus and started my supervisor with ./storm supervisor, but when I do ./storm jar ~/ccbd-work/storm2/examples/target/storm-starter-topologies-0.10.0.jar org.apache.storm
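The command is cut off at the class name, but "Could not load main class" usually means the class name on the command line does not match the package inside the jar. A hedged guess for this case: in the storm-starter 0.10.0 release the topology classes still live under the old storm.starter package (the org.apache.storm.starter package name arrived with Storm 1.0), so a check like this would confirm it:

```sh
# List the jar's contents to find the actual package of the class:
jar tf ~/ccbd-work/storm2/examples/target/storm-starter-topologies-0.10.0.jar | grep ExclamationTopology

# If it shows storm/starter/ExclamationTopology.class, submit with:
./storm jar ~/ccbd-work/storm2/examples/target/storm-starter-topologies-0.10.0.jar storm.starter.ExclamationTopology
```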

KafkaSpout tuple replay throws null pointer exception

社会主义新天地 submitted on 2019-12-12 04:31:36
Question: I am using Storm 1.0.1 and Kafka 0.10.0.0 with storm-kafka-client 1.0.3. Please find my configuration code below. kafkaConsumerProps.put(KafkaSpoutConfig.Consumer.KEY_DESERIALIZER, "org.apache.kafka.common.serialization.ByteArrayDeserializer"); kafkaConsumerProps.put(KafkaSpoutConfig.Consumer.VALUE_DESERIALIZER, "org.apache.kafka.common.serialization.ByteArrayDeserializer"); KafkaSpoutStreams kafkaSpoutStreams = new KafkaSpoutStreamsNamedTopics.Builder(new Fields(fieldNames), topics) .build(
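The configuration is cut off before it reaches the replay settings. In storm-kafka-client 1.0.x, replay of failed tuples is governed by a KafkaSpoutRetryService passed into the KafkaSpoutConfig.Builder. A sketch modeled on the storm-kafka-client 1.0.x examples, with all values illustrative rather than taken from the question:

```java
import org.apache.storm.kafka.spout.KafkaSpoutRetryExponentialBackoff;
import org.apache.storm.kafka.spout.KafkaSpoutRetryExponentialBackoff.TimeInterval;
import org.apache.storm.kafka.spout.KafkaSpoutRetryService;

public class RetryConfigSketch {
    // Exponential backoff for failed/replayed tuples; all values are
    // illustrative, not a recommendation.
    public static KafkaSpoutRetryService buildRetryService() {
        return new KafkaSpoutRetryExponentialBackoff(
                TimeInterval.microSeconds(500),  // initial delay before the first retry
                TimeInterval.milliSeconds(2),    // period used to grow the backoff
                Integer.MAX_VALUE,               // retry indefinitely
                TimeInterval.seconds(10));       // cap on the retry delay
    }
}
```

The retry service is passed to the KafkaSpoutConfig.Builder constructor together with the consumer properties, the KafkaSpoutStreams, and a KafkaSpoutTuplesBuilder; whether the NullPointerException itself is a configuration issue or a bug in this early spout version cannot be determined from the truncated preview.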

Storm multilang seems to only process 4 MB from the spout, and then stops

送分小仙女□ submitted on 2019-12-12 04:09:26
Question: I am using Storm's multilang support through PHP, but it seems to have a problem. My spout is a PHP script which reads contents from a file. The first 4 MB of content run correctly, but then the PHP process blocks in write(1, xxxx... when I strace -p the PHP spout, while the next bolt process is blocked in read(0,. I suspect this is a problem on Storm's side, but the question is why the spout stalls at 4 MB each time. Is it a Java ProcessBuilder deadlock? How to avoid

[Storm][DRPC] Request failed

让人想犯罪 __ submitted on 2019-12-12 03:32:24
Question: We work with Storm and use DRPC to execute some algorithms on the cluster. When the duration on each node is less than 60 seconds, there is no trouble: the client receives the correct result. However, when we have to solve a bigger problem with the same algorithm (so the duration exceeds 60 seconds), we get the following message: Exception in thread "main" DRPCExecutionException(msg:Request failed) at backtype.storm.generated.DistributedRPC$execute_result$execute_resultStandardScheme
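The preview stops at the stack trace, but the consistent 60-second threshold suggests a timeout rather than an algorithm failure. A hedged sketch of one thing to check, assuming the DRPC request timeout is the cause: raise drpc.request.timeout.secs in storm.yaml on the DRPC server node(s) and restart the DRPC daemon (the value below is an arbitrary example; topology.message.timeout.secs may also need to be large enough for the long-running computation):

```yaml
# storm.yaml on the DRPC server node(s) -- example value, not a recommendation
drpc.request.timeout.secs: 600
```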

Upgrade version of storm

大城市里の小女人 submitted on 2019-12-12 03:25:40
Question: I'm new to both Storm and Ubuntu, and I have a problem with upgrading. I deleted only the old version's folder from the directory, but I guess that was a bad way to do it, because after installing the new version and trying to run it, I found an empty folder for the old version had been created! How can I solve this problem? Or how can I remove Storm cleanly? Source: https://stackoverflow.com/questions/33851674/upgrade-version-of-storm

Metrics, access, and custom log files are empty after the topology is submitted

前提是你 submitted on 2019-12-12 02:38:14
Question: After I submitted a topology, I found in path/of/storm/logs some files like nimbus, supervisor, ui, drpc, metrics, custom, and access, but access, custom, and metrics are empty. I'm asking how I can use them, why they are empty, and what the benefit of them is. Answer 1: access, custom, and metrics are log files for different Storm features. For example: https://storm.apache.org/documentation/Metrics.html If you do not know those features and do not use them, you can simply ignore those files. Source: https:/
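As a follow-up to that answer: metrics.log in particular stays empty until a metrics consumer is registered in the topology configuration. A minimal sketch, assuming a Storm version with the backtype.storm packages (matching the other snippets on this page), using the built-in LoggingMetricsConsumer:

```java
import backtype.storm.Config;
import backtype.storm.metric.LoggingMetricsConsumer;

public class MetricsConfigSketch {
    public static Config withMetricsLogging() {
        Config conf = new Config();
        // Without a registered metrics consumer nothing is ever written to
        // metrics.log; the built-in LoggingMetricsConsumer routes the
        // topology's built-in metrics there (parallelism hint of 1).
        conf.registerMetricsConsumer(LoggingMetricsConsumer.class, 1);
        return conf;
    }
}
```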