cluster-computing

infinite wait during openMPI run on a cluster of servers?

こ雲淡風輕ζ submitted on 2019-12-11 23:04:36
Question: I have successfully set up passwordless SSH between the servers and my computer. There is a simple Open MPI program which runs well on a single computer. But unfortunately, when I try this on a cluster, I neither get a password prompt (as I have set up SSH authorization) nor does the execution move forward. The hostfile looks like this:
# The Hostfile for Open MPI
# The master node, 'slots=8' is used because it has 8 cores
localhost slots=8
# The following slave nodes are
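A minimal sketch of how such a run is usually wired up, assuming hypothetical remote hosts node1 and node2 and a program ./hello_mpi (none of these names appear in the question):

# my_hosts: one line per machine; slots = number of cores to use there
localhost slots=8
node1 slots=8
node2 slots=8

# First confirm passwordless SSH really is non-interactive for every remote host;
# BatchMode makes ssh fail instead of silently waiting for a password
ssh -o BatchMode=yes node1 hostname

# Then launch across the cluster with the hostfile
mpirun --hostfile my_hosts -np 16 ./hello_mpi

If the BatchMode test hangs or fails for any host, the mpirun launch will hang the same way, which is consistent with the "infinite wait" described above.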

HBase Cluster: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V

99封情书 submitted on 2019-12-11 22:59:27
Question: I'm new to HBase. I'm running an HBase cluster on 2 machines (1 master on one machine and 1 regionserver on the second). When I start the HBase shell using bin/hbase shell and create a table with the syntax create 't1', 'f1', I get the following errors:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hduser/hbase-0.98.8-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local
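A small diagnostic sketch for the SLF4J part of the output, assuming the install paths shown in the error message; note that the JniBasedUnixGroupsMapping.anchorNative error in the title usually points at a Hadoop version or native-library mismatch rather than at SLF4J itself:

# List every SLF4J binding jar visible to HBase and the other install tree;
# keeping only one slf4j-log4j12 jar on the classpath silences the warning
find /home/hduser/hbase-0.98.8-hadoop2/lib /usr/local -name 'slf4j-log4j12*.jar' 2>/dev/null

# Check which Hadoop client jars HBase has bundled, to compare against the
# Hadoop version actually installed on the cluster
ls /home/hduser/hbase-0.98.8-hadoop2/lib/hadoop-*.jar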

running same script over many machines

倖福魔咒の submitted on 2019-12-11 21:09:17
Question: I have set up a few EC2 instances, which all have a script in the home directory. I would like to run the script simultaneously across each EC2 instance, i.e. without going through a loop. I have seen csshX for OS X for interactive terminal usage... but was wondering what the command-line code is to execute commands like ssh user@ip.address . test.sh to run the test.sh script across all instances, since... csshX user@ip.address.1 user@ip.address.2 user@ip.address.3 . test.sh does not work... I
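A common pattern for this, sketched below with a hypothetical hosts.txt and user name (neither is given in the question); the backgrounded ssh calls run concurrently even though they are written as a loop, and parallel-ssh (pssh) avoids writing the loop at all:

# hosts.txt: one user@host per line, e.g.
# ubuntu@ip.address.1
# ubuntu@ip.address.2

# Backgrounded ssh: all instances start test.sh at (almost) the same time
while read host; do
    ssh -o BatchMode=yes "$host" 'bash ~/test.sh' &
done < hosts.txt
wait   # block until every remote script has finished

# Alternative with parallel-ssh, if it is installed
pssh -h hosts.txt -i 'bash ~/test.sh'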

MarkLogic Cluster - Configure Forest with all documents

喜你入骨 submitted on 2019-12-11 20:08:45
Question: We are working on MarkLogic 9.0.8.2. We are setting up a MarkLogic cluster (3 VMs) on Azure and, as per the failover design, want to have 3 forests (one per node) in Azure Blob storage. I am done with the setup, and when I started ingestion I found that documents are distributed across the 3 forests rather than all being stored in each forest. For example, I ingested 30000 records and each forest contains 10000 records. What I need is to have every forest hold all 30000 records. Is there any configuration (at the DB or forest level) I need

Cluster stacked bargraph

倾然丶 夕夏残阳落幕 submitted on 2019-12-11 19:38:50
Question: I want to plot something like this in R. I found a similar solution here, so I tried something similar:
test <- data.frame(person=c("group 1", "group 2", "group 3"),
                   value1=c(100,150,120),  # male
                   value2=c(25,30,45),     # female
                   value3=c(25,30,45),     # male
                   value4=c(100,120,150),  # female
                   value5=c(10,12,15),     # male
                   value6=c(50,40,70))     # female
library(reshape2)  # for melt
melted <- melt(test, "person")
melted$cat <- ''
melted[melted$variable == 'value1' | melted$variable == 'value2',]$cat <-
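A minimal ggplot2 sketch of one common clustered-stacked approach, assuming the value-column pairs map to three hypothetical categories A/B/C (the question's own category labels are cut off above, so these names are placeholders):

library(reshape2)
library(ggplot2)

test <- data.frame(person = c("group 1", "group 2", "group 3"),
                   value1 = c(100, 150, 120), value2 = c(25, 30, 45),
                   value3 = c(25, 30, 45),    value4 = c(100, 120, 150),
                   value5 = c(10, 12, 15),    value6 = c(50, 40, 70))

# melt() stacks the rows variable by variable, so each pair of value
# columns can be tagged with one category label
melted <- melt(test, "person")
melted$cat <- rep(c("A", "B", "C"), each = 2 * nrow(test))

ggplot(melted, aes(x = cat, y = value, fill = variable)) +
  geom_col() +          # stacks the paired values within each category
  facet_grid(~ person)  # clusters the stacks side by side per person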

What's the difference between h2o on multi-nodes and h2o on hadoop?

让人想犯罪 __ submitted on 2019-12-11 18:57:19
Question: The H2O site says: "H2O’s core code is written in Java. Inside H2O, a Distributed Key/Value store is used to access and reference data, models, objects, etc., across all nodes and machines. The algorithms are implemented on top of H2O’s distributed Map/Reduce framework and utilize the Java Fork/Join framework for multi-threading." Does this mean H2O will not work better than other libraries if it runs on a single-node cluster, but will work well on a multi-node cluster? Is that right? Also
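A sketch of the two launch modes the question contrasts, assuming a 3-node setup and hypothetical memory settings and paths (none are given in the question); in both cases the nodes join one distributed cluster, which is where the Map/Reduce and Fork/Join layers quoted above come into play:

# Standalone multi-node cluster: run this on every node, with flatfile.txt
# listing each node's ip:port, one per line
java -Xmx6g -jar h2o.jar -name mycluster -flatfile flatfile.txt -port 54321

# H2O on Hadoop: the driver launches the same H2O nodes inside mapper
# containers on YARN instead of you starting them by hand
hadoop jar h2odriver.jar -nodes 3 -mapperXmx 6g -output /tmp/h2o_out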

Reinstalling rocks

给你一囗甜甜゛ submitted on 2019-12-11 17:07:26
Question: I am new to using a Rocks cluster. Recently, I tried to install a newer version of freetype. Before doing so, I did a yum remove freetype. On doing this, all the software that depended on freetype was deleted, including rocks. Later on, I found out that yum remove also removes packages that depend on the package being deleted. So now, on running rocks list roll I get rocks: command not found. All data remains intact, although the file system is not getting mounted on the compute nodes
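One thing worth checking before a full reinstall, sketched on the assumption that yum's transaction history on the frontend is still intact (the question does not say): yum records the removal as a transaction and can usually roll it back.

# Find the transaction in which freetype and its dependents were removed
yum history list freetype

# Undo that transaction by its ID, reinstalling everything it erased;
# replace <transaction-id> with the ID shown by the previous command
yum history undo <transaction-id>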

How to cluster bubble chart with many groups

非 Y 不嫁゛ submitted on 2019-12-11 16:58:41
Question: I'm trying to imitate the following effect:
Original version (V3): https://bl.ocks.org/mbostock/7881887
Converted to V4: https://bl.ocks.org/lydiawawa/1fe3c80d35e046c1636663442f34680b/86d1bda1dabb7f3a6d11cb1a16053564078ed964
An example dataset: https://jsfiddle.net/hf998do7/1/
This is what I have so far: https://blockbuilder.org/lydiawawa/0899a02cc86f2274f52e27064bc86500
I want to make a bubble graph that shows clusters of Race, where the size of the bubbles is determined by BMI. The dots will

“qsub -now” equivalent using bsub

坚强是说给别人听的谎言 submitted on 2019-12-11 16:49:29
Question: In SGE, we have qsub -now yes/no <command>. With "-now yes" the job is scheduled immediately (if possible) or not at all; we are not put in the pending queue. With "-now no" the job is put in the pending queue if it cannot be executed immediately. But in LSF, qsub's equivalent is bsub, and with bsub we are put in the pending queue if the job cannot be executed immediately. There is no option like "-now yes" as in qsub. Do we have something in bsub like "qsub -now"? P.S.: One solution is that we can check for
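A sketch of the kind of workaround the P.S. hints at, assuming the standard bsub/bjobs/bkill commands and a hypothetical job script ./job.sh; it emulates "-now yes" by cancelling the job if it is still pending shortly after submission, rather than relying on any special bsub flag:

# Submit the job and pull the numeric ID out of bsub's "Job <12345> is submitted ..." line
jobid=$(bsub ./job.sh | sed 's/[^0-9]*\([0-9]*\).*/\1/')

# Give the scheduler a moment, then check the job state
sleep 5
if bjobs "$jobid" | grep -q PEND; then
    # No free slot: cancel instead of waiting in the pending queue
    bkill "$jobid"
    echo "job $jobid could not start immediately; cancelled"
fi

The sleep and the 5-second threshold are arbitrary choices for the sketch; the trade-off is how long a job is allowed to sit pending before it counts as "not immediately schedulable".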

Unable to retrieve physical size of available storage for cluster

£可爱£侵袭症+ submitted on 2019-12-11 14:13:57
Question: I am halfway through my work and now stuck. I am trying to fetch information about the available storage devices for a cluster. I am able to fetch the list of available storage devices but unable to retrieve the physical disk, available free space, etc. of these storage devices. I want something like this. Is there any command to fetch the physical disk name from the Cluster Disk Name, or can I get the disk details directly? For shared disks I am able to retrieve the details (Get-ClusterSharedVolume) but not for
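One possible way to map a cluster disk resource to its physical disk, sketched in PowerShell with the hypothetical resource name "Cluster Disk 1" and assuming the FailoverClusters and Storage modules are available and the disks are GPT (so their GUIDs line up); this is an assumption about the environment, not a confirmed answer:

# Read the DiskIdGuid private property of the Physical Disk cluster resource
$guid = (Get-ClusterResource -Name "Cluster Disk 1" |
         Get-ClusterParameter -Name DiskIdGuid).Value

# Match it against the GUIDs reported by Get-Disk to find the physical disk
$disk = Get-Disk | Where-Object { $_.Guid -eq $guid }

# Size and free space of its partitions/volumes
$disk | Get-Partition | Get-Volume |
    Select-Object DriveLetter, FileSystemLabel, Size, SizeRemaining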