cluster-computing

Why can I run a Spark app in Eclipse directly without spark-submit?

Submitted by 我怕爱的太早我们不能终老 on 2019-12-11 02:16:02
Question: 1. My Spark (standalone) cluster: spmaster, spslave1, spslave2. 2. Here is my simple Spark app, which selects some records from MySQL:

public static void main(String[] args) {
    SparkConf conf = new SparkConf()
            .setMaster("spark://spmaster:7077")
            .setAppName("SparkApp")
            // the driver jar was uploaded to all nodes
            .set("spark.driver.extraClassPath", "/usr/lib/spark-1.6.1-bin-hadoop2.6/lib/mysql-connector-java-5.1.24.jar")
            .set("spark.executor.extraClassPath","/usr/lib/spark-1.6.1-bin-hadoop2.6/lib/mysql
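An app configured this way runs from the IDE because the SparkConf hard-codes the master URL: the IDE's JVM becomes the driver and connects to the standalone master directly, so spark-submit (whose main job is to supply the master, classpath, and deploy mode externally) is not required. For comparison, a hypothetical spark-submit invocation carrying the same settings would look roughly like this (the class name and jar path are assumptions, not from the question):

```shell
# Hypothetical equivalent of the hard-coded SparkConf, supplied via spark-submit:
spark-submit \
  --master spark://spmaster:7077 \
  --class com.example.SparkApp \
  --driver-class-path /usr/lib/spark-1.6.1-bin-hadoop2.6/lib/mysql-connector-java-5.1.24.jar \
  --jars /usr/lib/spark-1.6.1-bin-hadoop2.6/lib/mysql-connector-java-5.1.24.jar \
  /path/to/spark-app.jar
```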

How does Open MPI SSH into all the compute nodes from the master node?

Submitted by 隐身守侯 on 2019-12-11 02:13:50
Question: First time working with Open MPI. I am curious how the API invokes a run-time environment to run on compute nodes. I am thinking about setting up a Linux cluster of 4 or 5 nodes. I have read a lot of the documentation on creating password-less SSH access for the master node. Does Open MPI issue an ssh command for each compute node declared inside the --hostfile and then begin spreading tasks? Answer 1: Open MPI does not add any additional arguments (by default) when ssh'ing to
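As background, a typical Open MPI hostfile and launch look like the sketch below (the hostnames and slot counts are assumptions). The mpirun launcher ssh-es into each listed host to start its remote daemon before spawning ranks, which is why password-less SSH must already be in place:

```shell
# hostfile (hypothetical hostnames):
#   node1 slots=2
#   node2 slots=2
# Launch 4 ranks across the two hosts; mpirun uses ssh under the hood
# to start a daemon on each remote host, then distributes the ranks:
mpirun --hostfile hostfile -np 4 ./my_mpi_program
```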

Adding Color and Hover Options to a visNetwork igraph

Submitted by 纵饮孤独 on 2019-12-11 01:49:37
Question: I have been having trouble with this. I can get one option or the other, but not both, in one graph. Below is the code; I received a lot of help from @lukeA to get to this point. I have the following graph, in which I can get the cluster colors into the visNetwork igraph:

library(igraph)
library(visNetwork)
B = matrix(
  c(1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 47, 3, 0, 3, 0, 1, 10, 13, 5, 0, 3,
    19, 0, 1, 0, 1, 7, 3, 1, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 1, 0,
    32, 0, 0, 3, 2, 1, 0, 0, 0,

Clustering doesn't work

Submitted by 大兔子大兔子 on 2019-12-11 01:39:50
Question: I configured clustering for two Tomcat instances using Apache at the front and mod_jk as the connector. I tried a test application to check the configuration and it works fine. Sessions are replicated successfully and failover is detected successfully. But when I tried this with my actual application, it does not work. I made the modifications in httpd.conf accordingly and very carefully. There is no exception and no error in the logs. I am unable to track down the problem. Initially I was getting
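For reference, a minimal mod_jk load-balancer setup usually involves a workers.properties like the sketch below, plus a matching jvmRoute on each Tomcat's <Engine> element; all hostnames, ports, and worker names here are assumptions, not taken from the question:

```properties
# workers.properties (hypothetical hosts/ports)
worker.list=loadbalancer
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8009
worker.tomcat2.type=ajp13
worker.tomcat2.host=localhost
worker.tomcat2.port=9009
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=tomcat1,tomcat2
worker.loadbalancer.sticky_session=true
```

Each Tomcat's server.xml then needs `jvmRoute="tomcat1"` (or `tomcat2`) on its <Engine>, and the web application itself must declare `<distributable/>` in web.xml for session replication to apply — a common reason a test app replicates fine while the real application does not.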

Clustered MSMQ - Invalid queue path name when sending

Submitted by 邮差的信 on 2019-12-11 01:13:34
Question: We have a two-node cluster running on Windows 2008 R2. I've installed MSMQ with the Message Queue Server and Directory Service Integration options on both nodes. I've created a clustered MSMQ resource named TESTV0Msmq (we use transactional queues, so a DTC resource had been created previously). The virtual resource resolves correctly when I ping it. I created a small console executable in C# using the MessageQueue constructor to allow us to send basic messages (to both transactional and non
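One common cause of "invalid queue path name" against a clustered MSMQ resource is addressing the queue with a plain machine\queue path instead of a direct format name built on the cluster resource's network name. A hypothetical path of that form (the queue name `testqueue` is an assumption):

```
FormatName:DIRECT=OS:TESTV0Msmq\private$\testqueue
```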

Issue when joining serf nodes located in different Docker containers

Submitted by 六眼飞鱼酱① on 2019-12-10 23:16:01
Question: Context: the host is AWS EC2 / Ubuntu 14.04.5 with Docker version 17.05.0-ce. Containers are built from the publicly available repo image cbhihe/serf-alpine-bash. All containers are located on the same EC2 instance and share the same default bridge network with net-interface "docker0". I am trying to join nodes serfDC1 (id d4fd90692e18) and serfDC2 (id 6353e7f6134d) by passing commands from the host's shell: $ docker exec serfDC1 serf agent -node=Node1 -bind=0.0.0.0:7946 ==> Starting Serf agent… ==>
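Assuming the containers can reach each other over the default bridge, joining the agents typically means starting each one so it advertises its own container IP (binding to 0.0.0.0 accepts traffic on all interfaces, but peers still need a routable address to gossip back to), then issuing a join from one node toward the other's IP. The 172.17.x.x addresses below are assumptions about Docker's default bridge subnet:

```shell
# Start an agent in each container, advertising that container's own IP:
docker exec -d serfDC1 serf agent -node=Node1 \
    -bind=0.0.0.0:7946 -advertise=172.17.0.2:7946
docker exec -d serfDC2 serf agent -node=Node2 \
    -bind=0.0.0.0:7946 -advertise=172.17.0.3:7946
# Join Node2's agent to Node1:
docker exec serfDC2 serf join 172.17.0.2:7946
```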

bash: /usr/bin/hydra_pmi_proxy: No such file or directory

Submitted by 六眼飞鱼酱① on 2019-12-10 21:58:56
Question: I am struggling to set up an MPI cluster, following the "Setting Up an MPICH2 Cluster in Ubuntu" tutorial. I have something running, and my machine file is this:

pythagoras:2 # this will spawn 2 processes on pythagoras
geomcomp # this will spawn 1 process on geomcomp

The tutorial states: "and run it (the parameter next to -n specifies the number of processes to spawn and distribute among nodes): mpiu@ub0:~$ mpiexec -n 8 -f machinefile ./mpi_hello". With -n 1 and -n 2 it runs fine, but with -n 3, it
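This failure pattern fits the machinefile: with -n 1 and -n 2 every rank fits on pythagoras, but -n 3 forces MPICH's Hydra launcher to ssh to geomcomp and exec hydra_pmi_proxy there, so the "No such file or directory" error suggests MPICH is missing on that host, or installed at a different path than on the master. A quick hedged check (the paths are assumptions; they must match the master's install prefix):

```shell
# Verify the Hydra proxy exists on the second host at the expected path:
ssh geomcomp which hydra_pmi_proxy
ssh geomcomp ls -l /usr/bin/hydra_pmi_proxy
```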

Cassandra loss of a node

Submitted by 对着背影说爱祢 on 2019-12-10 15:48:40
Question: I'm trying to figure out how to configure my 2-node cluster so that I have an exact replica if one node goes down, using this tool to check: http://www.ecyrd.com/cassandracalculator/ For the following parameters: Cluster size: 2 / Replication Factor: 2 / Write Level: All / Read Level: One, it gives these results: Your reads are consistent. You can survive the loss of no nodes. You are really reading from 1 node every time. You are really writing to 2 nodes every time.
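The calculator's results follow from the standard Cassandra rules: reads are consistent when the read and write replica counts overlap (R + W > RF), and with Write Level ALL every write must reach all RF replicas, so losing any replica blocks writes. A small sketch of that arithmetic (a simplification that ignores hinted handoff and per-query consistency overrides):

```java
public class QuorumCheck {
    // Reads are consistent when read and write replica sets must overlap.
    static boolean readsConsistent(int rf, int w, int r) {
        return r + w > rf;
    }

    // With Write Level ALL (w == rf), writes need every replica up,
    // so the cluster tolerates rf - w node losses for writes.
    static int writeNodeLossTolerance(int rf, int w) {
        return rf - w;
    }

    public static void main(String[] args) {
        int rf = 2, w = 2, r = 1; // RF=2, Write=ALL, Read=ONE
        System.out.println(readsConsistent(rf, w, r));      // true: reads consistent
        System.out.println(writeNodeLossTolerance(rf, w));  // 0: survive no node loss
    }
}
```

With Write Level ONE instead, writes would survive one node down, but reads at ONE would no longer be guaranteed consistent (1 + 1 is not greater than 2) — exactly the trade-off the calculator surfaces.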

How best to file lock in Java cluster

Submitted by 你。 on 2019-12-10 15:34:12
Question: I have a cluster of servers running on JBoss. I need to update a file in a safe manner. To be specific, I need to lock a file A, blocking if it is already locked, in a safe manner so that if the JVM were to die suddenly there would be no dangling locks. A 30-second timeout would be fine.

1. Read the file A
2. Change the contents
3. Write the file to a temp name A.tmp
4. Delete the original file A
5. Rename A.tmp to the proper name A
6. Unlock the file A

When I look at java.nio.FileLock, it seems to be
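A sketch of the lock-acquisition step with a timeout using java.nio is below. OS-level FileLocks are released automatically when the process dies, which addresses the dangling-lock concern; the caveat is that FileLock coordinates between processes on one machine, and across cluster nodes it is only reliable if the shared filesystem actually supports locking (NFS often does not). File and timeout names here are illustrative:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TimedFileLock {
    // Try to acquire an exclusive OS-level lock on lockPath, polling until
    // timeoutMillis elapses. Returns null on timeout.
    static FileLock lockWithTimeout(Path lockPath, long timeoutMillis)
            throws IOException, InterruptedException {
        FileChannel ch = FileChannel.open(lockPath,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            FileLock lock = ch.tryLock();   // null if another process holds it
            if (lock != null) return lock;
            Thread.sleep(100);              // poll; FileLock has no timed wait
        }
        ch.close();
        return null;                        // timed out
    }

    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("demo", ".lock");
        FileLock lock = lockWithTimeout(p, 30_000);
        System.out.println(lock != null);   // true: nobody else holds the lock
        lock.release();
    }
}
```

Note that the write-to-temp-then-rename steps in the list above are what make the content swap atomic; the lock only serializes writers.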

Serializing in Java: automatic thread-safety?

Submitted by 核能气质少年 on 2019-12-10 14:52:07
Question: If you serialize an object in Java and send it (over a socket) to nodes in a cluster, do you automatically get thread safety? Say you have a cluster and each node has several cores. The server has a Java object that it wants to send to each core on each node to process. It serializes that object and sends it to each receiver. Through serialization, is that object automatically somewhat "deep copied", and do you automatically get thread safety on that object? You aren't going to get any
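To illustrate the "deep copied" part: a serialize/deserialize round trip does produce an independent copy of the object graph, so each receiver mutates its own instance. Any thread safety this buys comes from the copies being separate objects, not from serialization itself — two threads sharing one deserialized instance still need synchronization. A minimal sketch (the Task class is a hypothetical stand-in for the shipped object):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerialCopy {
    // A simple serializable payload (hypothetical stand-in for the
    // object the server would ship to each node).
    static class Task implements Serializable {
        int counter;
        Task(int c) { counter = c; }
    }

    // Round-trip through the serialized form, as a socket send/receive would.
    static Task copyViaSerialization(Task t)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(t);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Task) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Task original = new Task(1);
        Task copy = copyViaSerialization(original);
        copy.counter = 99;                    // mutating the copy...
        System.out.println(original.counter); // ...leaves the original at 1
    }
}
```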