cluster-computing

Clustering in ServiceMix 4

I'm trying to configure Apache ServiceMix 4 to provide the load-balancing feature mentioned in its documentation (for example here: http://servicemix.apache.org/clustering.html ). Although the feature is mentioned, I couldn't find an exact description of how to set it up. The idea is to have two ServiceMix instances (on a LAN, for example) with the same OSGi service installed on both. When a client tries to use the service, the load balancer directs it to an appropriate service instance on one of the ServiceMix nodes. Is there an easy way to do that?

Fabric8 ( http://fabric8.io/ ) can do Karaf/ServiceMix clustering and much more out of the box…
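As one possible starting point (hedged: these are Fabric8 1.x console commands, and the registry host name is illustrative), two Karaf/ServiceMix containers can be joined into a single fabric whose registry then tracks where each service is running:

    # on the first container: create the fabric and its ZooKeeper registry
    fabric:create --clean

    # on the second container: join the registry of the first one
    fabric:join zk-host-1:2181

    # on either container: verify that both nodes are registered
    fabric:container-list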

Need for service discovery for docker engine swarm mode

I'm confused about docker swarm. As far as I know, the old way to run a swarm was to run the manager and workers in containers, before the docker engine provided native support for swarm mode. The documentation for the old, containerized swarm explained how to set up service discovery using consul, etcd, or zookeeper. Service discovery is necessary because services run on random ports to avoid collisions, right? The documentation for docker engine swarm mode doesn't explain how to set up service discovery, so now I'm unsure whether the mechanism is built into swarm mode or the documentation is simply incomplete. Where…
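For what it's worth, swarm mode ships with built-in, DNS-based service discovery on overlay networks, so the external consul/etcd/zookeeper store from the standalone swarm is no longer required. A minimal sketch (network and service names are illustrative):

    docker swarm init
    docker network create --driver overlay app-net
    docker service create --name web --network app-net --replicas 2 nginx
    docker service create --name client --network app-net alpine sleep 1d
    # inside any "client" task, the name "web" resolves to a virtual IP
    # that load-balances across the two "web" replicas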

How store/count individual cluster sizes and plot them in NetLogo

I have a model that generates clusters of yellow patches, and I am interested in the frequency distribution of cluster sizes. To do this I have co-opted the code from the 'Patch Clusters Example' in the NetLogo Code Library. It seems to be working (see photos below) in terms of finding the clusters (although I would prefer that it did not count green patches in the clusters), but I can't figure out how to get the sizes (or patch counts) of each of these clusters. Ideally I would like to make a histogram of the frequency distribution of cluster sizes (excluding green patches) and be…
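One hedged sketch, assuming the Patch Clusters Example has already stamped each patch's cluster variable (NetLogo 6 syntax; filtering on pcolor = yellow is how the green patches would be excluded):

    ;; one id per distinct cluster, looking only at yellow patches
    let ids remove-duplicates [cluster] of patches with [pcolor = yellow and cluster != nobody]
    ;; size of each cluster, again counting only its yellow patches
    let sizes map [ c -> count patches with [cluster = c and pcolor = yellow] ] ids
    ;; draw the frequency distribution (requires a plot widget in histogram mode)
    set-plot-x-range 0 (max sizes + 1)
    histogram sizes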

R: making cluster in doParallel / snowfall hangs

I've got two servers on a LAN with fresh installs of CentOS 6.4 minimal and R 3.0.1. Both machines have the doParallel, snow, and snowfall packages installed. The servers can ssh to each other fine. When I attempt to make a cluster in either direction, I get a prompt for a password, but after entering the password it just hangs there indefinitely.

    makePSOCKcluster("192.168.1.1", user = "username")

How can I troubleshoot this?

Edit: I also tried calling makePSOCKcluster on the above-mentioned computer with a host that IS capable of being used as a slave (from other computers), but it still hangs. So…
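Two hedged debugging steps: the password prompt suggests key-based SSH isn't in place (ssh-keygen plus ssh-copy-id would fix that), and an indefinite hang often means the worker cannot connect back to the master's callback port through a firewall. A sketch (IP and port number are illustrative):

    library(parallel)
    cl <- makePSOCKcluster("192.168.1.1",
                           user    = "username",
                           port    = 11234,  # pin the callback port so the firewall can allow it
                           outfile = "")     # echo worker-side output to the master for debugging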

Difference between “SOCK”, “PVM”, “MPI”, and “NWS” for the R SNOW package

The makeCluster function in the SNOW package offers the cluster types "SOCK", "PVM", "MPI", and "NWS", but I'm not very clear on the differences among them, and more specifically on which would be best for my program. Currently I have a queue of tasks of varying lengths going into a load-balancing cluster via clusterApplyLB, on a 64-bit, 32-core Windows machine. I am looking for a brief description of the differences among the four cluster types and which would be best…
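A rough summary, hedged: "SOCK" workers are plain R processes reached over socket connections and need no extra software; "PVM" and "MPI" ride on an installed PVM or MPI runtime (via rpvm/Rmpi); "NWS" goes through a NetWorkSpaces server. On a single multi-core Windows box, SOCK is usually the path of least resistance. In the sketch below, tasks and run_task are hypothetical stand-ins for the existing task queue:

    library(snow)
    cl <- makeCluster(32, type = "SOCK")            # one local worker per core
    results <- clusterApplyLB(cl, tasks, run_task)  # load-balanced dispatch of the task queue
    stopCluster(cl)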

How often is a programmatic created EJB Timer executed in a cluster?

In a clustered Java EE 6 environment (GlassFish 3.1.2), a @Singleton bean is (or can be) created on every cluster node. If this singleton bean registers a programmatic timer in its @PostConstruct, how often is the @Timeout method executed: only on one of the singletons (per tick), or once (per tick) for each singleton that registered a timer? The code below illustrates the question.

    @Singleton
    public class CachedService {

        @Resource
        private TimerService timerService;

        private static final long CACHE_TIMEOUT_DURATION_MS = 60 * 60 * 1000;

        @PostConstruct
        void initResetTimer…
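For the once-per-tick-per-cluster case, one commonly cited alternative (hedged: the clustering behavior is container-specific, and the class below is a hypothetical rewrite, not the original code) is an automatic persistent calendar timer. Persistent timers live in the container's timer store, which clustered timer support is meant to fire on only one node:

    import javax.ejb.Schedule;
    import javax.ejb.Singleton;

    @Singleton
    public class CacheResetJob {

        // Automatic, persistent calendar timer: created once from metadata
        // rather than once per node from @PostConstruct.
        @Schedule(hour = "*", minute = "0", second = "0", persistent = true)
        void resetCache() {
            // refresh the cache here
        }
    }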

Is Erlang the C of the clustered computing world?

Erlang seems to be very low-level and performant on networks, but it does not have a very rich type system or many of the things that other functional languages offer. So it seems to me that it will become the lowest-level development language for clustered programming, until something else comes along that offers a decent clustered VM AND high-level constructs. Any thoughts on this?

Pete Kirkham: C is the C of clustered computing. At least, every HPC cluster I've seen had lots of C and Fortran running MPI, and never Erlang. If anything, trends seem to be towards grid standards which are language…

Websphere 7 clustered deployment

We have a Java EE application, packaged as an EAR file, deployed on WAS 7; to make it highly available it needs to be deployed across 3 cluster members. We have a Quartz Scheduler class whose job is to upload data from one database to another daily at 2:00 am. The problem is that if the EAR is deployed on 3 different nodes for load balancing and high availability, all 3 copies will trigger the upload at the same time. How can we handle this? Is it possible to do some configuration in the WAS 7 environment? Any help/suggestion would be appreciated. Thanks.

You have two possibilities: The…
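Independent of any WAS-specific option, Quartz itself can de-duplicate the trigger across nodes through its clustered JDBC job store, where all instances share one database and only one of them acquires each firing. A sketch of the relevant quartz.properties (the data source name is illustrative):

    # all three nodes point at the same Quartz tables
    org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
    org.quartz.jobStore.isClustered = true
    org.quartz.jobStore.dataSource = quartzDS
    # each node must receive a unique instance id
    org.quartz.scheduler.instanceId = AUTO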

Submit job with python code (mpi4py) on HPC cluster

I am working on a Python code with MPI (mpi4py) and I want to run my code across many nodes (each node has 16 processors) in a queue on an HPC cluster. My code is structured as below:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    size = comm.Get_size()
    rank = comm.Get_rank()

    count = 0
    for i in range(1, size):
        if rank == i:
            for j in range(5):
                res = some_function(some_argument)
                comm.send(res, dest=0, tag=count)

I am able to run this code perfectly fine on the head node of the cluster using the command

    $ mpirun -np 48 python codename.py

Here "code" is the name of the python script, and in the…
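To run the same thing through the queue rather than on the head node, the submission script typically wraps the identical mpirun line. A hedged PBS/Torque-style sketch, submitted with qsub (job name, queue, and resource line are illustrative; SLURM or SGE syntax differs):

    #!/bin/bash
    #PBS -N mpi4py_job
    #PBS -l nodes=3:ppn=16       # 3 nodes x 16 processors = 48 MPI ranks
    #PBS -q batch
    cd $PBS_O_WORKDIR            # start from the directory qsub was invoked in
    mpirun -np 48 python codename.py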

how to specify error log file and output file in qsub

I have a qsub script:

    #####----submit_job.sh---#####
    #!/bin/sh
    #$ -N job1
    #$ -t 1-100
    #$ -cwd

    SEEDFILE=/home/user1/data1
    SEED=$(sed -n -e "$SGE_TASK_ID p" $SEEDFILE)
    /home/user1/run.sh $SEED

The problem is that it puts all the error and output files (job1.eJOBID & job1.oJOBID) in the same directory from which I run qsub submit_job.sh, while I want to save these files (the output and the error log) somewhere else (specified as $SEED_output). I tried to change the line to /home/user1/run…
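For the part of the question that survives the truncation: Grid Engine lets the output and error paths be set per job with -o and -e, either on the qsub command line or as #$ directives, and it expands its own pseudo-variables such as $TASK_ID and $JOB_ID inside those paths. A sketch (the log directory is illustrative and must already exist):

    #!/bin/sh
    #$ -N job1
    #$ -t 1-100
    #$ -cwd
    #$ -o /home/user1/logs/job1.$TASK_ID.out
    #$ -e /home/user1/logs/job1.$TASK_ID.err

A per-seed path such as $SEED_output cannot appear in a #$ directive, since directives are parsed before the script body runs; a common workaround is to redirect inside the script itself, e.g. /home/user1/run.sh $SEED > "$SEED_output/out.log" 2> "$SEED_output/err.log".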