cluster-computing

Clustered WildFly 10 domain messaging

Submitted by 六月ゝ 毕业季﹏ on 2019-12-08 01:57:05
Question: I have three machines located in different networks: as-master, as-node-1, and as-node-2. On as-master I run WildFly as the domain host-master, and the two nodes each run WildFly as a domain host-slave, each starting an instance in the full-ha server group. From the as-master web console I can see the two nodes in the full-ha profile runtime, and if I deploy a WAR it is correctly started on both nodes. Now, what I'm trying to achieve is messaging between the two instances of the WAR, i.e. sending a message…
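One hedged sketch of a first step (the topic name and JNDI entry below are illustrative, not from the question): in WildFly 10's messaging-activemq subsystem, a JMS topic can be added to the full-ha profile from the domain controller's CLI, after which JMS consumers in the WAR on both nodes can receive messages published to it:

```
# Hypothetical jboss-cli.sh session against the domain controller (as-master);
# "myTopic" and its JNDI entry are placeholders.
/profile=full-ha/subsystem=messaging-activemq/server=default/jms-topic=myTopic:add(entries=["java:/jms/topic/myTopic"])
```

In the full-ha profile the ActiveMQ Artemis servers on the two nodes form a messaging cluster, so a message published on one instance can be consumed on the other.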

what is the minimum number of computers for a slurm cluster

Submitted by 非 Y 不嫁゛ on 2019-12-08 00:25:20
Question: I would like to set up a SLURM cluster. How many machines do I need at a minimum? Can I start with 2 machines (one being only a client, and one being both client and server)? Answer 1: You can start with only one machine, but 2 machines is the most standard configuration: one machine is the controller and the other is the "worker" node. With this model you can add as many machines to the cluster as "worker" nodes. This way the server will not execute jobs, and will not be suffering jobs…
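As a hedged illustration of the one-machine case (the hostname and CPU count are assumptions, and the exact keys vary by SLURM release — older versions use ControlMachine instead of SlurmctldHost), a minimal slurm.conf might look like:

```
# Minimal single-node slurm.conf sketch: the same host runs both
# slurmctld (controller) and slurmd (worker).
ClusterName=mycluster
SlurmctldHost=node01          # ControlMachine=node01 on older SLURM
NodeName=node01 CPUs=4 State=UNKNOWN
PartitionName=debug Nodes=node01 Default=YES MaxTime=INFINITE State=UP
```

Growing to the standard two-machine layout is then just a matter of adding more NodeName lines and removing the controller from the partition.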

How to call MATLAB executable for Python on cluster?

Submitted by て烟熏妆下的殇ゞ on 2019-12-07 23:16:07
Question: I am using a python-matlab-bridge that calls MATLAB from Python by starting it on a ZMQ socket. On my own computer, I hand the bridge the location of the executable (in this case MATLAB 2014b): executable='/Applications/MATLAB_R2014b.app/bin/matlab' and everything works as required; the printed statement is: Starting MATLAB on ZMQ socket ipc:///tmp/pymatbridge-49ce56ed-f5b4-43c4-8d53-8ae0cd30136d Now I want to do the same on a cluster. Through module avail I find there are two MATLAB…
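A small hedged sketch of the cluster side (the candidate paths below are hypothetical module-provided locations, not from the question): resolve which MATLAB executable actually exists on the node, then hand that path to the bridge exactly as on the laptop:

```python
# Sketch: pick the MATLAB executable to hand to pymatbridge's
# Matlab(executable=...) constructor. Candidate paths are hypothetical.
import os

def find_matlab(candidates):
    """Return the first existing, executable MATLAB path, or None."""
    for path in candidates:
        if os.path.isfile(path) and os.access(path, os.X_OK):
            return path
    return None

candidates = [
    "/opt/matlab/R2014b/bin/matlab",   # hypothetical cluster install
    "/usr/local/bin/matlab",
]
executable = find_matlab(candidates)
# With pymatbridge (not run here):
#   from pymatbridge import Matlab
#   mlab = Matlab(executable=executable)
#   mlab.start()
```

On module-based clusters, `module show matlab` usually reveals the install prefix to put into the candidate list.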

Failover support for a DB

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-07 17:00:43
Question: We are currently evaluating failover support in different databases. We were using HSQLDB earlier, but it seems that it does not have clustering/replication support. Our requirement is simply to have two database servers, one serving only as a synchronous backup; if the primary server goes down, the secondary should automatically start acting as the primary. Has anyone evaluated MySQL, PostgreSQL, or any other DB server for such a use case? Edit: We had thought of using MySQL cluster…
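As one hedged illustration (PostgreSQL 9.x-era settings; the standby name and connection details are placeholders): PostgreSQL's built-in streaming replication covers the synchronous-backup half, while automatic promotion needs an external tool (e.g. repmgr, Patroni, or pgpool-II), since PostgreSQL itself does not fail over on its own:

```
# postgresql.conf on the primary (sketch)
wal_level = hot_standby                   # 'replica' on newer versions
max_wal_senders = 3
synchronous_standby_names = 'standby1'    # commits wait for this standby

# recovery.conf on the standby (sketch; folded into postgresql.conf in PG 12+)
standby_mode = 'on'
primary_conninfo = 'host=primary-host user=replicator application_name=standby1'
```

With synchronous_standby_names set, a commit is only acknowledged once the standby has received it, which matches the "synchronous backup" requirement above.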

Snakemake - Override LSF (bsub) cluster config in a rule-specific manner

Submitted by 我的梦境 on 2019-12-07 15:46:30
Question: Is it possible to define default settings for memory and resources in the cluster config file, and then override them in a rule-specific manner when needed? Is the resources field in rules directly tied to the cluster config file, or is it just a fancier params field for readability purposes? In the example below, how do I use the default cluster config for rule a, but apply custom changes (memory=40000 and rusage=15000) in rule b? cluster.json: { "__default__": { "memory": 20000, "resources": "\"rusage[mem=8000] span[hosts=1]\"", "output": "logs/cluster/{rule}.{wildcards}.out", "error": "logs/cluster/…
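One hedged answer sketch (rule names a and b are from the question; the bsub flags are assumptions): Snakemake's cluster config falls back to __default__ per key, so rule b only needs the keys it overrides, and rule a needs no entry at all. The resources directive in rules is a separate mechanism and does not read cluster.json:

```
{
    "__default__": {
        "memory": 20000,
        "resources": "\"rusage[mem=8000] span[hosts=1]\""
    },
    "b": {
        "memory": 40000,
        "resources": "\"rusage[mem=15000] span[hosts=1]\""
    }
}
```

This would be submitted with something like: snakemake --cluster-config cluster.json --cluster "bsub -M {cluster.memory} -R {cluster.resources}", so {cluster.memory} resolves to 20000 for rule a and 40000 for rule b.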

white pixels cluster extraction

Submitted by 北战南征 on 2019-12-07 13:30:59
Question: I am working on a fingerprint pore extraction project and am stuck at the last stage, extracting the pores (clusters of white pixels). I have two output images from which the pores should be obtained, but I don't know how to do it; the two images are also of different sizes: image1 is 240*320 and image2 is 230*310. Here are my images: image 1 (240*320), image2 (230*310). Here is what I am doing to extract the white pore clusters:
for i = 1:230
    for j = 1:310
        if image1(i,j)==1 && image2(i,j)==1…
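A hedged NumPy sketch of the intersection step (the loop bounds 230×310 in the question suggest cropping both masks to the smaller common size, which is an assumption about the intended alignment): crop to the overlapping region, AND the two binary masks, and collect the white-pixel coordinates. Grouping those pixels into clusters can then be done with e.g. scipy.ndimage.label or MATLAB's bwlabel:

```python
import numpy as np

def pore_pixels(image1, image2):
    """Crop both binary masks to their common size, AND them,
    and return the (row, col) coordinates of the white pixels."""
    h = min(image1.shape[0], image2.shape[0])
    w = min(image1.shape[1], image2.shape[1])
    common = np.logical_and(image1[:h, :w] == 1, image2[:h, :w] == 1)
    return np.argwhere(common)

# Toy 4x4 / 3x3 example standing in for the 240x320 / 230x310 images:
a = np.zeros((4, 4), dtype=int); a[1, 1] = a[2, 2] = 1
b = np.zeros((3, 3), dtype=int); b[1, 1] = 1
print(pore_pixels(a, b))   # → [[1 1]]
```

Note that cropping assumes the two images share a top-left alignment; if they do not, one image should be registered or resized to the other before the AND.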

Need for service discovery for docker engine swarm mode

Submitted by Deadly on 2019-12-07 11:37:24
Question: I'm confused about Docker swarm. As far as I know, the old way to run a swarm was to run the manager and workers in containers, before Docker Engine provided native support for swarm mode. The documentation for the old, containerized swarm explained how to set up service discovery using Consul, etcd, or ZooKeeper. Service discovery is necessary, since services run on random ports to avoid collisions, right? The documentation for Docker Engine swarm mode doesn't explain how to set up service discovery.
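A hedged sketch of why the new documentation is silent on this (the image and service names below are illustrative): swarm mode ships its own service discovery, so no external Consul/etcd/ZooKeeper store is needed:

```
# Illustrative commands; "web" and "nginx" are placeholders.
docker swarm init
docker service create --name web --replicas 2 -p 8080:80 nginx
# Other services on the same overlay network resolve "web" through the
# engine's built-in DNS; -p publishes port 8080 on every node via the
# routing mesh, which load-balances to the service's tasks.
```

In other words, discovery moved inside the engine: service names resolve via embedded DNS, and published ports are reachable on any node through the routing mesh.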

Websphere 7 clustered deployment

Submitted by *爱你&永不变心* on 2019-12-07 11:15:46
Question: We have a J2EE application, packaged as an EAR file, deployed on WAS 7; to make the application highly available it needs to be deployed to 3 cluster members. We have a Quartz Scheduler class whose job is to upload data from one database to another daily at 2:00 am. Now, the problem is that if the EAR is deployed on 3 different nodes for load balancing and high availability, all 3 copies will trigger the upload at the same time. How can we handle this? Is it possible to do some configuration…
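One common approach (a sketch, not necessarily how this particular application is wired): Quartz's JDBC job store has a clustered mode in which all three instances share one set of database tables and acquire a lock before firing, so each 2:00 am trigger executes on exactly one node. Assuming Quartz is configured through quartz.properties, the relevant settings are roughly:

```
# quartz.properties sketch: all three nodes point at the same DB tables
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
```

If a node dies, clustering also gives fail-over: another instance picks up the misfired trigger after the check-in interval.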

Submit job with python code (mpi4py) on HPC cluster

Submitted by ↘锁芯ラ on 2019-12-07 10:51:53
Question: I am working on Python code with MPI (mpi4py) and I want to run my code across many nodes (each node has 16 processors) in a queue on an HPC cluster. My code is structured as below:
from mpi4py import MPI
comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()
count = 0
for i in range(1, size):
    if rank == i:
        for j in range(5):
            res = some_function(some_argument)
            comm.send(res, dest=0, tag=count)
I am able to run this code perfectly fine on the head node of the cluster using the…
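To run the same script through the queue rather than on the head node, a submission script is handed to the scheduler. A hedged PBS-style sketch (the scheduler, resource line, and script name are assumptions; LSF and SLURM use different directives):

```
#!/bin/bash
#PBS -N mpi4py_job
#PBS -l nodes=2:ppn=16        # 2 nodes x 16 processors = 32 MPI ranks
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
mpirun -np 32 python my_script.py   # my_script.py is the code above
```

Separately, note that in the excerpt rank 0 never calls comm.recv, so as written the matching receives for those sends still need to be added on the rank-0 side.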

Clustering in ServiceMix 4

♀尐吖头ヾ 提交于 2019-12-07 09:51:30
问题 I'm trying to configure Apache ServiceMix 4 to provide load balancing feature mentioned in it's documentation (for example here: http://servicemix.apache.org/clustering.html). Although it's mentioned, I couldn't find the exact way how to do it. The idea is to have 2 ServiceMixes (in LAN, for example) with the same OSGi service installed in them. When client tries to use the service, the load balancer takes him to appropriate service instance on one of the ServiceMixes. Is there an easy way to