cluster-computing

ActiveMQ network of brokers: random connectivity with rebalanceClusterClients and updateClusterClients

夙愿已清 submitted on 2019-12-25 04:22:02
Question: I have a network of brokers with the following configuration: <transportConnectors> <transportConnector name="tomer-amq-test2" uri="tcp://0.0.0.0:61616" updateClusterClients="true" rebalanceClusterClients="true" updateClusterClientsOnRemove="true"/> </transportConnectors> I expect that when I connect using the URL failover:(tcp://tomer-amq-test2:61616)?backup=true, the broker will update the client with the full list of brokers it is connected to, and the client will connect to one at random.
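For reference, a minimal client-side sketch (Scala; assumes the ActiveMQ client library on the classpath, with the broker host name taken from the question). It is the failover transport on the client that lets a broker with updateClusterClients="true" push an updated broker list to connected clients:

```scala
import org.apache.activemq.ActiveMQConnectionFactory

object FailoverClientSketch {
  def main(args: Array[String]): Unit = {
    // With updateClusterClients=true on the broker, clients connected via the
    // failover transport receive the current list of networked brokers.
    val factory = new ActiveMQConnectionFactory(
      "failover:(tcp://tomer-amq-test2:61616)?backup=true")
    val connection = factory.createConnection()
    connection.start()
    // ... create sessions, producers, consumers as usual ...
    connection.close()
  }
}
```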

Build MPICH2 from source

风格不统一 submitted on 2019-12-25 03:36:21
Question: As a follow-up to this question, I started building MPICH2 from source. I found this tutorial, Installing MPICH2 on a Single Machine, and so far what I did is this: ./configure --disable-f77 --disable-fc --disable-fortran [seems to be OK], then make; sudo make install [long output with one warning: libtool: warning: relinking 'lib/libmpicxx.la']. Then: root@pythagoras:/home/gsamaras/mpich-3.1.4# mpich2version bash: mpich2version: command not found What am I doing wrong? Notice that I had first installed
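One note worth recording here (an assumption based on MPICH 3.x behavior, not stated in the excerpt): the source tree is mpich-3.1.4, and MPICH 3.x renamed the MPICH2-era version tool, so a quick check in the same shell transcript style would be:

```sh
# MPICH 3.x renamed the MPICH2-era tool; try this instead of mpich2version:
mpichversion
# If it is still not found, make sure the install prefix's bin directory
# (default /usr/local/bin) is on PATH:
export PATH=/usr/local/bin:$PATH
```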

MSMQ Cluster losing messages on failover

夙愿已清 submitted on 2019-12-25 02:28:06
Question: I've got an MSMQ cluster set up with nodes (active/passive) that share a drive. Here are the tests I'm performing: I send messages to the queue that are marked recoverable. I then take the MSMQ cluster group offline and bring it online again. Result: the messages are still there. I then simulate failover by moving the group to node 2. It moves over successfully, but the messages aren't there. I'm sending the messages as recoverable, and the MSMQ cluster group has a drive that both nodes can access.

MySQL Cluster 7.2.2 ignored settings and TABLE IS FULL error

柔情痞子 submitted on 2019-12-25 00:07:52
Question: These are my my.cnf settings: [mysqld] ndbcluster #engine_condition_pushdown=0 optimizer_switch=engine_condition_pushdown=off # IP address of the cluster management node ndb-connectstring=127.0.0.1 [mysql_cluster] # IP address of the cluster management node ndb-connectstring=127.0.0.1 [ndbd default] NoOfReplicas= 2 MaxNoOfConcurrentOperations= 10000 DataMemory= 320M IndexMemory= 96M TimeBetweenWatchDogCheck= 30000 DataDir= /usr/local/mysql-cluster-gpl-7.2.2-osx10.6-x86_64/mysql-cluster
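A plausible explanation for the "ignored settings" in the title, for reference (an assumption, not confirmed by the excerpt): NDB data-node sections such as [ndbd default] are read by the management server from config.ini, not from my.cnf, so placing DataMemory and friends in my.cnf silently does nothing and the defaults stay in effect. A sketch of the split:

```
# config.ini -- read by ndb_mgmd; this is where data-node memory limits belong
[ndbd default]
NoOfReplicas=2
DataMemory=320M
IndexMemory=96M

# my.cnf -- read by mysqld; only the SQL-node side goes here
[mysqld]
ndbcluster
ndb-connectstring=127.0.0.1
```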

Convert Array[DenseVector] to CSV with Scala

╄→尐↘猪︶ㄣ submitted on 2019-12-24 21:06:21
Question: I am using the Spark KMeans function with Scala and I need to save the cluster centers obtained into a CSV. This val is of type Array[DenseVector]: val clusters = KMeans.train(parsedData, numClusters, numIterations) val centers = clusters.clusterCenters I tried converting centers to an RDD and then from the RDD to a DataFrame, but I ran into a lot of problems (e.g., import spark.implicits._ / SQLContext.implicits._ is not working and I cannot use .toDF). I was wondering if there is another way to make a
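For what it's worth, a minimal sketch that sidesteps the RDD/DataFrame route entirely: clusterCenters is a small driver-local array, so plain JVM file I/O is enough (the output path and helper name are illustrative; assumes the mllib Vector type that KMeans.train returns):

```scala
import java.io.PrintWriter
import org.apache.spark.mllib.linalg.Vector

// One line per cluster center, components comma-separated.
def centersToCsv(centers: Array[Vector], path: String): Unit = {
  val writer = new PrintWriter(path)
  try centers.foreach(v => writer.println(v.toArray.mkString(",")))
  finally writer.close()
}

// usage: centersToCsv(clusters.clusterCenters, "/tmp/centers.csv")
```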

Cluster with Passport NodeJs

北战南征 submitted on 2019-12-24 18:06:25
Question: In our application, we use Passport middleware (LocalStrategy; sessions are stored in a MongoStore). We decided to use clusters in order to speed things up and ease the load on the server. The problem is that after this change, Passport is always in a "not authorized" state. Is it possible to use Passport (LocalStrategy) with clustering? Answer 1: By default, Passport stores session data in memory. With clustering, it is possible for one worker to store the session data while subsequent requests are handled by other workers. You must use a

How to open a JChannel (JGroups) using the OpenShift Wildfly 8 cartridge

时光毁灭记忆、已成空白 submitted on 2019-12-24 14:34:46
Question: We are developing a Java EE application running on Wildfly 8 in the Wildfly OpenShift cartridge. The OpenShift application is set to scaled; therefore, more than one JVM node will be running as a cluster. To communicate between the different nodes, I would like to use JGroups. However, I fail to open a new JChannel even though JGroups itself seems to work. The application also works if I deploy it locally to e.g. 3 Wildfly standalone instances. The default standalone
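For context, a bare-bones JChannel sketch (Scala, against the JGroups 3.x API that ships with Wildfly 8; the cluster name and message are illustrative). One caveat worth flagging as an assumption, since the excerpt is cut off: OpenShift gears typically block UDP multicast, so the default udp.xml stack often cannot form a channel there, and a TCP/TCPPING stack is the usual workaround:

```scala
import org.jgroups.{JChannel, Message, ReceiverAdapter}

object JChannelSketch {
  def main(args: Array[String]): Unit = {
    // The no-arg constructor loads udp.xml from the classpath; pass a
    // TCP-based stack file instead where multicast is blocked.
    val channel = new JChannel()
    channel.setReceiver(new ReceiverAdapter {
      override def receive(msg: Message): Unit =
        println(s"received: ${msg.getObject}")
    })
    channel.connect("demo-cluster")
    channel.send(new Message(null, "hello")) // null destination = broadcast
    channel.close()
  }
}
```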

Zookeeper - upgrade from standalone to quorum

白昼怎懂夜的黑 submitted on 2019-12-24 08:39:05
Question: Currently I have a standalone ZK instance used in a test system. But this test system has become a production system, and I would like to upgrade from 1 ZK instance to 3 without compromising the availability of the SolrCloud system that ZK is overseeing. From what I've read, upgrading from 3 to 5 and so on is pretty easy using rolling restarts, but I haven't found any info on going from standalone (1 instance) to 3. Does anyone have any insight on this (anyone who might have tried it)? Thanks! Answer 1: I
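For reference, the piece that standalone mode lacks is the quorum server list in zoo.cfg plus a matching myid file on each node; a typical 3-node layout looks like this (hypothetical addresses, not from the question):

```
# zoo.cfg, identical on all three nodes; 2888 is the quorum port, 3888 the
# leader-election port. Each node's dataDir/myid file contains its own N.
server.1=10.0.0.1:2888:3888
server.2=10.0.0.2:2888:3888
server.3=10.0.0.3:2888:3888
```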

Hazelcast distributed map processor execution on a single node

女生的网名这么多〃 submitted on 2019-12-24 06:20:36
Question: I use Hazelcast within a Spring Boot MVC application that supports high availability; it has 4 instances of the same logic which run active-active. All 4 share one distributed map of objects. As a result of a user action (access to a specific controller), I trigger an EntryProcessor (map.submitToKey) on the shared map. I thought that such an action would run the processor only once, on a single node, but instead all 4 nodes run the same processor at the same time. Is there an option to
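For illustration, a minimal sketch of the submitToKey path (Scala, Hazelcast 3.x API; class and map names are made up). An entry processor submitted this way executes on the member that owns the key's partition, and by default also on backup replicas, rather than on every member, so one common cause of seeing it fire on all 4 nodes is each instance submitting it independently (a guess; the excerpt is cut off):

```scala
import java.util.Map.Entry
import com.hazelcast.core.{Hazelcast, IMap}
import com.hazelcast.map.AbstractEntryProcessor

// Runs on the member owning the key's partition; applyOnBackup=false keeps
// it off the backup replicas as well.
class IncrementProcessor extends AbstractEntryProcessor[String, Integer](false) {
  override def process(entry: Entry[String, Integer]): AnyRef = {
    entry.setValue(entry.getValue + 1)
    null
  }
}

object SubmitSketch {
  def main(args: Array[String]): Unit = {
    val hz = Hazelcast.newHazelcastInstance()
    val map: IMap[String, Integer] = hz.getMap("shared")
    map.put("key", 0)
    map.submitToKey("key", new IncrementProcessor) // executes once, on the owner
  }
}
```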

GlusterFS as shared storage for ActiveMQ master/slave cluster

感情迁移 submitted on 2019-12-24 05:37:12
Question: I want to set up an ActiveMQ cluster. As I encountered problems with the shared-nothing approach, I'd like to do it using a shared filesystem. However, the ActiveMQ documentation warns about possible problems related to filesystem locks. As I'm not sure, I'd like to ask whether GlusterFS would be a good choice for the shared filesystem. Answer 1: Shared-storage master/slave requires that the underlying file system supports network file locks. GlusterFS seems to support network locks, going by the documentation
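For completeness, the shared-storage side of such a setup is just both brokers' persistence adapter pointing at the same directory on the shared mount (the mount path below is illustrative); master election is then the file lock itself:

```xml
<persistenceAdapter>
  <!-- Master and slave use identical config; whichever broker acquires the
       lock on this directory becomes master, the other waits as slave. -->
  <kahaDB directory="/mnt/glusterfs/activemq/kahadb"/>
</persistenceAdapter>
```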