cluster-computing

Get the master IP address from a Hazelcast grid

Submitted on 2020-01-24 10:29:07
Question: I would like to get the master IP address used by the Hazelcast node in HazelcastInstanceImpl from a HazelcastInstance. Does somebody know how to do that? Thanks for your help.

Answer 1: There is no real master in Hazelcast clusters. The oldest node plays some kind of special role, so you can imagine this one as the "master". To get this node, retrieve the first element from the member list:

Cluster cluster = hazelcastInstance.getCluster();
Set<Member> members = cluster.getMembers();
Member
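Picking up where that snippet cuts off, a minimal sketch of reading the oldest member's address, assuming Hazelcast 3.x (the Cluster and Member classes moved to the com.hazelcast.cluster package in 4.x); the class name OldestMemberAddress is just for illustration:

    import java.net.InetSocketAddress;
    import java.util.Set;

    import com.hazelcast.core.Cluster;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.Member;

    public class OldestMemberAddress {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // getMembers() is ordered by age: the first entry is the oldest node,
            // i.e. the one the answer describes as playing the "master" role.
            Cluster cluster = hz.getCluster();
            Set<Member> members = cluster.getMembers();
            Member oldest = members.iterator().next();

            InetSocketAddress address = oldest.getSocketAddress();
            System.out.println("Oldest member IP: " + address.getAddress().getHostAddress());

            hz.shutdown();
        }
    }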

Horizontal scaling of JSF 2.0 application

Submitted on 2020-01-22 14:26:32
Question: Given that JavaServer Faces is inherently stateful on the server side, what methods are recommended for horizontally scaling a JSF 2.0 application? If an application runs multiple JSF servers, I can imagine the following scenarios: Sticky sessions: send all requests matching a given session to the same server. Question: what technology is commonly used to achieve this? Problem: server failure results in lost sessions... and generally seems like a fragile architecture, especially when starting
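For the "what technology is commonly used" part, sticky sessions are usually handled by the front-end load balancer rather than by JSF itself. A minimal sketch of an Apache httpd (mod_proxy_balancer) front end that pins each JSESSIONID to the backend that created it; the hostnames, the /app context and the node route names are placeholders, and the backends would set a matching jvmRoute:

    <Proxy "balancer://jsfcluster">
        BalancerMember "http://app1.example.com:8080" route=node1
        BalancerMember "http://app2.example.com:8080" route=node2
        ProxySet stickysession=JSESSIONID|jsessionid
    </Proxy>
    ProxyPass        "/app" "balancer://jsfcluster/app"
    ProxyPassReverse "/app" "balancer://jsfcluster/app"

The usual complement, as the question's "Problem" notes, is session replication (for example marking the web app <distributable/> in web.xml) so that a failed node does not take its sessions with it.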

Spark - How to run a standalone cluster locally

Submitted on 2020-01-22 04:44:27
Question: Is there the possibility to run the Spark standalone cluster locally on just one machine (which is basically different from just developing jobs locally, i.e., local[*])? So far I am running 2 different VMs to build a cluster; what if I could run a standalone cluster on the very same machine, having for instance three different JVMs running? Could something like having multiple loopback addresses do the trick?

Answer 1: Yes, you can do it: launch one master and one worker node and you are good
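A minimal sketch of what that answer describes, assuming a local installation under $SPARK_HOME (start-slave.sh was renamed start-worker.sh in Spark 3.1+, and my_job.py is a placeholder):

    # Start the standalone master on this machine (web UI on http://localhost:8080).
    $SPARK_HOME/sbin/start-master.sh

    # Start a worker JVM on the same machine, pointing at the local master.
    $SPARK_HOME/sbin/start-slave.sh spark://localhost:7077

    # Submit a job against the local standalone cluster instead of local[*].
    $SPARK_HOME/bin/spark-submit --master spark://localhost:7077 my_job.py

Repeating the worker step with different --port and --webui-port values (or setting SPARK_WORKER_INSTANCES) gives the several-JVMs-on-one-machine setup the question asks about.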

R cluster analysis and dendrogram with correlation matrix

Submitted on 2020-01-21 09:21:50
Question: I have to perform a cluster analysis on a large amount of data. Since I have a lot of missing values I made a correlation matrix:

corloads = cor(df1[,2:185], use = "pairwise.complete.obs")

Now I am not sure how to go on. I read a lot of articles and examples, but nothing really works for me. How can I find out how many clusters are good for me? I already tried this:

dissimilarity = 1 - corloads
distance = as.dist(dissimilarity)
plot(hclust(distance), main="Dissimilarity = 1 - Correlation",
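One common way to continue from that code, sketched under the question's own variable names (the choice of k = 4 is purely illustrative):

    # Hierarchical clustering on the correlation-based dissimilarity.
    dissimilarity <- 1 - corloads
    distance <- as.dist(dissimilarity)
    hc <- hclust(distance)

    # Look at the dendrogram and at the merge heights to judge where to cut:
    # a large jump in height suggests a natural number of clusters.
    plot(hc, main = "Dissimilarity = 1 - Correlation")
    plot(rev(hc$height), type = "b", ylab = "Merge height")

    # Cut the tree into k groups once a plausible k has been chosen.
    clusters <- cutree(hc, k = 4)
    table(clusters)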

Running MPI on two hosts

Submitted on 2020-01-21 02:29:08
Question: I've looked through many examples and I'm still confused. I've compiled a simple latency check program from here, and it runs perfectly on one host, but when I try to run it on two hosts it hangs. However, running something like hostname works fine:

[hamiltont@4 latency]$ mpirun --report-bindings --hostfile hostfile --rankfile rankfile -np 2 hostname
[4:16622] [[5908,0],0] odls:default:fork binding child [[5908,1],0] to slot_list 0 4 [5:12661] [[5908,0],1] odls:default:fork binding child [
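One frequent reason for an MPI program to hang across hosts while plain hostname works is that the MPI transport picks a network interface on which the two machines cannot reach each other. A sketch of the usual workaround, assuming Open MPI and that eth0 is the interface the hosts share (the interface name and the ./latency binary are placeholders):

    # Restrict Open MPI's TCP transport and out-of-band channel to a reachable interface.
    mpirun --hostfile hostfile --rankfile rankfile -np 2 \
           --mca btl_tcp_if_include eth0 \
           --mca oob_tcp_if_include eth0 \
           ./latency

Passwordless SSH in both directions and identical firewall rules on both hosts are the other usual suspects to check.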

How to run a job array in R using the Rscript command from the command line? [closed]

Submitted on 2020-01-20 04:26:24
Question: I am wondering how I might be able to run 500 parallel jobs in R using the Rscript function. I currently have an R file that has this header on top:

args <- commandArgs(TRUE)
B <- as.numeric(args[1])
Num.Cores <- as.numeric(args[2])

Outside of the R file, I wish to pass which of
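One way to launch such a job array, sketched here assuming a SLURM scheduler (the script name my_script.R and the core count are placeholders; args[1] receives the array index as B and args[2] the core count as Num.Cores):

    #!/bin/bash
    #SBATCH --job-name=r-array
    #SBATCH --array=1-500          # one task per value of B
    #SBATCH --cpus-per-task=4

    Rscript my_script.R "$SLURM_ARRAY_TASK_ID" "$SLURM_CPUS_PER_TASK"

Without a scheduler, a plain shell loop (or seq 1 500 piped into xargs -P) that calls Rscript with the two arguments achieves a similar effect on a single machine.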

Can't join Kubernetes master from node hosts created by Vagrant

Submitted on 2020-01-17 07:19:47
Question: I am using kubeadm to install a Kubernetes cluster on Vagrant VMs, following the official guide: https://kubernetes.io/docs/getting-started-guides/kubeadm/ It was successful when installing on the master host: kubeadm init And it generated a token:

[root@localhost ~]# kubeadm token list
TOKEN                    TTL        EXPIRES   USAGES                   DESCRIPTION
1eb2c2.8c9s81b32cc9937e  <forever>  <never>   authentication,signing   The default bootstrap token generated by 'kubeadm init'.

Use this token to join from the node hosts: kubeadm join --token=1eb2c2
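With Vagrant, the usual stumbling block is that kubeadm advertises the VM's NAT interface, which the other VMs cannot reach. A sketch of the relevant commands, assuming the master's host-only address is 192.168.33.10 (the address, token and hash are placeholders, and the exact join flags vary by kubeadm version):

    # On the master: advertise the host-only address instead of the NAT one.
    kubeadm init --apiserver-advertise-address=192.168.33.10

    # On each node: join via the master's reachable address and the generated token.
    kubeadm join --token <token> 192.168.33.10:6443 \
        --discovery-token-ca-cert-hash sha256:<hash>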

Unable to bind elasticsearch transport service to external interface

Submitted on 2020-01-16 19:00:28
Question: I am trying to set up an elasticsearch cluster with 2 virtual machines, but I am not able to configure the cluster transport service on an external interface. I was able to use localhost:9300 as the transport service, but I cannot use a localhost URL to join the cluster, and it throws an error when I use the external interface name/IP to configure the cluster.

[2017-12-22T06:58:56,979][INFO ][o.e.t.TransportService ] [node-1] publish_address {10.0.1.33:9300}, bound_addresses {10.0.1.33:9300}
[2017-12
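A minimal sketch of the per-node settings that usually make the transport layer reachable from the other VM, assuming Elasticsearch 5.x/6.x (which matches the 2017 log format); the cluster name, node names and addresses are placeholders:

    # elasticsearch.yml on the first VM (mirror it on the second with its own node.name and IP)
    cluster.name: my-cluster
    node.name: node-1
    network.host: 10.0.1.33
    transport.tcp.port: 9300
    discovery.zen.ping.unicast.hosts: ["10.0.1.33", "10.0.1.34"]
    discovery.zen.minimum_master_nodes: 2

After changing network.host away from localhost, port 9300 also has to be open between the two VMs, which is a common reason the join still fails.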

Ceph-rgw service stops automatically after installation

Submitted on 2020-01-16 09:39:07
Question: In my local cluster (4 Raspberry Pis) I am trying to configure an RGW gateway. Unfortunately the service disappears automatically after 2 minutes.

[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host OSD1 and default port 7480

cephuser@admin:~/mycluster $ ceph -s
  cluster:
    id: 745d44c2-86dd-4b2f-9c9c-ab50160ea353
    health: HEALTH_WARN too few PGs per OSD (24 < min 30)
  services:
    mon: 1 daemons, quorum admin
    mgr: admin(active)
    osd: 4 osds: 4 up, 4 in
    rgw: 1 daemon active
  data:
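To see why the daemon exits, the first stop is usually the radosgw systemd unit and its log on the gateway host. A sketch, assuming ceph-deploy's default instance naming (rgw.OSD1); the unit and log file names may differ on your install:

    # On OSD1: check whether the radosgw unit exited and what it reported.
    systemctl status ceph-radosgw@rgw.OSD1
    journalctl -u ceph-radosgw@rgw.OSD1 --since "10 minutes ago"

    # The daemon's own log often shows why it gave up, e.g. it could not
    # create its pools on a cluster that is already warning about PG counts.
    tail -n 50 /var/log/ceph/ceph-client.rgw.OSD1.log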

Utilizing the power of clusters in the context of databases?

Submitted on 2020-01-15 09:20:42
Question: I have a 22-machine cluster with a common NFS mount. On each machine, I am able to start a new MySQL instance. I finished creating a table with about 71 million entries and started an ADD INDEX operation. It's been more than 12 hours and the operation is still going on. So I logged onto one of my other machines in the cluster and started a new MySQL daemon instance on that machine using:

mysqld_safe --user=username

And then created a MySQL client on the same machine to connect to the
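For reference, a second, independent mysqld instance is normally given its own data directory, port and socket so it cannot collide with the instance that is still running the ADD INDEX; two servers must never share one datadir. A sketch with placeholder paths and port:

    # Start a separate instance with its own datadir, port and socket (placeholders).
    mysqld_safe --user=username \
                --datadir=/nfs/mysql-instance2/data \
                --port=3307 \
                --socket=/tmp/mysql-instance2.sock &

    # Connect a client to that specific instance via its socket.
    mysql --socket=/tmp/mysql-instance2.sock -u username -p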