cluster-computing

Create on-premise Service Fabric cluster fails with exception

流过昼夜 · Submitted on 2019-12-11 13:49:58
Question: I am trying to follow the instructions at https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-creation-for-windows-server/#plan-and-prepare-for-cluster-deployment to create a dev cluster on a test machine. I am using the sample ClusterConfig.Unsecure.DevCluster.json file. However, creation failed with the following exception: Create Cluster failed with exception: System.AggregateException: One or more errors occurred. ---> System.NullReferenceException: Object…

In a Tomcat cluster, how to share beans in an application?

左心房为你撑大大i · Submitted on 2019-12-11 12:27:29
Question: This might sound like a dumb or simple question, but I have little to no experience with clustering of any kind, and I'm curious if and how a certain scenario is possible. Let's say I've set up a cluster of N Tomcat instances and deployed my application App1 across all N instances. What would I need to do to have certain beans in the application — not all, but some — be "shared" across the cluster? I.e., if I had a bean for WebsiteSettings, I'd like to have some…
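Tomcat's built-in clustering replicates HTTP sessions, not arbitrary application beans, so a common pattern for a shared WebsiteSettings-style bean is to put it behind a small store interface whose production implementation is a distributed map (Hazelcast, Infinispan, Redis, etc.). A minimal stdlib-only sketch of that pattern, with a plain in-memory map standing in for the distributed store (all class and key names here are hypothetical):

```java
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Shared state must be Serializable so a clustered store can replicate it.
class WebsiteSettings implements Serializable {
    final String theme;
    WebsiteSettings(String theme) { this.theme = theme; }
}

// Abstraction over the shared store; in production this would wrap a
// distributed map (e.g. a Hazelcast IMap) instead of a local ConcurrentHashMap.
interface SettingsStore {
    void put(String key, WebsiteSettings value);
    WebsiteSettings get(String key);
}

class InMemorySettingsStore implements SettingsStore {
    private final Map<String, WebsiteSettings> map = new ConcurrentHashMap<>();
    public void put(String key, WebsiteSettings value) { map.put(key, value); }
    public WebsiteSettings get(String key) { return map.get(key); }
}

public class SharedBeanSketch {
    public static void main(String[] args) {
        SettingsStore store = new InMemorySettingsStore();
        store.put("site", new WebsiteSettings("dark")); // instance A writes
        System.out.println(store.get("site").theme);    // instance B would read the same value
    }
}
```

The point of the interface is that the application code never knows whether the store is local or clustered; only the wiring changes per environment.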

How to secure an Apache Ignite cluster

泪湿孤枕 · Submitted on 2019-12-11 12:26:37
Question: How can I provide authentication for my Apache Ignite cluster? Basically I'm looking to set a username and password. Otherwise, allowing a list of trusted (whitelisted) clients is also fine. Answer 1: This can be implemented on your own: https://apacheignite.readme.io/docs/advanced-security or you can use third-party ready-made solutions: https://docs.gridgain.com/docs/security-and-audit Answer 2: Apache Ignite does not provide these kinds of security capabilities in its open-source version. As mentioned…
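For reference, more recent open-source Ignite releases (2.5 and later) do ship basic username/password authentication, but only when native persistence is enabled; the default superuser is ignite/ignite, changed afterwards with SQL ALTER USER. A sketch of the Spring XML fragment that enables it (an assumption about your setup — verify against the Ignite docs for your version):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Authentication only takes effect with native persistence enabled. -->
    <property name="authenticationEnabled" value="true"/>
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```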

Spark web UI unreachable

萝らか妹 · Submitted on 2019-12-11 11:46:31
Question: I have installed Spark 2.0.0 on 12 nodes (in standalone cluster mode). When I launch it I get this: ./sbin/start-all.sh starting org.apache.spark.deploy.master.Master, logging to /home/mName/fer/spark-2.0.0-bin-hadoop2.7/logs/spark-mName-org.apache.spark.deploy.master.Master-1-ibnb25.out localhost192.17.0.17: ssh: Could not resolve hostname localhost192.17.0.17: Name or service not known 192.17.0.20: starting org.apache.spark.deploy.worker.Worker, logging to /home/mbala/fer/spark-2.0.0-bin…
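The run-together hostname "localhost192.17.0.17" in that error usually means two entries in conf/slaves ended up on the same line (a missing newline, often from editing the file on another platform). The file expects exactly one worker hostname or IP per line; a sketch using the addresses visible in the log above:

```text
# conf/slaves — one worker host per line
192.17.0.17
192.17.0.20
```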

How to set up Spark cluster on Windows machines?

不打扰是莪最后的温柔 · Submitted on 2019-12-11 08:48:02
Question: I am trying to set up a Spark cluster on Windows machines. The way to go here is using standalone mode, right? What are the concrete disadvantages of not using Mesos or YARN? And how much pain would it be to use either of those? Does anyone have experience here? Answer 1: FYI, I got an answer in the user group: https://groups.google.com/forum/#!topic/spark-users/SyBJhQXBqIs Standalone mode is indeed the way to go. Mesos does not work under Windows, and YARN probably doesn't either. Answer 2:…

JMS durable subscriber in a cluster with multiple instances

末鹿安然 · Submitted on 2019-12-11 07:34:02
Question: I am going to be using Payara, BTW. Suppose I have: a JMS topic; an MDB configured as a durable topic subscriber; multiple instances of the MDB deployed across the cluster, all using the same client ID value to make the durable subscription. In this scenario, given the way client ID values and durable subscriptions work, is it correct to say that only one of the MDB instances across the cluster will succeed in connecting and the others will fail? Thanks! Suppose you…
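For context, a classic durable subscription is identified by the (clientId, subscriptionName) pair, and a given clientId can normally be attached to only one active connection at a time — which is exactly why sharing it across instances is problematic; JMS 2.0 "shared durable subscriptions" were introduced to allow multiple consumers on one subscription. A sketch of the classic MDB activation config in deployment-descriptor form (the property names come from the EJB/JMS specs; the values are hypothetical):

```xml
<activation-config>
  <activation-config-property>
    <activation-config-property-name>destinationType</activation-config-property-name>
    <activation-config-property-value>javax.jms.Topic</activation-config-property-value>
  </activation-config-property>
  <activation-config-property>
    <activation-config-property-name>subscriptionDurability</activation-config-property-name>
    <activation-config-property-value>Durable</activation-config-property-value>
  </activation-config-property>
  <activation-config-property>
    <activation-config-property-name>clientId</activation-config-property-name>
    <activation-config-property-value>app1-subscriber</activation-config-property-value>
  </activation-config-property>
  <activation-config-property>
    <activation-config-property-name>subscriptionName</activation-config-property-name>
    <activation-config-property-value>app1-sub</activation-config-property-value>
  </activation-config-property>
</activation-config>
```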

Spark 2.2 Sort fails with huge dataset

百般思念 · Submitted on 2019-12-11 07:33:47
Question: I am facing an issue when sorting a huge dataset (1.2 T) based on 4 columns. Right after the sort, I also need to partition this dataset when writing the final dataset to HDFS, based on one of the columns used in the sort function. Here is a Stack Overflow post I made a few days ago describing another issue I had with the same code, but with regard to joining two datasets: previous issue. I used the answer of that post to improve my code. Now the join works fine. I tested the code without…

Struts2 portlet NotSerializable exception

China☆狼群 · Submitted on 2019-12-11 06:47:13
Question: I'm currently experiencing problems implementing JSR 168 portlets inside a clustered environment using Struts2 with the portlet plugin. Whenever I use the model-driven interface and submit the form, I receive the stack trace below: SEVERE: Unable to serialize delta request for sessionid [0F246549355FD6749A5CF6EAE761F77F.worker1] java.io.NotSerializableException: com.opensymphony.xwork2.inject.ContainerImpl$ConstructorInjector at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream…
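The NotSerializableException above is thrown when the container tries to replicate a session holding a non-serializable object graph (here an XWork container internal that ended up in session scope). A small stdlib-only helper for checking, before deploying to a cluster, whether a would-be session attribute actually serializes (the helper and its name are my own sketch, not part of Struts2):

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;

public class SerializationCheck {
    // Returns true if the whole object graph can be written with Java
    // serialization, i.e. it would survive clustered session replication.
    static boolean isSerializable(Object candidate) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(candidate);
            return true;
        } catch (Exception e) { // typically java.io.NotSerializableException
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isSerializable("a plain string")); // true: String is Serializable
        System.out.println(isSerializable(new Object()));     // false: Object is not
    }
}
```

Running every session attribute through a check like this in an integration test catches replication failures long before they show up as SEVERE log lines on a worker node.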

How to collect Hadoop Cluster Size/Number of Cores Information

牧云@^-^@ · Submitted on 2019-12-11 06:28:29
Question: I am running my Hadoop jobs on a cluster consisting of multiple machines whose sizes are not known (main memory, number of cores, disk size, etc. per machine). Without using any OS-specific library (*.so files, I mean), is there any class or tool in Hadoop itself, or some additional library, with which I could collect information like the following while the Hadoop MR jobs are being executed: total number of cores / number of cores employed by the job; total available main memory / allocated available main memory…
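Without OS-specific libraries, the JVM itself can report what each task sees via java.lang.Runtime; running a probe like this inside a mapper's setup() and aggregating the values through job counters gives a rough per-node picture (note that on YARN these numbers reflect the task container, not the physical machine; cluster-wide totals would come from Hadoop's own APIs, e.g. the ResourceManager REST API). A minimal sketch of the portable JVM side:

```java
public class NodeResources {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Cores visible to this JVM (the container's share, not necessarily the box's).
        int cores = rt.availableProcessors();
        // Max heap this JVM may use, in MiB (driven by -Xmx, not physical RAM).
        long maxHeapMib = rt.maxMemory() / (1024 * 1024);
        System.out.println("cores=" + cores + " maxHeapMiB=" + maxHeapMib);
    }
}
```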

Cluster Failover

橙三吉。 · Submitted on 2019-12-11 06:20:00
Question: I know I'm asking something very obvious about cluster failover. I read on redis.io that if any master node in the cluster fails, it will affect the other master nodes until a slave takes over. In my setup I'm not defining any slaves, just working with 3 masters. I'm thinking of modifying the redis-trib.rb file so that it removes the failed server and starts the cluster with the other 2 nodes. I'm confused about a couple of things: 1) Resharding is not possible until the failed server…