distributed

When to use Paxos (real practical use cases)?

☆樱花仙子☆ submitted on 2019-12-02 16:19:18
Could someone give me a list of real use cases of Paxos, that is, real problems that require consensus as part of a bigger problem? Is the following a use case of Paxos? Suppose two clients are playing poker against each other on a poker server, and the poker server is replicated. My understanding of Paxos is that it could be used to maintain consistency of the in-memory data structures that represent the current hand of poker; that is, to ensure that all replicas have exactly the same in-memory state of the hand. But why is Paxos necessary? Suppose a new card needs to be dealt. Each replica running …
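
Dealing a card is in fact a textbook consensus problem: each replica would otherwise draw its own random card and the replicated hands would silently diverge. A minimal Java sketch of the idea, assuming a hypothetical Consensus interface standing in for a real Paxos implementation (the interface, class names, and deck representation are all illustrative):

    import java.util.List;
    import java.util.Random;

    // Hypothetical stand-in for a Paxos-backed consensus service: propose()
    // blocks until the replicas agree and returns the single chosen value,
    // which may be another replica's proposal rather than our own.
    interface Consensus<T> {
        T propose(T value);
    }

    class PokerReplica {
        private final Consensus<Integer> paxos;
        private final List<Integer> deck;        // remaining cards, same order on every replica
        private final Random rng = new Random(); // seeded differently on each replica

        PokerReplica(Consensus<Integer> paxos, List<Integer> deck) {
            this.paxos = paxos;
            this.deck = deck;
        }

        // Every replica proposes its own random card, but all of them deal
        // and remove the one agreed card, so the hands stay identical.
        int dealCard() {
            Integer proposal = deck.get(rng.nextInt(deck.size()));
            Integer agreed = paxos.propose(proposal);
            deck.remove(agreed); // remove(Object): removes the card, not an index
            return agreed;
        }
    }

Whichever replica's proposal wins the round, propose() hands the same agreed card to every replica, which is exactly the consistency property the question is after.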

What algorithms are there for failover in a distributed system?

空扰寡人 submitted on 2019-12-02 14:12:48
I'm planning on making a distributed database system using a shared-nothing architecture and multiversion concurrency control. Redundancy will be achieved through asynchronous replication (it is acceptable to lose some recent changes in case of a failure, as long as the data in the system remains consistent). For each database entry, one node has the master copy (only that node has write access to it), and one or more other nodes have secondary copies of the entry for scalability and redundancy purposes (the secondary copies are read-only). When the master copy of an entry is updated, …
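
A common building block for this kind of failover is a heartbeat failure detector that decides when an entry's master should be considered dead so that a secondary can be promoted. A minimal JDK-only sketch (the timeout policy and string node IDs are illustrative assumptions, not something from the question):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Suspects a node once it has missed heartbeats for longer than 'timeout'.
    class FailureDetector {
        private final Map<String, Instant> lastHeartbeat = new ConcurrentHashMap<>();
        private final Duration timeout;

        FailureDetector(Duration timeout) {
            this.timeout = timeout;
        }

        void onHeartbeat(String nodeId) {
            lastHeartbeat.put(nodeId, Instant.now());
        }

        boolean isSuspected(String nodeId) {
            Instant last = lastHeartbeat.get(nodeId);
            return last == null || last.plus(timeout).isBefore(Instant.now());
        }
    }

In practice a detector like this is paired with a coordination mechanism (ZooKeeper leases, Raft, or the Paxos discussed above) so that at most one secondary is actually promoted to master.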

Real World Use of Zookeeper [closed]

一个人想着一个人 submitted on 2019-12-02 13:49:40
I've been looking at ZooKeeper recently and wondered whether anybody was using it currently and what they were specifically using it for storing. The most common use case is configuration information, but what kind of data, and how much data, are you storing? The Apache CXF implementation of DOSGi uses ZooKeeper for its service registration repository. Individual containers have a distributed software (dsw) bundle that listens for all service events, and when the status of a service with a property indicating distribution changes, the dsw talks to the discovery bundle, which, in the reference …
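
For the configuration use case mentioned above, the usual pattern is a znode holding the config bytes plus a watch that re-fires on every change. A minimal Java sketch against the standard ZooKeeper client API (the connect string, session timeout, and znode path are made-up examples):

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ConfigWatcher implements Watcher {
        private static final String PATH = "/myapp/config"; // illustrative path
        private final ZooKeeper zk;

        public ConfigWatcher() throws Exception {
            zk = new ZooKeeper("localhost:2181", 15000, this);
        }

        public byte[] readConfig() throws Exception {
            // Passing 'this' as the watcher re-registers the one-shot watch
            // on every read.
            return zk.getData(PATH, this, null);
        }

        @Override
        public void process(WatchedEvent event) {
            if (event.getType() == Event.EventType.NodeDataChanged) {
                try {
                    System.out.println("config updated: " + new String(readConfig()));
                } catch (Exception e) {
                    // in real code: handle session loss / retries
                }
            }
        }
    }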

FileNotFoundException when using Hadoop distributed cache

走远了吗. submitted on 2019-12-02 13:14:30
This time could someone please reply: I am struggling with running my code using the distributed cache. I already have the files on HDFS, but when I run this code:

    import java.awt.image.BufferedImage;
    import java.awt.image.DataBufferByte;
    import java.awt.image.Raster;
    import java.io.BufferedReader;
    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.FileNotFoundException;
    import java.io.FileReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.URISyntaxException;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import …
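
The import list already hints at the usual culprit behind this FileNotFoundException: java.io.FileReader only opens local files, so pointing it at an HDFS path fails inside the tasks. With the Hadoop 2.x API the file is registered in the driver and then read from the task's local working directory. A hedged sketch, with made-up file names and paths:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class Driver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "image-job");
            // The '#model' fragment makes Hadoop symlink the cached file as
            // "model" in every task's working directory.
            job.addCacheFile(new URI("hdfs:///user/me/model.bin#model"));
            // ... set mapper, input and output paths, then job.waitForCompletion(true)
        }
    }

Inside the mapper's setup() the cached copy is then opened as a plain local file, for example new FileReader("model"), rather than through the HDFS URI.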

JMeter Distributed Testing using Java Code

此生再无相见时 submitted on 2019-12-02 11:59:58
I am able to run JMeter using Java code, but if I want to do the same as distributed testing, how do I add the remote engines in the Java code? Here is sample code to start a remote engine from Java code, documentation about remote testing with JMeter, and a sample showing how to code a local test. Source: https://stackoverflow.com/questions/36690957/jmeter-distributed-testing-using-java-code
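
Since the linked sample is not reproduced here, a hedged sketch of what starting a remote engine from Java looks like. The ClientJMeterEngine API has shifted between JMeter versions, so the class and method names below should be checked against the version in use; the host, port, and file paths are assumptions:

    import java.io.File;
    import org.apache.jmeter.engine.ClientJMeterEngine;
    import org.apache.jmeter.save.SaveService;
    import org.apache.jmeter.util.JMeterUtils;
    import org.apache.jorphan.collections.HashTree;

    public class RemoteRunner {
        public static void main(String[] args) throws Exception {
            JMeterUtils.loadJMeterProperties("/opt/jmeter/bin/jmeter.properties");
            JMeterUtils.initLocale();
            SaveService.loadProperties();

            HashTree testPlan = SaveService.loadTree(new File("/plans/test.jmx"));

            // One engine per remote host running jmeter-server
            // (default RMI port 1099).
            ClientJMeterEngine remote = new ClientJMeterEngine("192.168.1.10:1099");
            remote.configure(testPlan);
            remote.runTest();
        }
    }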

What is the point of spawn(Node, Fun) in Erlang, if Node has to have the same module loadable as the client node?

帅比萌擦擦* submitted on 2019-12-01 19:59:59
Question: Why create the illusion that you are sending a Fun to a remote node to execute in a new process, if the client node has to have the same module with the Fun defined loadable as the server node anyway? Why not offer only spawn(Node, M, F, A), which makes it clear that you are sending the definition of a function call, not the Fun itself?

Answer 1: Let's consider two possible cases.

Functions referring to module functions:

    Fun = fun file:getcwd/0,
    erlang:spawn(Node, Fun).

In this case the Fun indeed should be loadable at the …

Create new core directories in Solr on the fly

こ雲淡風輕ζ submitted on 2019-12-01 15:37:50
I am using Solr 1.4.1 to build a distributed search engine, but I don't want to use only one index file; I want to create new core "index" directories on the fly in my Java code. I found the following REST API for creating new cores from an EXISTING core directory (http://wiki.apache.org/solr/CoreAdmin): http://localhost:8983/solr/admin/cores?action=CREATE&name=coreX&instanceDir=path_to_instance_directory&config=config_file_name.xml&schema=schem_file_name.xml&dataDir=data Is there a way to create a new core without an existing core directory? Does Solr have such a function, via REST or in the SolrJ …
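
On the core question itself: the CREATE action does not create the directory layout for you, so the usual approach is to create the instance directory (with its conf/ files) from Java first and then issue the CREATE call. A hedged SolrJ sketch, using a later SolrJ API than the 1.4.1 in the question, whose client classes differ; the core name and directories are illustrative:

    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.CoreAdminRequest;

    public class CreateCore {
        public static void main(String[] args) throws Exception {
            try (HttpSolrClient solr =
                     new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
                // Assumes /var/solr/coreX/conf already exists on the server,
                // e.g. copied from a template directory beforehand.
                CoreAdminRequest.Create create = new CoreAdminRequest.Create();
                create.setCoreName("coreX");
                create.setInstanceDir("/var/solr/coreX");
                create.process(solr);
            }
        }
    }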

Distributed TensorFlow with multiple GPUs

柔情痞子 submitted on 2019-12-01 14:53:14
It seems that tf.train.replica_device_setter doesn't allow specifying which GPU to work with. What I want to do is like below:

    with tf.device(tf.train.replica_device_setter(
            worker_device='/job:worker/task:%d/gpu:%d' % (deviceindex, gpuindex))):
        # <build-some-tf-graph>

If your parameters are not sharded, you could do it with a simplified version of replica_device_setter like below:

    def assign_to_device(worker=0, gpu=0, ps_device="/job:ps/task:0/cpu:0"):
        def _assign(op):
            # Pin variables to the parameter server, everything else to the
            # requested worker GPU.
            node_def = op if isinstance(op, tf.NodeDef) else op.node_def
            if node_def.op == "Variable":
                return ps_device
            else:
                return "/job:worker/task:%d/gpu:%d" % (worker, gpu)
        return _assign

Distributed System

坚强是说给别人听的谎言 submitted on 2019-12-01 12:23:21
I am looking to create a distributed framework in Java and need some help sorting out the implementation of a client/manager/worker situation as described in my pseudocode below.

    Manager
    BEGIN
        WHILE (true)
            RECEIVE message FROM client
            IF (worker_connections > 0) THEN
                FOR (i = 0; i < worker_connections; i++)
                    SEND message TO worker[i]
                FOR (i = 0; i < worker_connections; i++)
                    RECEIVE result[i] FROM worker[i]
                SEND merge(result[]) TO client
            ELSE
                SEND "No workers available" TO client
            END IF
        END WHILE
    END

    Client
    BEGIN
        RECEIVE message FROM user
        SEND message TO manager
        RECEIVE message FROM manager
    END

    Worker
    BEGIN …
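
For the manager's fan-out/fan-in loop, a minimal in-process Java sketch using an ExecutorService; in the real framework each submitted task would be a socket or RMI call to a worker node, and the message type and merge step here are placeholders:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class Manager {
        public interface Worker {
            String process(String message);
        }

        private final ExecutorService pool = Executors.newCachedThreadPool();

        public String handle(String message, List<Worker> workers) throws Exception {
            if (workers.isEmpty()) {
                return "No workers available";
            }
            List<Future<String>> futures = new ArrayList<>();
            for (Worker w : workers) {              // SEND message TO worker[i]
                futures.add(pool.submit(() -> w.process(message)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {      // RECEIVE result[i] FROM worker[i]
                results.add(f.get());
            }
            return String.join(";", results);       // merge(result[]) TO client
        }
    }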