distributed-computing

Does std::hash give same result for same input for different compiled builds and different machines?

杀马特。学长 韩版系。学妹 submitted on 2020-07-20 08:02:35
Question: I have some random test parameters for which I need to calculate a hash, so I can detect whether I have already run with the same parameters. I might run the test using the same source recompiled at a different time, or run it on a different machine. Even so, I want to detect whether the same parameters were used for the run. Does std::hash give the same result for the same input across different compiled builds and different machines? e.g. std::hash<string>{}("TestcaseParamVal0.7Param0.4"); Will this always be a unique number?
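The standard gives no such guarantee: the result of std::hash is implementation-defined, so different standard libraries, compilers, or even builds of the same source may produce different values. A portable alternative is to fix the algorithm yourself; below is a minimal sketch using 64-bit FNV-1a (the constants are the published FNV offset basis and prime; `fnv1a64` is a hypothetical helper name, not part of any library):

```cpp
#include <cstdint>
#include <string>

// Portable FNV-1a 64-bit hash: produces the same value on every compiler
// and machine, unlike std::hash, whose result is implementation-defined.
uint64_t fnv1a64(const std::string& s) {
    uint64_t h = 14695981039346656037ULL;  // FNV offset basis
    for (unsigned char c : s) {
        h ^= c;                            // xor in the next byte
        h *= 1099511628211ULL;             // multiply by the FNV prime
    }
    return h;
}
```

Any fixed, well-specified hash (FNV, CRC, MurmurHash with a pinned seed, etc.) works for this purpose; the point is only that the algorithm must be chosen by you, not delegated to the standard library.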

how raft follower rejoin after network disconnected?

谁说我不能喝 submitted on 2020-07-08 19:46:17
Question: I have a problem with Raft. The paper "In Search of an Understandable Consensus Algorithm (Extended Version)" says: "To begin an election, a follower increments its current term and transitions to candidate state." (in section 5.2). It also says the receiver should "Reply false if args.term < currentTerm" in the AppendEntries RPC and the RequestVote RPC. So, consider this scene: there are 5 machines in the Raft system; machine 0 is the leader, machines 1 to 4 are followers, and the current term is 1. Suddenly,
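The question body is truncated, but the scenario it sets up turns on Raft's term rules: a disconnected follower keeps timing out and incrementing its term, and when it rejoins, its higher term forces the current leader to step down and triggers a new election (the election-restriction rule then stops the stale node from winning if its log is behind). A hypothetical sketch of just the term handling from the paper's RPC rules, not taken from any real implementation:

```cpp
#include <cstdint>

struct Node {
    uint64_t currentTerm = 1;
    bool isLeader = false;
};

// Term rules from the Raft paper: reject RPCs carrying a stale term, and
// adopt any newer term seen, stepping down to follower if necessary.
// Log-consistency checks are omitted from this sketch.
bool handleAppendEntries(Node& receiver, uint64_t argsTerm) {
    if (argsTerm < receiver.currentTerm) {
        return false;                      // "Reply false if args.term < currentTerm"
    }
    if (argsTerm > receiver.currentTerm) {
        receiver.currentTerm = argsTerm;   // catch up to the newer term
        receiver.isLeader = false;         // a leader seeing a higher term steps down
    }
    return true;
}
```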

How does Zookeeper manage node roles in other clusters?

你离开我真会死。 submitted on 2020-07-08 11:59:31
Question: My understanding is that Zookeeper is often used to solve the problem of "keeping track of which node plays a particular role" in a distributed system (e.g. the master node in a DB or in a MapReduce cluster, etc.). For simplicity, say we have a DB with one master and multiple replicas, and the current master node in the DB goes down. In this scenario, one would, in principle, make one of the replica nodes the new master node. At this point my understanding is: If we didn't have Zookeeper The
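The standard ZooKeeper recipe for this is leader election via ephemeral sequential znodes: every candidate creates a znode under an election path, the candidate owning the lowest sequence number acts as master, and when its session dies ZooKeeper deletes its znode, so the next-lowest takes over. A minimal in-memory sketch of that rule, assuming it is a model only and not the real ZooKeeper client API (`Election` and its method names are hypothetical):

```cpp
#include <map>
#include <string>

// In-memory model of ZooKeeper's leader-election recipe: each candidate
// creates an ephemeral sequential znode; the lowest sequence number is the
// current leader. When that session expires, its znode vanishes and the
// next-lowest candidate becomes leader automatically.
class Election {
    std::map<int, std::string> znodes_;  // sequence number -> candidate name
    int nextSeq_ = 0;
public:
    int join(const std::string& name) {          // create ephemeral seq znode
        znodes_[nextSeq_] = name;
        return nextSeq_++;
    }
    void leave(int seq) { znodes_.erase(seq); }  // session expiry deletes znode
    std::string leader() const {                 // lowest sequence wins
        return znodes_.empty() ? "" : znodes_.begin()->second;
    }
};
```

In the real recipe each candidate also watches the znode immediately preceding its own, so failover notifications go to exactly one node rather than the whole herd.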

What's the benefit of advanced master election algorithms over bully algorithm?

主宰稳场 submitted on 2020-05-15 04:18:28
Question: I read how current master election algorithms like Raft, Paxos, or Zab elect a master on a cluster, and I couldn't understand why they use sophisticated algorithms instead of the simple bully algorithm. I'm developing a cluster library and use UDP multicast for heartbeat messages. Each node joins a multicast address and also sends datagram packets periodically to that address. If the nodes find out there is a new node that sends packets to this multicast address, the node is simply added to the cluster and
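The bully algorithm's core rule really is this small, which is part of the question's point; the catch is what it does not check. A sketch of the selection rule (`bullyElect` is a hypothetical helper), with the caveat that, unlike Raft/Paxos/Zab, nothing here ensures the winner holds the most up-to-date committed state, or prevents two network partitions from each electing their own coordinator:

```cpp
#include <algorithm>
#include <vector>

// Bully algorithm's selection rule: among the node ids a process can still
// reach, the highest id becomes coordinator. Simple, but it ignores log
// completeness and offers no split-brain protection on its own.
int bullyElect(const std::vector<int>& reachableIds) {
    return *std::max_element(reachableIds.begin(), reachableIds.end());
}
```

Consensus-based elections are more complex precisely because they add quorum requirements (only a majority partition can elect) and, in Raft's case, an up-to-date-log restriction on who may win.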

Is Redis' set command an atomic operation?

早过忘川 submitted on 2020-05-12 11:47:07
Question: I'm trying to use Redis' SET command to implement the simplest possible distributed lock component, but I can't find any explicit statement about atomicity in the official documentation. Is Redis' SET key value [EX seconds] [PX milliseconds] [NX|XX] command an atomic operation?

Answer 1: Yes. The core is single-threaded, so nothing else will run until the SET has completed; that makes SET {key} {value} EX {expiry} NX ideal for simple locking.

Source: https://stackoverflow.com/questions/43259635/is-redis-set-command-an
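The locking pattern the answer describes can be modeled without a Redis server. A minimal in-memory sketch of the NX ("only set if the key does not exist") semantics, with expiry omitted for brevity (`MiniRedis` and its methods are hypothetical names, not a real client API):

```cpp
#include <string>
#include <unordered_map>

// Model of SET key value NX as used for locking: because Redis executes
// commands on a single thread, the existence check and the write happen as
// one atomic step. Here that atomicity is implicit in single-threaded use.
class MiniRedis {
    std::unordered_map<std::string, std::string> data_;
public:
    // SET key value NX: write only if the key does not exist; returns
    // whether the write happened (i.e. whether the lock was acquired).
    bool setNX(const std::string& key, const std::string& value) {
        return data_.emplace(key, value).second;
    }
    // DEL key: releases the lock.
    bool del(const std::string& key) { return data_.erase(key) > 0; }
};
```

In production the EX expiry matters too: it bounds how long a crashed client can hold the lock, and the value should identify the owner so only the holder deletes it.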

When do I use a consensus algorithm like Paxos vs. something like a vector clock?

被刻印的时光 ゝ submitted on 2020-04-08 19:03:17
Question: I've been reading a lot about different strategies to guarantee consistency between nodes in distributed systems, but I'm having a bit of trouble figuring out when to use which algorithm. With what kind of system would I use something like a vector clock? Which system is ideal for using something like Paxos? Are the two mutually exclusive?

Answer 1: There's a distributed system of 2 nodes that store data. The data is replicated to both nodes so that if one node dies, the data is not lost
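The mechanics behind the vector-clock side of this comparison fit in a few lines. Each process increments its own slot on a local event and merges clocks (element-wise max) when it receives a message; two clocks where neither dominates the other mark concurrent updates, which is the conflict signal eventually-consistent stores use, whereas Paxos orders operations up front so conflicts never arise. A minimal sketch under those definitions (function names are hypothetical):

```cpp
#include <algorithm>
#include <vector>

// Vector clock for N processes: slot i counts events observed at process i.
using VClock = std::vector<int>;

// Local event at process `pid`: bump our own slot.
void tick(VClock& c, int pid) { c[pid]++; }

// On message receipt: element-wise max of the two clocks.
VClock merge(const VClock& a, const VClock& b) {
    VClock out(a.size());
    for (size_t i = 0; i < a.size(); ++i) out[i] = std::max(a[i], b[i]);
    return out;
}

// True iff a happened-before b: a <= b in every slot, and a != b.
// If neither happenedBefore(a, b) nor happenedBefore(b, a), the two
// updates are concurrent, i.e. a potential conflict.
bool happenedBefore(const VClock& a, const VClock& b) {
    bool strictly = false;
    for (size_t i = 0; i < a.size(); ++i) {
        if (a[i] > b[i]) return false;
        if (a[i] < b[i]) strictly = true;
    }
    return strictly;
}
```

Roughly: vector clocks detect and expose conflicts after the fact (AP-leaning designs like Dynamo-style stores), while Paxos/Raft prevent them by agreeing on one order before applying writes (CP-leaning designs).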