paxos

Why is multi-paxos called multi-paxos?

别说谁变了你拦得住时间么 submitted on 2019-12-05 09:45:10
Why is multi-paxos called multi-paxos? I can't see what is "multi" about it. It is about running multiple rounds of the algorithm to agree on sequential requests from a stable leader with minimal messaging. Initially, with no recognised leader, you must run at least one round of basic Paxos, in which a candidate leader sends a prepare request (using the terminology of the paper Paxos Made Simple). Positive responses from a majority confirm it as leader. It then sends accept messages for that round, which terminates successfully if it gets a majority of accept acknowledgements. Rather than start again with prepare
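The "multi" part described above can be sketched in a few lines of Go. This is an illustrative sketch with invented names (`Leader`, `Propose`), not a real Paxos library: once a leader's ballot has been promised by a majority, it reuses that ballot and runs only an accept round per log slot, instead of repeating prepare each time.

```go
package main

import "fmt"

// Leader models a stable multi-Paxos leader whose ballot was already
// confirmed by a majority in one initial prepare round.
type Leader struct {
	ballot int // ballot number promised by a majority of acceptors
	next   int // next unused log slot
}

// Propose assigns the command to the next slot under the established ballot.
// In real multi-Paxos this would send Accept(ballot, slot, cmd) to all
// acceptors; here we just print what would be sent.
func (l *Leader) Propose(cmd string) (slot int) {
	slot = l.next
	l.next++
	fmt.Printf("Accept(ballot=%d, slot=%d, cmd=%q)\n", l.ballot, slot, cmd)
	return slot
}

func main() {
	l := &Leader{ballot: 7}
	l.Propose("set x=1") // slot 0
	l.Propose("set y=2") // slot 1: same ballot, no new prepare round
}
```

Each subsequent command costs one accept round rather than two full phases, which is the messaging saving the answer refers to.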

Differences between Raft and Paxos

為{幸葍}努か submitted on 2019-12-05 00:40:01
Raft

Overview: the Raft consensus algorithm guarantees that, under distributed conditions, all nodes execute the same sequence of commands and reach the same state. This class of problem reduces to the "replicated state machines" problem.

Raft's distinguishing feature: compared with Paxos, Raft's biggest selling point is understandability; anyone who has read the Paxos paper will appreciate what that means. Raft decomposes consensus into three fairly independent subproblems and gives a solution to each:
- Leader election: how Raft chooses a leader; this part is quite easy to follow.
- Log replication: how the Raft leader replicates its log to every node in the cluster.
- Safety: how Raft guarantees the "State Machine Safety Property".

References: the official resources (the paper, implementations in various languages, some study videos), a Chinese article organized around the Raft paper, a high-level Chinese slide deck, and a Chinese translation.

Paxos

Overview: the Paxos protocol is a communication protocol by which multiple nodes in a distributed system reach agreement (a decision) on some value (a proposal). Even with a minority of nodes offline, the remaining majority can still reach agreement.

Paxos's two phases: overall, Paxos fixes a decision through two phases:
- Phase 1: determine whose proposal number is highest; only the holder of the highest number is entitled to submit a proposal.
- Phase 2: the holder of the highest number submits its proposal.
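Raft's leader election rule, mentioned above as the easiest subproblem, can be sketched in Go. This is a simplified illustration (the function names are invented, and real Raft also checks log up-to-dateness before granting a vote): a candidate bumps its term, votes for itself, and wins once a majority of the cluster has granted its vote.

```go
package main

import "fmt"

// wonElection reports whether a candidate that collected votesGranted votes
// (including its own) holds a majority of a cluster of clusterSize nodes.
func wonElection(votesGranted, clusterSize int) bool {
	return votesGranted > clusterSize/2
}

func main() {
	const clusterSize = 5
	term := 1

	term++     // a candidate starts an election by incrementing its term
	votes := 1 // it always votes for itself first
	votes += 2 // suppose two of the other four nodes grant their votes

	fmt.Printf("term=%d votes=%d leader=%v\n",
		term, votes, wonElection(votes, clusterSize)) // 3 of 5 is a majority
}
```

With only two votes out of five the candidate would time out and retry in a higher term, which is how Raft resolves split elections.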

The multi-Paxos protocol

廉价感情. submitted on 2019-12-04 09:28:28
I don't know if anyone else had the same feeling: after reading the Paxos protocol and then the ZAB protocol, the two seem to have practically nothing to do with each other. If so, it is because you skipped the multi-Paxos protocol, which is what actually makes Paxos usable in production. First, the livelock. If there are n proposers all trying to get proposals through, this scenario is unavoidable: an acceptor first answers proposer A's prepare for version 1; just as A happily sends its accept, the acceptor says: sorry, I have since promised another proposer's version 3. A, left hanging, refuses to back down and raises its own version number even higher, and so it goes: everyone frantically keeps raising version numbers in a vicious cycle, a bit like this year's Double Eleven tower-building game on Taobao. Paxos used this way cannot go to production. Of course, Lamport would not design something that useless. The problem is actually easy to solve: let only one proposer propose, and have the others watch from the sidelines. That is the multi-Paxos algorithm. The next question is who gets to be that single proposer, that is, the leader. Multi-Paxos has no explicit leader-election step; it just slightly modifies basic Paxos, in these stages: first, a proposer runs prepare, and whoever gains support from more than half the acceptors considers itself the leader; second, at this stage there may be several self-appointed leaders, but there can be only one true leader
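The livelock described above can be simulated in a few lines. This is a toy sketch with invented names (`duel`, `promised`), not a protocol implementation: two proposers keep outbidding each other's prepare at the acceptor, so neither ever gets its accept through and the proposal numbers only grow.

```go
package main

import "fmt"

// duel simulates the dueling-proposers livelock: each time the acceptor
// promises one proposer's number, the other re-prepares with a higher one.
// It returns both proposers' numbers after the given number of rounds.
func duel(rounds int) (a, b int) {
	a, b = 1, 2
	promised := 0 // highest proposal number the acceptor has promised
	for i := 0; i < rounds; i++ {
		promised = a     // acceptor promises a's number...
		b = promised + 1 // ...so b re-prepares with a higher number
		promised = b     // acceptor now promises b's number...
		a = promised + 1 // ...so a bumps again, and round it goes
	}
	return a, b
}

func main() {
	a, b := duel(3)
	fmt.Println("after 3 rounds:", a, b) // after 3 rounds: 7 6
}
```

Neither proposer's accept ever lands under a still-promised number, which is exactly why a single distinguished proposer (the leader) is needed.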

Consistency in distributed architecture

与世无争的帅哥 submitted on 2019-12-04 08:41:41
Paxos: the Paxos algorithm is a message-passing consensus algorithm proposed by Leslie Lamport in 1990. Because it was hard to understand, it initially attracted little attention; Lamport republished the paper in TOCS in 1998, and even then Paxos was largely ignored, so in 2001 Lamport gave a more readable, narrative description of the algorithm. In 2006 Google published three papers, one of which described the Chubby lock service using Paxos as the consensus algorithm inside a Chubby cell, and Paxos's popularity took off from there. The biggest difference between Paxos-based replication and traditional primary/backup replication is this: Paxos only needs more than half of the replicas to be online and communicating normally to keep the service continuously available without losing data.

Basic-Paxos: the problem Basic-Paxos solves is how a distributed system reaches agreement on a single proposal. It does so with a two-phase protocol:
- Prepare phase: the Proposer picks a proposal number n and sends a prepare request to the Acceptors. On receiving a prepare message, if its number n is greater than that of every prepare the Acceptor has already answered, the Acceptor replies with the last proposal it accepted and promises not to answer any proposal numbered less than n.
- Accept phase: once a Proposer has received replies to its prepare from a majority of Acceptors, it enters the acceptance phase
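The two acceptor rules just quoted can be written down directly. This is a minimal single-acceptor sketch (type and field names are assumptions, not from any library); it omits networking and the proposer's majority counting, and keeps only the promise/accept logic.

```go
package main

import "fmt"

// Acceptor holds the state Basic-Paxos requires an acceptor to persist.
type Acceptor struct {
	promised  int    // highest proposal number promised via Prepare
	acceptedN int    // number of the last accepted proposal (0 = none)
	acceptedV string // value of the last accepted proposal
}

// Prepare promises n if it exceeds every number promised so far, returning
// the previously accepted proposal so the proposer can adopt its value.
func (a *Acceptor) Prepare(n int) (ok bool, prevN int, prevV string) {
	if n > a.promised {
		a.promised = n
		return true, a.acceptedN, a.acceptedV
	}
	return false, 0, ""
}

// Accept accepts (n, v) unless a higher-numbered Prepare has been promised.
func (a *Acceptor) Accept(n int, v string) bool {
	if n >= a.promised {
		a.promised, a.acceptedN, a.acceptedV = n, n, v
		return true
	}
	return false
}

func main() {
	acc := &Acceptor{}
	ok, _, _ := acc.Prepare(5)
	fmt.Println(ok, acc.Accept(5, "x=1")) // true true
	ok, _, _ = acc.Prepare(3)             // stale number: rejected
	fmt.Println(ok)                       // false
}
```

A proposer that gathers `Prepare` promises from a majority must propose the value of the highest-numbered previously accepted proposal it saw, which is what makes the phase-two choice safe.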

Differences between OT and CRDT

蓝咒 submitted on 2019-12-04 07:28:36
Question: Can someone explain to me, simply, the main differences between Operational Transformation (OT) and CRDTs? As far as I understand, both are algorithms that let data converge without conflicts across the nodes of a distributed system. In which use case would you use which algorithm? As far as I understand, OT is mostly used for text, while CRDTs are more general and can handle more advanced structures, right? Are CRDTs more powerful than OT? I ask this question because I am trying to see how to implement a
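To make the CRDT side of the comparison concrete, here is one of the simplest CRDTs, a G-Counter (grow-only counter), sketched in Go. This is an illustrative example, not a production CRDT library: replicas converge in any merge order because merge is an element-wise maximum, so no operational transformation against concurrent operations is needed.

```go
package main

import "fmt"

// GCounter maps each node ID to the number of increments that node performed.
type GCounter map[string]int

// Inc records one local increment on the given node's replica.
func (g GCounter) Inc(node string) { g[node]++ }

// Merge takes the per-node maximum. It is commutative, associative, and
// idempotent, which is what lets replicas converge without coordination.
func (g GCounter) Merge(other GCounter) {
	for node, n := range other {
		if n > g[node] {
			g[node] = n
		}
	}
}

// Value is the counter's total: the sum over all nodes' counts.
func (g GCounter) Value() int {
	total := 0
	for _, n := range g {
		total += n
	}
	return total
}

func main() {
	a, b := GCounter{}, GCounter{}
	a.Inc("A")
	a.Inc("A") // replica A counted twice
	b.Inc("B") // replica B counted once, concurrently
	a.Merge(b)
	b.Merge(a)
	fmt.Println(a.Value(), b.Value()) // both replicas read 3
}
```

OT, by contrast, keeps a single sequence (typically text) consistent by rewriting concurrent operations against each other, which is why it needs more machinery per data type.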

Programming language to choose for implementing distributed message passing algorithms

拥有回忆 submitted on 2019-12-03 14:21:52
Basically, I want to implement the following algorithms and analyze how a system built on them behaves under different conditions: gossip protocol, Multi-Paxos, consistent hashing. My interest here is in the algorithms themselves. I am looking for a programming language that lets me write these algorithms quickly and understand them deeply. Which language should I choose: Java, Scala, Erlang, or something else? Currently I know Java and C++. You could try implementing the protocols in Erlang. Process communication is very elegantly baked into the language and
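Of the three algorithms listed, consistent hashing is the most compact to prototype. Here is a hedged sketch in Go (node and key names are made up, FNV-1a stands in for a stronger hash, and virtual nodes are omitted for brevity): each key is owned by the first node clockwise from the key's position on the hash ring.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// h hashes a string to a position on the 32-bit ring.
func h(s string) uint32 {
	f := fnv.New32a()
	f.Write([]byte(s))
	return f.Sum32()
}

// owner returns the first node clockwise from the key's hash on the ring.
func owner(nodes []string, key string) string {
	type point struct {
		pos  uint32
		node string
	}
	ring := make([]point, len(nodes))
	for i, n := range nodes {
		ring[i] = point{h(n), n}
	}
	sort.Slice(ring, func(i, j int) bool { return ring[i].pos < ring[j].pos })
	k := h(key)
	for _, p := range ring {
		if p.pos >= k {
			return p.node
		}
	}
	return ring[0].node // wrap around past the top of the ring
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c"}
	fmt.Println(owner(nodes, "user:42"))
	// Removing a node only remaps the keys that node owned; all other
	// keys keep their owner, which is the whole point of the technique.
	fmt.Println(owner([]string{"node-a", "node-c"}, "user:42"))
}
```

Go is also a reasonable candidate for the question itself: goroutines and channels give Erlang-style message passing while staying close to the Java/C++ family the asker already knows.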

Distributed consistency models

自古美人都是妖i submitted on 2019-12-03 12:07:49
Consistency models
- Weak consistency / eventual consistency: DNS (Domain Name System); Gossip (Cassandra's inter-node protocol).
- Strong consistency: synchronous replication; Paxos; Raft (multi-Paxos); ZAB (multi-Paxos).

The problem strong consistency must solve: data cannot live on a single point (safety). The usual solution to fault tolerance in distributed systems is state machine replication, driven by a consensus algorithm; Paxos is, at heart, such a consensus algorithm. Whether the system as a whole ends up strongly consistent depends not only on reaching consensus but also on the client's behavior.

Suppose x is a command: Client --(x)--> Consensus Module; x is stored in that server's own log; Consensus Module --(x)--> other servers; each of them records the command in its own log.

A strong-consistency approach: primary/backup synchronous replication. The Master accepts the write request, the Master replicates the log to the slaves, and the Master waits until all slaves return. Possible problem: if one node fails, the Master blocks,
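The blocking problem in the last note, and why quorum-based protocols such as Paxos and Raft avoid it, can be sketched with one invented helper function: waiting for all replicas means one dead node stalls every write, while waiting for a majority keeps the system live as long as only a minority is down.

```go
package main

import "fmt"

// committed reports whether a write with the given number of replica acks
// may be considered durable, under two different commit rules.
func committed(acks, clusterSize int, waitForAll bool) bool {
	if waitForAll {
		return acks == clusterSize // classic synchronous replication
	}
	return acks > clusterSize/2 // quorum commit, as in Paxos/Raft
}

func main() {
	// A 5-node cluster with one node down: at most 4 acks ever arrive.
	fmt.Println(committed(4, 5, true))  // false: the write blocks forever
	fmt.Println(committed(4, 5, false)) // true: a quorum commit succeeds
}
```

The quorum rule is also what the passage on Paxos availability means by "more than half of the replicas online" being sufficient.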

Cannot use as type in assignment in go

Anonymous (unverified) submitted on 2019-12-03 08:44:33
Question: when I compile my code, I get the following error message, and I am not sure why it happens. Can someone help me point out why? Thank you in advance.

cannot use px.InitializePaxosInstance(val) (type PaxosInstance) as type *PaxosInstance in assignment

type Paxos struct {
    instance map[int]*PaxosInstance
}

type PaxosInstance struct {
    value   interface{}
    decided bool
}

func (px *Paxos) InitializePaxosInstance(val interface{}) PaxosInstance {
    return PaxosInstance{decided: false, value: val}
}

func (px *Paxos) PartAProcess(seq int, val interface{}) error {
    px
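The error arises because the map stores `*PaxosInstance` (a pointer) while `InitializePaxosInstance` returns a `PaxosInstance` value. One way to fix it, sketched below, is to make the constructor return a pointer; taking the address of a local value (`inst := ...; px.instance[seq] = &inst`) would work equally well.

```go
package main

// Same types as in the question, with the constructor changed to return
// *PaxosInstance so the assignment into the map type-checks.
type Paxos struct {
	instance map[int]*PaxosInstance
}

type PaxosInstance struct {
	value   interface{}
	decided bool
}

// InitializePaxosInstance now returns a pointer, matching the map's
// value type map[int]*PaxosInstance.
func (px *Paxos) InitializePaxosInstance(val interface{}) *PaxosInstance {
	return &PaxosInstance{decided: false, value: val}
}

func (px *Paxos) PartAProcess(seq int, val interface{}) error {
	px.instance[seq] = px.InitializePaxosInstance(val) // types now match
	return nil
}

func main() {
	px := &Paxos{instance: make(map[int]*PaxosInstance)}
	if err := px.PartAProcess(1, "v"); err != nil {
		panic(err)
	}
	println(px.instance[1].decided) // false until the instance is decided
}
```

Note that the map must be initialized with `make` before use; assigning into a nil map panics at runtime, which would be the next trap after the compile error.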

Leader election for paxos-based replicated key value store

雨燕双飞 submitted on 2019-12-03 08:29:48
I am going to implement a key-value store with multi-Paxos. I would have several nodes, one of which is the primary node. This primary node receives update requests and replicates values to the slave nodes. My question is how the primary node (or leader) is selected. Can I still use the Paxos algorithm? If so, do you think it is necessary to abstract the Paxos implementation into a single unit that can be used not only by the replication unit but also by the leader-election unit? What if I use the node with the smallest id as the leader? How can I implement the master lease? Thanks for any answers. Before I
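One pragmatic shape for the smallest-id idea in the question, sketched with invented names (this is one possible design, not the standard answer): pick the lowest-id live node as leader, and have it hold a lease it must renew; followers only start a new election after the lease expires, which prevents two leaders from acting at once as long as clocks are roughly synchronized.

```go
package main

import "fmt"

// electLowestID picks the leader among the ids currently believed alive.
// The caller is expected to re-run this only after the previous leader's
// lease has expired, so a slow leader is not usurped mid-lease.
func electLowestID(liveIDs []int) int {
	leader := liveIDs[0]
	for _, id := range liveIDs[1:] {
		if id < leader {
			leader = id
		}
	}
	return leader
}

func main() {
	fmt.Println(electLowestID([]int{3, 1, 2})) // node 1 leads
	// If node 1 fails and its lease expires, the survivors re-elect.
	fmt.Println(electLowestID([]int{3, 2})) // node 2 takes over
}
```

The replication path still needs full Paxos for safety; the election and lease only serve liveness and performance, so a briefly wrong leader cannot corrupt agreed values.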

When to use Paxos (real practical use cases)?

别来无恙 submitted on 2019-12-03 03:59:11
Question: Could someone give me a list of real use cases of Paxos, that is, real problems that require consensus as part of a bigger problem? Is the following a use case of Paxos? Suppose two clients are playing poker against each other on a poker server, and the poker server is replicated. My understanding of Paxos is that it could be used to maintain consistency of the in-memory data structures that represent the current hand of poker, that is, to ensure that all replicas have exactly the same in-memory