replication

How to prefer reads on secondaries in MongoDB

三世轮回 submitted on 2019-12-04 07:41:06
When using MongoDB in a replica set configuration (1 arbiter, 1 primary, 2 slaves), how do I set a preference so that reads are performed against the secondaries, leaving the primary for writes only? I'm using MongoDB 2.0.4 with Morphia. I see that there is a slaveOk() method, but I'm not sure how it works. Morphia: http://code.google.com/p/morphia/ Details: my Mongo is set up with the following options: mongo.slaveOk(); mongo.setWriteConcern(WriteConcern.SAFE); I am attempting to use the following (this may be the answer, btw): Datastore ds = getDatastore(); Query<MyEntity> query = ds.find(MyEntity
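A minimal sketch of the two mechanisms mentioned above, assuming a Mongo 2.x Java driver recent enough to have read preferences (older drivers only have slaveOk(), and the Morphia version in the question exposes a per-query queryNonPrimary()). Host and class names are illustrative, not from the question:

```java
import com.mongodb.Mongo;
import com.mongodb.ReadPreference;

public class SecondaryReads {
    public static void main(String[] args) throws Exception {
        // Hypothetical replica-set seed host; adjust to your deployment.
        Mongo mongo = new Mongo("rs-seed-host");

        // Connection-wide: route reads to a secondary when one is available,
        // falling back to the primary otherwise. Writes always go to the primary.
        mongo.setReadPreference(ReadPreference.secondaryPreferred());

        // With the Morphia version in the question, a single query can instead
        // be marked eligible for secondary reads via query.queryNonPrimary().
    }
}
```

Note that secondary reads are eventually consistent: a read may not yet reflect a write that was just acknowledged by the primary.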

My subscriber database lost connection to the publisher and expired. Can my data be saved?

北慕城南 submitted on 2019-12-04 05:07:12
Question: I have a publisher database A and two subscriber databases B and C that subscribe to A. My application resides locally at sites B and C, and through replication, changes at B and/or C are replicated to each other. The problem is that since 31 January 2019 C has stopped subscribing to A, and the IT guys at site C didn't know about it (no alerts). The bigger problem is that during this time, people using the application at B have been entering data which is replicated back to A. At the same time,

Does SQL Azure geo-replication automatically fail over?

 ̄綄美尐妖づ submitted on 2019-12-04 05:06:19
We have a geo-replicated database in SQL Azure (Premium) and are wondering: if we are pointing to the South Central US database that is the master and it goes down, do we have to manually change the connection strings in our code (C# .Net / Entity Framework 6) to point to the new database in, say, North US? We are looking for a way to have a single connection string and have Azure do the work under the covers to point to the new database if the master ever goes down. Is that possible? Update on the method followed: so I read this, that we have to manually go into a web.config file on a production
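What the asker is describing later became a built-in feature: Azure SQL auto-failover groups. Both databases sit behind a stable listener DNS name (`<group-name>.database.windows.net`) that always resolves to the current primary, so a single connection string survives failover. A sketch of such a connection string, with a hypothetical group and database name:

```
Server=tcp:my-failover-group.database.windows.net,1433;
Database=MyAppDb;User ID=app_user;Password=<secret>;Encrypt=True;
```

There is also a read-only listener (`<group-name>.secondary.database.windows.net`) for routing read-only workloads to the geo-secondary.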

How should CouchDB revisions be treated from a design perspective?

北城余情 submitted on 2019-12-04 04:24:36
Question: As near as I can tell, CouchDB revisions are not to be treated like revisions in the document-versioning sense of the word. From glancing at other posts, they seem to be regarded as transient data that exists until a coarse-grained compact operation is called. My question is: if I am interested in using CouchDB to maintain documents, as well as a version history of those documents, should I allow that to be handled natively by CouchDB revisions, or should I build a layer on top that will survive
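Because `_rev` history is discarded on compaction (and is not replicated), the usual answer is to build that layer on top: make each version a first-class document. A sketch of one such layout (the field names here are illustrative, not a CouchDB convention):

```json
{
  "_id": "report-42::v7",
  "type": "version",
  "parent_id": "report-42",
  "version": 7,
  "body": { "title": "Quarterly report", "status": "draft" }
}
```

A view keyed on `[parent_id, version]` then yields the full history of any document, and these version documents survive compaction and replication like any other document.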

Ehcache / Hibernate and RMI replication with large number of entities

让人想犯罪 __ submitted on 2019-12-04 03:41:23
I'm currently investigating how to use the RMI distribution option in Ehcache. I've configured ehcache.xml properly and replication seems to work fine. However, I have two questions. First, it seems Ehcache/Hibernate creates one cache per entity. This is fine, but when replication is in place it creates one thread per cache to replicate. Is this the intended behaviour? As our domain is big, this creates about 300 threads, which seems really excessive. Second, another nasty consequence is that the heartbeat message seems to aggregate all of those cache names. From what I saw, the message should fit in 1500 bytes
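For reference, the per-cache replication thread comes from attaching an RMI replicator listener to each cache. A minimal ehcache.xml fragment for the setup described (multicast address, port, and cache name are illustrative):

```xml
<!-- Peer discovery for the whole CacheManager -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic,multicastGroupAddress=230.0.0.1,
                multicastGroupPort=4446,timeToLive=1"/>

<!-- Hibernate creates one cache region like this per entity; each
     replicated cache gets its own RMI replicator (and thread). -->
<cache name="com.example.MyEntity" maxElementsInMemory="10000" eternal="false">
  <cacheEventListenerFactory
      class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
      properties="replicateAsynchronously=true"/>
</cache>
```

With hundreds of entities, one common mitigation is to replicate only the caches that truly need it, rather than attaching the replicator via defaultCache to every region.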

Cassandra replication system - how it works

柔情痞子 submitted on 2019-12-03 20:40:23
Does Cassandra replicate only on writes (with the chosen consistency level)? Is there an auto-replicate option for absent nodes, if I want symmetric data on every node? If I plug a new node into the cluster, there is no auto-replication; how can I sync data from the other nodes to the new one? If I want multi-master-style replication (2 nodes) with a slave backup (1 node), as known from MySQL, what is the proper way to set up and manage that in Cassandra (3 nodes)? How about two nodes? Cassandra replicates on writes, yes, but it also uses Hinted Handoff, Read Repair and Anti-Entropy to reduce
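Two concrete pieces behind that answer: replication is configured per keyspace, and anti-entropy is run on demand with `nodetool repair`. A sketch for the 3-node "symmetric data everywhere" case (keyspace name is illustrative):

```
-- cqlsh: replication_factor 3 on a 3-node cluster means every node
-- holds a full copy of the data ("symmetric" replication)
CREATE KEYSPACE app
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

-- shell, after a node has been down or newly joined:
-- nodetool repair app   (anti-entropy sync of this keyspace)
```

A newly bootstrapped node streams its token ranges from existing replicas automatically; repair covers writes it missed while down longer than the hinted-handoff window.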

MS-SQL Server 2005: Initializing a merge subscription with alternate snapshot location

我的梦境 submitted on 2019-12-03 20:09:55
We started some overseas merge replication 1 year ago and everything has gone fine until now. My problem is that we now have so much data in our system that any crash on one of the subscribers' servers would be a disaster: reinitialising a subscription the standard way would take days (our connections are definitely slow, and already very, very expensive)! Among the ideas I have been following up are the following: make a copy of the original database, freeze it, send the files by plane to the subscriber, and initiate replication without a snapshot: this is something that was done traditionally with
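The "alternate snapshot location" in the title maps to the Merge Agent's `-AltSnapshotFolder` switch: generate the snapshot once at the publisher, ship the files physically (disk, plane), and point the subscriber's Merge Agent at the local copy so initialization never crosses the slow link. A sketch of the agent invocation; all server, database, and path names are placeholders:

```
REM Merge Agent run pointing at a locally copied snapshot (sketch)
replmerg.exe -Publisher PUBSRV -PublisherDB SalesDb -Publication SalesPub ^
             -Subscriber SUBSRV -SubscriberDB SalesDb -Distributor PUBSRV ^
             -AltSnapshotFolder "D:\snapshot_copy\SalesPub"
```

The restore-a-frozen-copy idea mentioned in the question corresponds to initializing the subscription with a sync type of 'none', i.e. telling replication the subscriber already has the schema and data.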

MySQL Binary Log Replication: Can it be set to ignore errors?

亡梦爱人 submitted on 2019-12-03 17:01:38
Question: I'm running a master-slave MySQL binary log replication system (phew!) that, for some data, is not in sync (meaning the master holds more data than the slave). But the slave stops very frequently on the slightest MySQL error; can this be disabled? (Perhaps a my.cnf setting for the replicating slave along the lines of ignore-replicating-errors, or something of the sort ;) ) This is what happens: every now and then, when the slave tries to replicate an item that does not exist, the slave just dies. A quick check at
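The setting the asker is guessing at does exist: `slave-skip-errors` in my.cnf tells the slave to keep going past named error codes, and a single failed event can be skipped by hand. A sketch, with example error codes (1032 "row not found", 1062 "duplicate entry" are the two this scenario typically produces):

```
# my.cnf on the slave: skip these replication error codes instead of stopping
[mysqld]
slave-skip-errors = 1032,1062

# Or skip one failed event interactively:
# mysql> STOP SLAVE; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;
```

Both approaches silently widen the drift between master and slave, so they are best paired with a periodic consistency check rather than used as a permanent fix.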

Which database has the best support for replication

放肆的年华 submitted on 2019-12-03 16:48:29
Question: I have a fairly good feel for what MySQL replication can do. I'm wondering what other databases support replication, and how they compare to MySQL and others. Some questions I would have are: Is replication built in, or an add-on/plugin? How does the replication work (at a high level)? MySQL provides statement-based replication (and row-based replication in 5.1); I'm interested in how other databases compare. What gets shipped over the wire? How do changes get applied to the replicas? Is it easy

Selective replication with CouchDB

自古美人都是妖i submitted on 2019-12-03 16:23:32
Question: I'm currently evaluating possible solutions to the following problem: a set of data entries must be synchronized between multiple clients, where each client may only view (or even know about the existence of) a subset of the data. Each client "owns" some of the elements, and the decision who else can read or modify those elements may only be made by the owner. To complicate this situation even more, each element (and each element revision) must have a unique identifier that is equal for all
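The CouchDB feature aimed at exactly this is filtered replication: a filter function in a design document decides, per document, whether it is replicated to a given target. A sketch (design doc name, field names, and hosts are all illustrative):

```
// _design/sync on the owner's database: pass only docs this client may see
{
  "_id": "_design/sync",
  "filters": {
    "by_reader": "function(doc, req) { return doc.readers && doc.readers.indexOf(req.query.client) !== -1; }"
  }
}

// Replication document posted to _replicator, one per client:
{
  "source": "http://owner-node:5984/shared",
  "target": "http://client-b:5984/shared",
  "filter": "sync/by_reader",
  "query_params": { "client": "client-b" }
}
```

One caveat worth knowing up front: filtered replication controls what leaves the source, but it does not by itself stop a client from writing documents back, so the owner-only-modify rule still needs a validate_doc_update function on the receiving side.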