replication

MongoDB replica set

断了今生、忘了曾经 Submitted on 2019-12-24 07:07:34
Question: I am trying to understand the concept of replica sets in MongoDB. Take a simple example of two MongoDB instances, A (primary) and B (secondary). If my client is happily querying A, I understand that writes get replicated to B, but what happens if server A becomes inaccessible? While in terms of Mongo replication I can see that B gets elected as the new primary, how does the client know to now channel its queries to B and not A? Is this all done internally by Mongo? I ask because my client's
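In practice the official drivers handle this: given a replica-set connection string, they monitor all listed members and transparently re-route operations to whichever member is currently primary. A toy, dependency-free Python simulation of that re-routing idea (the classes and member names are illustrative, not pymongo's API or real election logic):

```python
# Toy simulation of replica-set failover from the client's point of view.
# It only illustrates why the client keeps working when the primary
# changes: the client tracks all members, not a single server address.

class Member:
    def __init__(self, name):
        self.name = name
        self.up = True
        self.is_primary = False

class ReplicaSetClient:
    def __init__(self, members):
        self.members = [Member(m) for m in members]
        self.members[0].is_primary = True

    def _elect(self):
        # Minimal stand-in for an election: first healthy member wins.
        for m in self.members:
            m.is_primary = False
        for m in self.members:
            if m.up:
                m.is_primary = True
                return m
        raise RuntimeError("no healthy members")

    def primary(self):
        for m in self.members:
            if m.up and m.is_primary:
                return m
        return self._elect()  # old primary is gone; discover the new one

    def query(self):
        return "answered by %s" % self.primary().name

client = ReplicaSetClient(["A", "B"])
print(client.query())            # routed to A
client.members[0].up = False     # A becomes inaccessible
print(client.query())            # transparently routed to B
```

The real drivers do the equivalent continuously in the background, which is why application code usually needs no failover handling of its own.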

pymongo replication secondary readreference not work

走远了吗. Submitted on 2019-12-24 03:50:49
Question: We have MongoDB 2.6 and a 2-member replica set. We use the pymongo driver and connect to the replica set with the following URL (hosts separated by commas): mongodb://admin:admin@127.0.0.1:10011,127.0.0.1:10012,127.0.0.1:10013/db?replicaSet=replica with this Python code: from pymongo import MongoClient url = 'mongodb://admin:admin@127.0.0.1:10011,127.0.0.1:10012,127.0.0.1:10013/db?replicaSet=replica' db_name = 'db' db = MongoClient( url, readPreference='secondary', secondary_acceptable_latency_ms=1000, )[db_name] db.test.find_one() # more read
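A read preference of 'secondary' only makes a member eligible; the driver then filters eligible members by an acceptable-latency window before picking one at random. A rough, dependency-free sketch of that selection step (the member list, latencies, and function name are illustrative, not pymongo internals):

```python
import random

# Sketch of read-preference member selection: filter by role, then keep
# only members within `acceptable_latency_ms` of the fastest candidate.
def select_member(members, read_preference, acceptable_latency_ms=1000):
    if read_preference == "secondary":
        candidates = [m for m in members if m["role"] == "secondary"]
    elif read_preference == "primary":
        candidates = [m for m in members if m["role"] == "primary"]
    else:  # secondaryPreferred-style fallback
        candidates = ([m for m in members if m["role"] == "secondary"]
                      or [m for m in members if m["role"] == "primary"])
    if not candidates:
        raise RuntimeError("no member satisfies %r" % read_preference)
    fastest = min(m["latency_ms"] for m in candidates)
    window = [m for m in candidates
              if m["latency_ms"] <= fastest + acceptable_latency_ms]
    return random.choice(window)

members = [
    {"host": "127.0.0.1:10011", "role": "primary",   "latency_ms": 2},
    {"host": "127.0.0.1:10012", "role": "secondary", "latency_ms": 5},
    {"host": "127.0.0.1:10013", "role": "secondary", "latency_ms": 900},
]
print(select_member(members, "secondary")["host"])  # one of the secondaries
```

If reads still go to the primary with a setup like the question's, a malformed host list in the URI (e.g. a `:` where a `,` belongs) is one thing worth ruling out, since the driver then never learns about the secondaries.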

Raven DB Replication Setup Issue

柔情痞子 Submitted on 2019-12-24 03:07:34
Question: Can anyone help me set up RavenDB replication? I have tried many ways many times, but with no success. Here is the story: 1) I downloaded the Raven bundle and made a copy of it, then ran Raven.Server.exe from both folders. Both instances run successfully on individual ports. Then I created a document named "Raven/Replication/Destinations" with the content { "Destinations": [{"Url":"http://vishal-pc:8081"}], "Id": "Raven/Replication/Destinations" } But it's not working. Please some one

mongodb recovery removed records

微笑、不失礼 Submitted on 2019-12-24 01:24:25
Question: I have a two-member replica set. I accidentally removed all documents in a collection; I am not sure how I did this, but they are gone. Is it possible to get all the data back? Answer 1: Unless you have a backup (always recommended for exactly this type of thing), or one of the replicas is using slaveDelay, then I am afraid the removal of the records is final. You might have been able to force a shutdown in time to save the on-disk data if you had killed the process before the next fsync to disk (similarly

offline limited multi-master in Postgres

人盡茶涼 Submitted on 2019-12-23 20:33:31
Question: Site A will be generating a set of records. Nightly, they will back up their database and FTP it to Site B. Site B will not modify those records at all, but will add more records, and other tables will create FKs to Site A's records. So, essentially, I need to set up a system to take all the incremental changes from Site A's dump (mostly inserts and updates, but some deletes possible) and apply them at Site B. At this point we're using Postgres 8.3, but could upgrade if
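One way to think about the nightly job is as a diff-and-apply step: compare the new dump of Site A's rows against the copy already at Site B, then replay the inserts, updates, and deletes. A schematic Python version of that reconciliation (the row shapes and function names are invented for illustration; a real job would read the rows from Postgres rather than from dicts):

```python
# Toy reconciliation of Site A's nightly dump against Site B's copy.
# Rows are keyed by primary key; values stand in for the other columns.
def diff_dump(old_rows, new_rows):
    inserts = {k: v for k, v in new_rows.items() if k not in old_rows}
    updates = {k: v for k, v in new_rows.items()
               if k in old_rows and old_rows[k] != v}
    deletes = set(old_rows) - set(new_rows)
    return inserts, updates, deletes

def apply_changes(site_b_rows, inserts, updates, deletes):
    # In a real system, deletes must respect Site B's FKs to these rows;
    # here we simply drop them.
    for k in deletes:
        site_b_rows.pop(k, None)
    site_b_rows.update(inserts)
    site_b_rows.update(updates)
    return site_b_rows

old = {1: "alice", 2: "bob", 3: "carol"}
new = {1: "alice", 2: "bobby", 4: "dave"}   # update 2, delete 3, insert 4
ins, upd, dels = diff_dump(old, new)
print(apply_changes(dict(old), ins, upd, dels))
```

The awkward part in practice is exactly the one noted in the comment: deletes at Site A can collide with Site B's foreign keys, so that case needs an explicit policy (block, cascade, or tombstone).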

How do you track the time of replicated rows for Subscribers in SQL Server 2005?

落爺英雄遲暮 Submitted on 2019-12-23 16:35:46
Question: The basic problem is this: a subscriber has successfully replicated a row from the publisher using transactional replication. Now, how do we keep track of the time at which this row was last successfully replicated? A friend suggested the following solution, which he used with SQL Server 2000: 1) Add a datetime column. 2) Change the replication stored procedure to update the datetime column (!). Step #2 sets off all sorts of warning bells within me, so I'm asking if there are better

Replicate tables from different database of same mysql server

橙三吉。 Submitted on 2019-12-23 12:33:53
Question: I have one server with 2 databases, and I want to replicate several tables from one database to the other. The purpose is to share the same users table across projects. Since the other tables use InnoDB with foreign keys to the users table, I've chosen replication. For that I made these changes to my.cnf: master-user=root server-id = 2 replicate-rewrite-db = dou->jobs replicate-do-table = jobs.auth_user replicate-wild-do-table = jobs.geo_% replicate-do-table = jobs.user_profile replicate
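Laid out as an actual my.cnf fragment, the filtering rules from the question look like this (the database and table names are the question's own; note that the `replicate-rewrite-db` rule rewrites statements logged against `dou` so they apply to `jobs`, and the `-wild-` form is the one that accepts `%` wildcards):

```ini
# my.cnf fragment from the question: replicate selected tables,
# rewriting database "dou" to database "jobs" on the replica side.
master-user = root
server-id   = 2

replicate-rewrite-db    = dou->jobs
replicate-do-table      = jobs.auth_user
replicate-wild-do-table = jobs.geo_%
replicate-do-table      = jobs.user_profile
```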

Using pymongo's ReplicaSetConnection: sometimes getting “IndexError: no such item for Cursor”

旧巷老猫 Submitted on 2019-12-23 03:39:14
Question: I started using pymongo's (version 2.2.1) ReplicaSetConnection object instead of the pymongo.Connection object. Now, when I perform reads from the database, like: if cur.count() == 0: raise NoDocumentsFound(self.name, self.COLLECTION_NAME) elif cur.count() > 1: raise TooManyDocumentsFound(self.name, self.COLLECTION_NAME) cur.rewind() rec = cur[0] I sometimes receive an "IndexError: no such item for Cursor instance" on the final line. From all I can find out about this error, it should occur
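One likely culprit is that `count()` and `cur[0]` issue separate queries, which with a replica-set connection may be answered by different members whose data is not equally up to date. A defensive pattern is to materialize the cursor once and branch on the resulting list, so the check and the fetch cannot disagree. A sketch, with `find_docs` standing in for a `collection.find(...)` call (the exception and function names mirror the question's, not any pymongo API):

```python
class NoDocumentsFound(Exception):
    pass

class TooManyDocumentsFound(Exception):
    pass

def get_single_document(find_docs, name, collection_name):
    # One round trip: no separate count()/rewind()/cur[0] calls that
    # could each be served by a different replica set member.
    docs = list(find_docs())
    if not docs:
        raise NoDocumentsFound(name, collection_name)
    if len(docs) > 1:
        raise TooManyDocumentsFound(name, collection_name)
    return docs[0]

print(get_single_document(lambda: [{"_id": 1}], "db", "coll"))
```

With this shape, the IndexError cannot occur at all: either a document is returned or one of the two named exceptions is raised.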

Is two way sync between gerrit and github.com possible?

人走茶凉 Submitted on 2019-12-22 22:47:50
Question: For a project existing in a github.com private repository, I am setting up Gerrit code review. I am using Gerrit's replication plugin to keep the Gerrit repository in sync with github.com. But if someone commits (say commit-a) and pushes directly to github.com, commit-a is overwritten on github.com when Gerrit runs the replication process (because it replicates only what is in the Gerrit mirror). But I want to implement a two-way sync: something like, whenever a push is made to Gerrit,
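For reference, the one-way (Gerrit to GitHub) direction described in the question is configured in the replication plugin's `etc/replication.config`; a sketch with a placeholder repository URL (the reverse GitHub-to-Gerrit direction is not something this plugin provides, so it needs a separate mechanism such as a periodic fetch):

```ini
# $GERRIT_SITE/etc/replication.config -- sketch; the remote URL is a
# placeholder, and ${name} expands to the Gerrit project name.
[remote "github"]
    url = git@github.com:example-org/${name}.git
    push = +refs/heads/*:refs/heads/*
    push = +refs/tags/*:refs/tags/*
```

The `+` force-push refspecs are exactly why direct pushes to GitHub get overwritten: replication makes GitHub match Gerrit, not the other way around.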

SQL Server replication without deletes?

馋奶兔 Submitted on 2019-12-22 18:05:09
Question: Is there a way to replicate a SQL Server database but not push out deletes to the subscribers? Answer 1: Do this: drop the article. Create a new stored procedure in the corresponding database that mimics the system stored procedure (sp_del...), taking the same parameters but doing nothing. Add the article again, and under the article's properties set the delete stored procedure to the new one you created. Or you can select "Do not replicate Delete Statements". I