replication

Postgres replication

Submitted by 本秂侑毒 on 2021-02-08 18:35:00
Question: Right now I have a database (about 2-3 GB) in PostgreSQL, which serves as data storage for a RoR/Python LAMP-like application. What kinds of tools are simple and robust enough for replicating the main database to a second machine? I have looked through some packages (Slony-I, etc.), but it would be great to hear real-life stories as well. Right now I'm not concerned with load balancing and the like. I am thinking about using a simple Write-Ahead-Log strategy for now. Answer 1: If you are not
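The Write-Ahead-Log strategy the asker mentions is essentially what PostgreSQL's built-in streaming replication does. A minimal sketch of a primary/standby setup follows; the hostnames, data directory, and the `repl` role are placeholders, and exact settings vary by PostgreSQL version:

```shell
# On the primary: allow a replication role and enough WAL senders
# (postgresql.conf)
#   wal_level = replica
#   max_wal_senders = 3
# (pg_hba.conf)
#   host  replication  repl  standby.example.com  md5

# On the standby: clone the primary and keep streaming WAL from it.
# --write-recovery-conf makes the clone start up as a standby.
pg_basebackup -h primary.example.com -U repl \
    -D /var/lib/postgresql/data \
    --wal-method=stream --write-recovery-conf
```

This gives a read-only hot standby on the second machine with no trigger-based middleware such as Slony-I.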

How can I minimize the data in a SQL replication

Submitted by 落爺英雄遲暮 on 2021-02-07 17:14:57
Question: I want to replicate data from a boat offshore to an onshore site. The connection is sometimes via a satellite link and can be slow with high latency. Latency matters in our application: the people onshore should have the data as soon as possible. There is one table being replicated, consisting of an id, a datetime, and some binary data that may vary in length, usually < 50 bytes. An application offshore pushes data (hardware measurements) into the table constantly, and we want these
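With rows this small (an id, a datetime, < 50 bytes of binary), most of the savings over a slow satellite link come from packing rows into a compact batch rather than sending verbose per-row messages. A hypothetical sketch, not the asker's actual replication tool, using Python's standard `struct` and `zlib`:

```python
import struct
import zlib
from datetime import datetime, timezone

def pack_rows(rows):
    """Pack (id, datetime, blob) rows into one compressed payload.

    Wire format per row: 8-byte id, 8-byte unix timestamp in
    microseconds, 2-byte blob length, then the blob bytes.
    """
    out = bytearray()
    for row_id, ts, blob in rows:
        micros = int(ts.timestamp() * 1_000_000)
        out += struct.pack(">QQH", row_id, micros, len(blob))
        out += blob
    return zlib.compress(bytes(out))

def unpack_rows(payload):
    """Invert pack_rows: decompress and walk the fixed-size headers."""
    data = zlib.decompress(payload)
    rows, offset = [], 0
    while offset < len(data):
        row_id, micros, n = struct.unpack_from(">QQH", data, offset)
        offset += 18  # 8 + 8 + 2 header bytes
        rows.append((row_id,
                     datetime.fromtimestamp(micros / 1_000_000, tz=timezone.utc),
                     data[offset:offset + n]))
        offset += n
    return rows

rows = [(1, datetime(2021, 2, 7, tzinfo=timezone.utc), b"\x01\x02\x03")]
assert unpack_rows(pack_rows(rows)) == rows
```

Batching a few seconds of measurements per payload trades a little latency for far fewer round trips, which matters most on a high-latency link.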

Transactions between two replicating master mysql servers

Submitted by 我们两清 on 2021-02-07 09:16:00
Question: With a replicating MySQL master-to-master database using the InnoDB engine, if a transaction is initiated on database A, will that row lock on database B until the transaction has been committed? Answer 1: The master receiving the first transaction is completely separate from the second master; they communicate through a binary log. https://dev.mysql.com/doc/refman/5.7/en/replication-formats.html In the case of something requiring a transaction, the actual statements are not written to the
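The point in the answer, that each master takes its own InnoDB row locks and only ships committed changes to its peer via the binary log, can be illustrated with a toy model. This is plain Python, not MySQL; all names are invented for the sketch:

```python
class ToyMaster:
    """A toy 'master': its own row locks plus an outgoing binlog of commits."""

    def __init__(self):
        self.rows = {}
        self.row_locks = set()   # row locks are local to this master only
        self.binlog = []         # committed changes, shipped to the peer

    def begin_update(self, key, value):
        if key in self.row_locks:
            raise RuntimeError("row locked locally")
        self.row_locks.add(key)
        return (key, value)

    def commit(self, txn):
        key, value = txn
        self.rows[key] = value
        self.row_locks.discard(key)
        self.binlog.append((key, value))   # only now does the peer see it

    def apply_binlog(self, peer):
        for key, value in peer.binlog:
            self.rows[key] = value         # replaying commits, no peer locks

a, b = ToyMaster(), ToyMaster()
txn = a.begin_update("row1", "from A")   # row1 is now locked on A...
assert "row1" not in b.row_locks         # ...but B knows nothing about it
a.commit(txn)
b.apply_binlog(a)
assert b.rows["row1"] == "from A"
```

So the answer to the question is no: an uncommitted transaction on A holds locks only on A, which is exactly why write conflicts are possible in master-master setups.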

Postgresql: master-slave replication of 1 table

Submitted by 爱⌒轻易说出口 on 2021-01-27 19:14:22
Question: Help me choose a simple (lightweight) solution for master-slave replication of one table between two PostgreSQL databases. The table contains a large object. Answer 1: Here you'll find a very good overview of the replication tools for PostgreSQL. Please take a look, and hopefully you'll be able to pick one. Otherwise, if you need something really lightweight, you can do it yourself. You'll need a trigger and a couple of functions, plus the dblink module if you need almost immediate changes
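The do-it-yourself approach in the answer (a trigger queues changed keys, a job copies them to the replica) can be sketched with Python's built-in sqlite3 standing in for the two PostgreSQL databases and dblink. The table and trigger names are made up for the example:

```python
import sqlite3

src = sqlite3.connect(":memory:")  # stands in for the master database
dst = sqlite3.connect(":memory:")  # stands in for the slave database

src.executescript("""
    CREATE TABLE docs (id INTEGER PRIMARY KEY, body BLOB);
    CREATE TABLE docs_changed (id INTEGER PRIMARY KEY);
    -- the triggers queue every inserted/updated row id for replication
    CREATE TRIGGER docs_ins AFTER INSERT ON docs
        BEGIN INSERT OR REPLACE INTO docs_changed VALUES (NEW.id); END;
    CREATE TRIGGER docs_upd AFTER UPDATE ON docs
        BEGIN INSERT OR REPLACE INTO docs_changed VALUES (NEW.id); END;
""")
dst.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body BLOB)")

def sync_once():
    """Copy queued rows to the replica, then clear the queue."""
    rows = src.execute(
        "SELECT d.id, d.body FROM docs d JOIN docs_changed c ON c.id = d.id"
    ).fetchall()
    dst.executemany("INSERT OR REPLACE INTO docs VALUES (?, ?)", rows)
    src.execute("DELETE FROM docs_changed")
    src.commit()
    dst.commit()

src.execute("INSERT INTO docs VALUES (1, ?)", (b"large object bytes",))
sync_once()
assert dst.execute("SELECT body FROM docs WHERE id = 1").fetchone()[0] \
    == b"large object bytes"
```

In real PostgreSQL the trigger would be written in PL/pgSQL and `sync_once` would either run on a schedule or push rows through dblink from the trigger itself.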

Embedded couchDB

Submitted by 半世苍凉 on 2021-01-27 03:50:45
Question: CouchDB is great, and I like its p2p replication functionality, but it's a bit large (because we have to install Erlang) and slow when used in a desktop application. As I tested on an Intel Core Duo CPU: 12 seconds to load 10,000 docs; 10 seconds to insert 10,000 docs, plus another 20 seconds to update the view, so 30 seconds in total. Is there any NoSQL implementation that has the same p2p replication functionality, but with a very small footprint like SQLite, and good speed (1 second to load 10,000
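The p2p replication the asker wants to keep boils down to exchanging per-document revision numbers and keeping the newer revision on each side. A toy version of that idea in plain Python; real CouchDB uses revision trees and a deterministic conflict winner, so this highest-revision-wins sketch is a simplification:

```python
def replicate(local, remote):
    """One-way sync: pull any document whose revision is newer on the peer.

    Each store maps doc_id -> (rev, body). Documents missing locally are
    copied over; documents with a lower local revision are overwritten.
    """
    for doc_id, (rev, body) in remote.items():
        if doc_id not in local or local[doc_id][0] < rev:
            local[doc_id] = (rev, body)

node_a = {"doc1": (2, {"speed": 12.5}), "doc2": (1, {"speed": 7.0})}
node_b = {"doc1": (1, {"speed": 11.0}), "doc3": (1, {"speed": 3.3})}

replicate(node_a, node_b)   # pull b's changes into a
replicate(node_b, node_a)   # and vice versa, as CouchDB does bidirectionally

assert node_a == node_b                       # both nodes converge
assert node_a["doc1"] == (2, {"speed": 12.5}) # the newer revision won
```

Any embedded store that tracks a monotonic revision per document can support this kind of sync, which is why SQLite-sized alternatives to CouchDB are feasible in principle.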