replication

Why MySQL INSERT … ON DUPLICATE KEY UPDATE can break RBR replication in a master/master configuration

泄露秘密 submitted on 2019-12-05 16:42:39
Here is the problem: two MySQL 5.5 servers, row-based replication, master-master, writes on both servers (both active), with the auto-increment trick (one server takes odd ids, the other even). I have a table like byUserDailyStatistics: id (PK + AUTO_INCREMENT), date, idUser, metric1, metric2, with UNIQUE(idUser, date). All requests are: INSERT INTO byUserDailyStatistics (date, idUser, metric1, metric2) VALUES (:date, :user, 1, 1) ON DUPLICATE KEY UPDATE metric1 = metric1 + 1, metric2 = metric2 + 1. And sometimes the replication breaks with a message like: could not execute Write_rows event on table stats.byUserDailyStatistics;
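A likely culprit (a sketch of the usual explanation, not stated in the question): under master-master with the odd/even auto-increment trick, the same (idUser, date) pair inserted concurrently on both masters gets a different surrogate id on each side, so the replicated row events no longer match. One common workaround is to drop the surrogate key and make the natural key the primary key, so both masters converge on the same row identity:

```sql
-- Sketch: with the natural key as PRIMARY KEY, the same logical row has
-- the same identity on both masters, so ON DUPLICATE KEY UPDATE resolves
-- to the same row everywhere and replicated Write_rows events no longer
-- collide over divergent AUTO_INCREMENT ids.
CREATE TABLE byUserDailyStatistics (
  date    DATE NOT NULL,
  idUser  INT  NOT NULL,
  metric1 INT  NOT NULL DEFAULT 0,
  metric2 INT  NOT NULL DEFAULT 0,
  PRIMARY KEY (idUser, date)
);
```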

MySQL Replication & Triggers

烈酒焚心 submitted on 2019-12-05 15:23:06
I have stumbled upon an interesting MySQL error message that I do not really know how to interpret. The setup: there are two tables, A and B. When data is written or updated in table A, a trigger writes data to table B. Operations happen on a master database, and data is replicated to a slave server. Now, whenever I update data in table A, it is updated and the corresponding log message is written to table B. MySQL, however, emits the following error message: Note: #1592 Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT.
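The note itself points at the fix: with BINLOG_FORMAT = STATEMENT, any statement MySQL considers non-deterministic gets flagged as unsafe. A sketch of checking and switching the format (ROW or MIXED silences this class of warning; assumes sufficient privileges):

```sql
-- Sketch: see which binary log format is active, then switch to ROW so
-- the actual row changes (rather than the statement text) are logged.
SHOW VARIABLES LIKE 'binlog_format';
SET GLOBAL binlog_format = 'ROW';  -- make permanent in my.cnf: binlog_format=ROW
```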

Solution for optimistic object replication between Java server and browser clients?

半世苍凉 submitted on 2019-12-05 15:16:58
I'm building a system where multiple users need to create, view and modify a set of objects concurrently. The system is planned to run on a Java server and modern browser clients (I get to pick which ones). It needs to be robust in the face of network and server outages: the user interface must not block on modifications, and modifications need to be stored locally and published when connectivity returns. Under normal operation, changes should replicate with sub-second latency. Network latency, bandwidth and CPU resources are unlikely to be big issues; scale is on the order of tens to hundreds of

“[conn557392] killcursors: found 0 of 1” in Mongodb replication primary log

谁都会走 submitted on 2019-12-05 13:46:46
I am running a production MongoDB replica set, currently on version 2.6. Today I found that the primary mongod instance keeps writing this log line: [conn557392] killcursors: found 0 of 1. I checked db.serverStatus().metrics.cursor, and there is indeed a great number of timed-out cursors, as mentioned in this discussion. My questions: Because I set all my read logic to "secondary preferred", the primary is supposed to be write-only. Why does it need to kill cursors, when cursors are supposed to exist only for reads? And why are the application services unaffected, even though there are more than half a million
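For reference, the counters the question mentions can be inspected directly; a sketch in the mongo shell (field names per the 2.6 serverStatus output):

```javascript
// Sketch (mongo shell): cursor metrics on the primary. A growing
// timedOut count alongside "killcursors: found 0 of 1" log lines usually
// means the server already timed a cursor out before the client's
// killCursors request for it arrived.
var c = db.serverStatus().metrics.cursor;
printjson({ open: c.open, timedOut: c.timedOut });
```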

Does mySQL replication have immediate data consistency?

最后都变了- submitted on 2019-12-05 13:46:41
Question: I am considering a NoSQL solution for a current project, but I'm hesitant about the 'eventual consistency' clause in many of these databases. Is eventual consistency different from dealing with a MySQL database where replication lags? One solution I have used in the past with lagging replication is to read from the master when immediate data consistency is needed. However, I am confused as to why relational databases claim to have strong data consistency. I guess I should use transactions
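The read-from-master pattern the asker describes can be made concrete; a sketch assuming standard MySQL asynchronous replication (the accounts table is hypothetical):

```sql
-- Sketch: on a replica, Seconds_Behind_Master in SHOW SLAVE STATUS is
-- the lag window during which replica reads may return stale data --
-- MySQL's own flavor of eventual consistency.
SHOW SLAVE STATUS\G

-- On the master, a transaction gives read-your-writes consistency:
START TRANSACTION;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
SELECT balance FROM accounts WHERE id = 1;  -- sees the update immediately
COMMIT;
```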

Django 1.8 Migration with Postgres BDR 9.4.1

微笑、不失礼 submitted on 2019-12-05 13:13:35
I am trying to run Django migrations on a Postgres database with BDR. python manage.py makemigrations works fine, but running python manage.py migrate results in the following error: ALTER TABLE … ALTER COLUMN TYPE … may only affect UNLOGGED or TEMPORARY tables when BDR is active; auth_permission is a regular table. The offending module is django/django/contrib/auth/migrations/0002_alter_permission_name_max_length.py. I am not finding anything on how to make tables UNLOGGED using Django, especially since auth_permission is a Django table (not created by me). I am also not sure if UNLOGGED tables
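One pragmatic workaround (a sketch, assuming the column already has an acceptable length or you apply the DDL out of band on each node): tell Django the migration is applied without executing its DDL.

```shell
# Sketch: mark the offending auth migration as applied without running
# its ALTER TABLE, sidestepping BDR's DDL restriction. Only safe if the
# schema change is a no-op for you or was applied manually on each node.
python manage.py migrate auth 0002_alter_permission_name_max_length --fake
```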

Cassandra - Select without replication

余生长醉 submitted on 2019-12-05 11:55:46
Let's say I've created a keyspace and table: CREATE KEYSPACE IF NOT EXISTS keyspace_rep_0 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 0}; CREATE TABLE IF NOT EXISTS some_table ( some_key ascii, some_data ascii, PRIMARY KEY (some_key) ); I don't want any replica of this data. I can insert into this table with consistency level ANY, but I couldn't select any data from it. I got the following errors when querying with consistency levels ANY and ONE, respectively: message="ANY ConsistencyLevel is only supported for writes" and message="Cannot achieve consistency level
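The errors follow directly from replication_factor 0: no node owns the data, so a read at ONE has no replica to contact (and ANY is write-only by design). A sketch of the minimal fix; note that a factor of 1 means exactly one copy, i.e. still no redundancy:

```sql
-- Sketch (CQL): keep a single copy of each row (no extra replicas)
-- while still giving reads at ConsistencyLevel ONE a node to query.
ALTER KEYSPACE keyspace_rep_0
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
```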

MongoDB Replica Set: Disk size difference in Primary and Secondary Nodes

烂漫一生 submitted on 2019-12-05 07:35:20
I just did the MongoDB replica set configuration and everything looks good: all data moved to the secondary nodes properly. But when I looked at the data directory, I can see the primary has ~140G of data while the secondary has only ~110G. Did anyone come across this kind of issue while setting up a replica set? Is this normal behavior? When you do an initial sync from scratch on a secondary, it writes all the data fresh. This removes padding, empty space (deleted data), etc. As a result, in that respect it is similar to running a repair. If you ran a repair on the primary (blocking
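The repair comparison in the answer suggests the usual space-reclaiming options; a sketch for MongoDB 2.6 (someCollection is a placeholder, and compact blocks the database it runs against):

```javascript
// Sketch (mongo shell, 2.6): rewrite a collection's data files to drop
// padding and space held by deleted documents.
db.runCommand({ compact: "someCollection" });
// Or: step the primary down and resync it from scratch -- the initial
// sync writes data compactly, which is why secondaries end up smaller.
rs.stepDown();
```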

MS-SQL Server 2005: Initializing a merge subscription with alternate snapshot location

▼魔方 西西 submitted on 2019-12-05 05:14:29
Question: We started some overseas merge replication one year ago and everything has gone fine until now. My problem is that we now have so much data in our system that a crash on one of the subscribers' servers would be a disaster: reinitialising a subscription the standard way would take days (our connections are definitely slow, and already very, very expensive)! Among the ideas I have been following up are the following: make a copy of the original database, freeze it, send the files by plane to the

MySQL: Writing to slave node

放肆的年华 submitted on 2019-12-05 05:00:10
Let's say I have a database of cars. I have Makes and Models (FK to Makes), and I plan on having users track their cars, so each Car has a FK to Model. Now, I have a lot of users, and I want to split up my database to distribute load. The Makes and Models tables don't change much, but they need to be shared across shards. My thought is to use MySQL replication from a master DB of Makes and Models to each slave database. My question is: can I safely write to the slave databases, assuming I don't write to those tables on the master? And while on the subject, is there any way to guarantee one slave
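One way to make the "writes on the slave are safe" assumption explicit (a sketch; the carsdb schema name is hypothetical): filter replication so only the shared lookup tables flow from the master, leaving every other table slave-local.

```ini
# Sketch (my.cnf on each slave): replicate only the shared lookup
# tables. The master then never sends events for any other table, so
# writing user/Car data locally on the slave cannot conflict with
# replication.
[mysqld]
replicate-do-table = carsdb.Makes
replicate-do-table = carsdb.Models
```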