replication

How to split read-only and read-write transactions with JPA and Hibernate

江枫思渺然 submitted on 2019-12-02 15:38:37
I have a quite heavy Java webapp that serves thousands of requests/sec, and it uses a master PostgreSQL db that replicates itself to one secondary (read-only) database using streaming (asynchronous) replication. So I route requests between the primary and the secondary (read-only) database based on the URL, to keep read-only calls from hitting the primary database, given that the replication lag is minimal. NOTE: I use one sessionFactory with a RoutingDataSource provided by Spring that looks up the db to use based on a key. I am interested in multitenancy, as I am using Hibernate 4.3.4, which supports it. I have two questions: I
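The routing the question describes is typically built on Spring's AbstractRoutingDataSource. A minimal sketch, assuming a thread-local key holder that something like a servlet filter or AOP advice sets per request (the holder and key names are hypothetical, not from the question):

    // Routes connections to the primary or the read-only replica based on
    // a per-thread key set before Hibernate acquires a connection.
    import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

    public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

        // Hypothetical holder; the question does not show how the key is set.
        public static final ThreadLocal<String> DB_KEY =
                ThreadLocal.withInitial(() -> "primary");

        @Override
        protected Object determineCurrentLookupKey() {
            return DB_KEY.get(); // "primary" or "replica"
        }
    }

The two keys would then map to the primary and replica DataSource beans in the targetDataSources map of this bean's configuration.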

Replicating vector elements by index

妖精的绣舞 submitted on 2019-12-02 12:18:06
Question: I have an integer vector:

    a <- c(1,1,3,1,4)

where each element in a indicates how many times its index should be replicated in a new vector. So the resulting vector should be:

    b <- c(1,2,3,3,3,4,5,5,5,5)

What would be the most efficient way to do this?

Answer 1: For example, using rep:

    rep(seq_along(a), a)
    [1] 1 2 3 3 3 4 5 5 5 5

Another, less efficient option is to use inverse.rle:

    inverse.rle(list(lengths=a, values=seq_along(a)))
    [1] 1 2 3 3 3 4 5 5 5 5

Source: https://stackoverflow.com/questions/18993010

Repost: Notes on MySQL memory tables and master-slave replication

浪子不回头ぞ submitted on 2019-12-02 07:43:24
Preface: The day before yesterday I wrote about using Spring AOP to dynamically set the data source based on the JdbcTemplate method name; today I found a MySQL replication problem that was left unsolved: the replication of memory tables. After a restart, the slave database still has the table structure, but it can no longer synchronize the master database's data.

Some applications need to store temporary data, and an in-memory table may look like a good choice for that, but memory tables do not behave well under master-slave replication. The reason is simple: whether replication is STATEMENT-based or ROW-based, the changed data must be included in the binary log, and that requires the data on the master and the slave to be consistent. When the slave is restarted, the data in its memory tables is lost and replication breaks.

So what can we do?

1. Use InnoDB tables instead. InnoDB tables are very fast and can satisfy our performance requirements.

2. Ignore memory tables in replication. Unless it is really necessary, skip replicating memory tables by using the option replicate-ignore-table=db.memory_table (see the sketch below). Note: with STATEMENT-based replication, do not use INSERT ... SELECT to add data to a memory table. If you do, the table on the slave will be empty, and in some cases the memory table will not be replicated to the slave at all. Another workaround is to deploy the same scheduled job on both master and slave to refresh the table.

3. Restart the slave with caution. In the long run, I would not use memory tables as a solution.

Translated from:
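As a sketch, the option from point 2 lives in the slave's my.cnf (db.memory_table is the placeholder name used above):

    # my.cnf on the slave: do not replicate changes to one memory table
    [mysqld]
    replicate-ignore-table=db.memory_table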

Redis deployment configuration - master slave replication

旧巷老猫 submitted on 2019-12-02 07:34:36
Currently I have two servers on which I have deployed a Node.js/Express-based web services API. I am using Redis for caching JSON strings. What would be the best option for deploying this setup into production? I see it advised here to go with a dedicated Redis server. OK, I take that and use a dedicated server for running the Redis master. Can I use the existing app servers as slave nodes? Note: these app servers are running a Node/Express application. What other options do I have? You can. It all depends on the load that those other servers have; it is a problem of resource sharing. To be honest
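For what it's worth, a minimal sketch of the replica-side configuration, assuming the master runs on the default port and MASTER_IP stands in for the dedicated server's address:

    # redis.conf on an app server acting as a slave
    slaveof MASTER_IP 6379
    # keep replicas read-only so the app cannot accidentally write to them
    slave-read-only yes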

My subscriber database lost connection to the publisher and expired. Can my data be saved?

邮差的信 submitted on 2019-12-02 05:01:46
I have a publisher database A, and I have two subscriber databases B and C that subscribe to A. My application resides locally at sites B and C, and through replication, changes at B and/or C are replicated to each other. The problem is that since 31 January 2019, C has stopped subscribing to A, and the IT guys at site C didn't know about it (no alerts). The bigger problem is that during this time, people using the application at B have been entering data, which is replicated back to A. At the same time, people at site C have been adding data to database C, which was not replicating back. If I reinstate a

Local replica of RDS database

一个人想着一个人 submitted on 2019-12-02 00:57:00
Question: I've been doing some research for the past hour or so, and I've been hearing conflicting information regarding the replication of Amazon RDS databases. My database is pretty big: 15 tables with a total size of 4 GB. So, basically, is it possible for me to create a local replica of a remote RDS InnoDB database, or does Amazon not allow it?

Answer 1: You can create replicas of an RDS instance, but only as another RDS instance. You can't make a replica on an EC2 instance or a local machine.

Source: https://stackoverflow.com/questions
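To illustrate the answer: an RDS-to-RDS read replica can be created with the AWS CLI roughly as below (instance identifiers are hypothetical):

    # The replica is itself an RDS instance; EC2 or local targets
    # are not supported by this mechanism.
    aws rds create-db-instance-read-replica \
        --db-instance-identifier mydb-replica \
        --source-db-instance-identifier mydb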

Kafka reassignment of __consumer_offsets incorrect?

﹥>﹥吖頭↗ submitted on 2019-12-02 00:40:16
I am confused about how kafka-reassign-partitions works for the __consumer_offsets topic. I start with one ZooKeeper node and one Kafka broker, and create a test topic with replication=1, partition=1. Consuming and producing work fine, and I see the __consumer_offsets topic created. Now I add a second broker with offsets.topic.replication.factor=2 and run:

    kafka-reassign-partitions --zookeeper zookeeper1:2181 --topics-to-move-json-file topics-to-move.json --broker-list "101,102" --generate

The generated reassignment does not look right: it only shows one replica even though there are 2 live brokers. I was hoping to get
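For reference, a topics-to-move.json for this scenario would plausibly look like the following sketch (the question does not show the file's contents):

    {
      "version": 1,
      "topics": [
        { "topic": "__consumer_offsets" }
      ]
    }

Note that --generate preserves each topic's existing replication factor, so a topic created with one replica keeps one replica in the proposal; raising the factor requires a hand-written reassignment JSON with longer replica lists.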

Automatically resolve primary key merge conflict

限于喜欢 submitted on 2019-12-01 23:49:49
Could you please suggest a way I could automatically resolve primary key conflicts during a merge between Publisher and Subscriber? It seems SQL Server doesn't do it out of the box :(. The conflict viewer shows me this message:

    A row insert at '_publisher_server_' could not be propagated to '_subscriber_server_'. This failure can be caused by a constraint violation. Violation of PRIMARY KEY constraint 'PK_PartPlan_FD9D7F927172C0B5'. Cannot insert duplicate key in object '_table_name_'.

Thank you. This isn't an easy solution (since you've presumably already designed your database with auto
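One common approach, assuming the collisions come from identity-based keys (which the truncated answer appears to be heading toward), is merge replication's automatic identity range management, so each node hands out keys from its own non-overlapping range. A sketch with hypothetical publication, table, and range values:

    -- Assign Publisher and Subscriber separate identity ranges
    -- so they cannot generate colliding primary keys.
    EXEC sp_addmergearticle
        @publication = 'MyPublication',          -- hypothetical name
        @article = 'PartPlan',
        @source_object = 'PartPlan',
        @identityrangemanagementoption = 'auto',
        @pub_identity_range = 10000,
        @identity_range = 1000,
        @threshold = 80;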

How should CouchDB revisions be treated from a design perspective?

余生颓废 submitted on 2019-12-01 21:02:55
As near as I can tell, CouchDB revisions are not to be treated like revisions in the document-versioning sense of the word. From glancing at other posts, they seem to be regarded as transient data that exists until a coarse-grained compact operation is called. My question is: if I am interested in using CouchDB to maintain documents, as well as a version history of those documents, should I allow that to be handled natively by CouchDB revisions, or should I build a layer on top that will survive a compact operation? I am thinking the latter, simply because Couch does not replicate revisions of
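A common shape for such a layer, as a sketch (this is not anything CouchDB provides natively; IDs and field names are made up), is to write each version as its own document, so history survives compaction and replicates like any other data:

    { "_id": "doc123::v1", "doc_id": "doc123", "version": 1,
      "body": { "title": "first draft" } }
    { "_id": "doc123::v2", "doc_id": "doc123", "version": 2,
      "body": { "title": "second draft" } }

The latest version of a document can then be retrieved with a view keyed on [doc_id, version].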