replication

Is Guid the best identity datatype for Databases?

扶醉桌前 submitted on 2019-11-28 20:36:49
Question: It is connected to BI and merging of data from different data sources, and would make that process smoother. And is there an optimal migration strategy from a database without Guids to a version with Guids, without information loss? Answer 1: Edited after reading Frans Bouma's answer, since my answer has been accepted and therefore moved to the top. Thanks, Frans. GUIDs do make a good unique value; however, due to their complex nature they're not really human-readable, which can make support…
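The trade-off the answer hints at (excellent uniqueness, poor readability) is easy to see by generating a few GUIDs; a minimal Python sketch, using the standard `uuid` module:

```python
import uuid

# Random (version 4) GUIDs: effectively collision-free identifiers,
# but opaque to humans and non-sequential, which can also fragment
# clustered indexes that sort on the key.
ids = [uuid.uuid4() for _ in range(3)]
for i in ids:
    print(i)  # e.g. 9f1c3e2a-...: hard to read aloud or compare by eye

# All three are distinct, and each renders as a 36-character string
# (32 hex digits plus 4 hyphens) versus a short integer key.
assert len(set(ids)) == 3
assert all(len(str(i)) == 36 for i in ids)
```

The same 36-character cost applies to every foreign key that references the table, which is part of why the storage and support overhead gets weighed against the merge-friendliness.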

SQL Server 2008 replication failing with: process could not execute 'sp_replcmds'

我们两清 submitted on 2019-11-28 18:41:05
I have an issue with SQL replication that I am having trouble fixing. What I am doing is restoring two DBs from a production backup and then setting up replication between them. The replication seems to be configured without any errors, but when I look at the status I see error messages like this: Error messages: The process could not execute 'sp_replcmds' on 'MYSERVER1'. Get help: http://help/MSSQL_REPL20011 Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission. (Source: MSSQLServer,…

MySQL: Very slow update/insert/delete queries hanging on “query end” step

烈酒焚心 submitted on 2019-11-28 16:53:13
I have a large, heavily loaded MySQL database which performs quite fast at times but sometimes gets terribly slow. All tables are InnoDB, the server has 32GB of RAM, and the database size is about 40GB. The top 20 queries in my slow_query_log are update, insert, and delete queries, and I cannot understand why they are so slow (up to 120 seconds sometimes!). Here is the most frequent query: UPDATE comment_fallows SET comment_cnt_new = 0 WHERE user_id = 1; Profiling results: mysql> set profiling = 1; Query OK, 0 rows affected (0.00 sec) mysql> update comment_fallows set comment_cnt_new = 0 where user_id =…

How to configure a replica set with MongoDB

梦想的初衷 submitted on 2019-11-28 10:22:03
I've got this problem that I can't solve, partly because I can't explain it with the right terms. I'm new to this, so sorry for this clumsy question. Below you can see an overview of my goal. I want to configure a replica set in MongoDB; to do that I tried this: use local db.dropDatabase() config = { _id: "rs0", members: [ {_id: 0, host: 'localhost:27017'} ] } rs.initiate(config) I hope everything is correct, but it shows the following error message: { "errmsg" : "server is not running with --replSet", "ok" : 0 } Did I do anything wrong here? Any ideas? yaoxing: You can actually follow…

replicating elements in list

痴心易碎 submitted on 2019-11-28 10:05:13
Question: Say I have this: b = 3 l = [1, 2] I want to modify l so that each element appears as many times as b, so that: l = [1, 1, 1, 2, 2, 2] I used this: for x in l: for m in range(b): l.append(x) But it resulted in an infinite loop. Any help would be appreciated. I would prefer you guys to give ideas rather than give me the code. Thanks. Answer 1: My 2 cents: another way to achieve this would be to take advantage of the fact that you can multiply lists in Python. >>> l = [1, 2, 3] >>> [1, 2, 3] * 3 [1,…
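For context on both the failure and the fix: appending to a list while iterating over it means the loop never reaches the end, hence the infinite loop; and note that the list-multiplication trick from the answer repeats the whole list, not each element, so the asked-for output needs per-element repetition:

```python
b = 3
l = [1, 2]

# Per-element repetition: each x is emitted b times before
# moving to the next element.
expanded = [x for x in l for _ in range(b)]
print(expanded)  # [1, 1, 1, 2, 2, 2]

# List multiplication repeats the whole sequence instead:
print(l * b)     # [1, 2, 1, 2, 1, 2]
```

Sorting `l * b` would also give the grouped order, but only because this example's elements happen to be sorted; the comprehension preserves the original order in general.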

Full complete MySQL database replication? Ideas? What do people do?

北城以北 submitted on 2019-11-27 21:16:37
Currently I have two Linux servers running MySQL, one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server) and another a couple of miles away on a 3 Mbit/s upload pipe (mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them being that, under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens on a meaningful block of data once every…

CouchDB - Filtered Replication - Can the speed be improved?

我与影子孤独终老i submitted on 2019-11-27 18:06:15
Question: I have a single database (300MB and 42,924 documents) consisting of about 20 different kinds of documents from about 200 users. The documents range in size from a few bytes to many kilobytes (150KB or so). When the server is unloaded, the following replication filter function takes about 2.5 minutes to complete. When the server is loaded, it takes >10 minutes. Can anyone comment on whether these times are expected, and if not, suggest how I might optimize things in order to get better…

How can I slow down a MySQL dump as to not affect current load on the server?

不羁岁月 submitted on 2019-11-27 17:06:26
While doing a MySQL dump is easy enough, I have a live dedicated MySQL server that I want to set up replication on. To do this, I need dumps of the databases to import to my replication slave. The issue is that when I do the dumps, MySQL goes at it full force and ties up resources from the sites connecting to it. I am wondering if there is a way to limit the dump queries to a low-priority state in which preference is given to live connections? The idea being that the load from external sites is not affected by the effort of MySQL to do a full dump… CA3LE: I have very large databases…

Replicate each row of data.frame and specify the number of replications for each row?

北城余情 submitted on 2019-11-27 15:49:51
I am programming in R and I have the following problem: I have a data frame jb that is quite long. Here's a simple version of it:

jb:                  jb.expanded:
 a   b  frequency     a   b
 5   3  2             5   3
 5   7  1             5   3
 9   1  40            5   7
12   4  5             9   1
13   9  1             ...

I want to replicate the rows, where the number of replications is given by the frequency column. That means the first row is replicated two times, the second row is replicated one time, and so on. I already solved that problem with the code jb.expanded <- jb[rep(row.names(jb), jb$frequency), 1:2] Now here is the problem: whenever any number in the frequency column is greater than…
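The row-expansion that rep(row.names(jb), jb$frequency) performs can be sketched in Python as well; a minimal analogue, using a hypothetical list of tuples mirroring the first rows of the example (the data here is illustrative, not the asker's full data frame):

```python
# Hypothetical rows mirroring the example: each tuple is (a, b, frequency).
jb = [(5, 3, 2), (5, 7, 1), (9, 1, 40)]

# Analogue of jb[rep(row.names(jb), jb$frequency), 1:2] in R:
# keep columns a and b, repeating each row 'frequency' times.
expanded = [(a, b) for a, b, freq in jb for _ in range(freq)]

print(len(expanded))  # 2 + 1 + 40 = 43 rows
print(expanded[:4])   # [(5, 3), (5, 3), (5, 7), (9, 1)]
```

The same shape of expression (repeat a row by its own count column) is what the R one-liner does; the question's follow-up issue concerns what happens when a frequency value is large.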

How can “set timestamp” be a slow query?

爷,独闯天下 submitted on 2019-11-27 15:03:28
My slow query log is full of entries like the following: # Query_time: 1.016361 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0 SET timestamp=1273826821; COMMIT; I guess the SET timestamp command is issued by replication, but I don't understand how SET timestamp can take over a second. Any ideas on how to fix this? Answer 1: Timestamp is a data type and a built-in function in MySQL. What are you trying to achieve with the following statement? SET timestamp=1273826821; UPD: I am sorry, I didn't know about the MySQL hacks being used. It seems that SET TIMESTAMP is used as a solution to exclude some queries…