replication

TCP Provider: The semaphore timeout period has expired

让人想犯罪 __ Submitted on 2019-12-06 13:12:54
In SQL Server 2008 R2 on Windows Server 2008 R2, when I start replication I get this error: TCP Provider: The semaphore timeout period has expired. Please help me solve it. Solution: to fix the error System.IO.IOException: The semaphore timeout period has expired, I made the following NIC changes. Open the Network and Sharing Center, right-click the Ethernet adapter >> Properties, select Client for Microsoft Networks, click the Configure tab, then the Advanced tab, and disable the following settings: IPv4 Checksum Offload, Large Send Offload V1 (IPv4), Large Send Offload

Can I set up a filtered, star-pattern database replication?

血红的双手。 Submitted on 2019-12-06 12:35:40
Question: We have a client that needs to set up N local databases, each one containing one site's data, and then have a master corporate database containing the union of all N databases. Changes in an individual site database need to be propagated to the master database, and changes in the master database need to be propagated to the appropriate individual site database. We've been using MySQL replication for a client that needs two databases that are kept simultaneously up to date. That's a

SQL Server replication without deletes?

北慕城南 Submitted on 2019-12-06 11:55:13
Is there a way to replicate a SQL Server database but not push out deletes to the subscribers? Do this: drop the article. Create a new stored procedure in the corresponding database that mimics the system stored procedure (sp_del...) and takes the same parameters but does nothing. Add the article again, and set the delete stored procedure under the article's properties to the new delete stored procedure that you created. Or you can select Do Not Replicate Delete Statements... I think that works, but I haven't tried it. You don't mention which version of SQL Server you're running, but Andy
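
A hedged T-SQL sketch of the two options described above, assuming a transactional publication named MyPub and an article MyTable whose primary key is a single int column (all names here are illustrative):

    -- Option 1: a custom delete procedure with the same parameter(s) as the
    -- auto-generated sp_MSdel_* procedure, but with an empty body, so replicated
    -- DELETEs are silently swallowed in the subscription database.
    CREATE PROCEDURE dbo.usp_noop_del_MyTable
        @pkc1 int          -- assumed single-column int primary key
    AS
    BEGIN
        SET NOCOUNT ON;    -- do nothing: the delete is not applied
    END
    GO

    -- Option 2: tell the publisher not to propagate DELETE statements at all
    -- by setting the article's delete command to NONE (the T-SQL equivalent of
    -- the "do not replicate DELETE statements" article property).
    EXEC sp_changearticle
        @publication = N'MyPub',
        @article = N'MyTable',
        @property = N'del_cmd',
        @value = N'NONE',
        @force_invalidate_snapshot = 1;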

How do I add a new replicated table to a SQL Server 2005 DB that is in merge replication?

耗尽温柔 Submitted on 2019-12-06 11:25:05
Question: We have merge replication set up over a distributed environment (50 to 1500 km between offices) for a SQL Server 2005 database of about 350 GB. We now need to add a couple of new tables that must also be in replication, but without pushing the new snapshot to all the subscribers. Is this possible, and if so, what would be the best way to go about doing this? Answer 1: sp_addmergearticle - adds an article to an existing merge publication. This stored procedure is executed at the Publisher on the
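
A hedged T-SQL sketch, assuming a merge publication named MergePub and a new table dbo.NewTable (names are illustrative; other arguments are left at their defaults):

    -- Run at the Publisher, in the publication database.
    EXEC sp_addmergearticle
        @publication = N'MergePub',
        @article = N'NewTable',
        @source_owner = N'dbo',
        @source_object = N'NewTable',
        @type = N'table',
        @force_invalidate_snapshot = 1,   -- the existing snapshot becomes invalid
        @force_reinit_subscription = 0;   -- do not force a full reinitialization

    -- Regenerate the snapshot so the new article can be initialized at the
    -- subscribers; depending on the publication's snapshot settings this should
    -- avoid redelivering the full snapshot.
    EXEC sp_startpublication_snapshot @publication = N'MergePub';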

CouchDB replication strategy with dynamic groups of users

落花浮王杯 Submitted on 2019-12-06 10:01:24
Question: This is the situation: we have a series of users who share some documents. The documents they can share might change throughout the day, and so can the documents themselves (changes and deletions). The users can change some information on the documents. E.g.

Users | Documents
A     | X
A     | Y
A     | Z
B     | X
B     | Z
C     | Y

Possible groups: A+C, A+B. The CouchDB server is a replica of a SQL Server DB with this data, and an ETL takes care of managing changes on CouchDB. However, the CouchDB database is

Keep Solr slaves in sync

与世无争的帅哥 Submitted on 2019-12-06 09:32:12
We have a master-slave setup running Solr 6.5.0. There is a backend process running 24/7 which pushes its data to the master server. No commit is done on the master. The web frontend accesses the slave. The replication poll interval is 1 hour. All is fine so far, but now, as traffic grows, the CPU load on the slave is really high. I thought the best thing would be to add a second slave to the master and let the web servers connect via the existing load balancers to the two Solr slave machines. I think the two Solr slaves will handle their replication independently and each slave will poll the
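
For reference, a second slave would simply carry the same replication handler configuration in its solrconfig.xml and poll the master on its own schedule; a sketch (host, core name and the one-hour pollInterval are placeholders matching the setup described):

    <!-- solrconfig.xml on each slave -->
    <requestHandler name="/replication" class="solr.ReplicationHandler">
      <lst name="slave">
        <str name="masterUrl">http://solr-master:8983/solr/mycore/replication</str>
        <str name="pollInterval">01:00:00</str> <!-- HH:mm:ss -->
      </lst>
    </requestHandler>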

Snapshot of EBS volume used for replication

与世无争的帅哥 Submitted on 2019-12-06 09:12:16
I set up an EC2 instance with MySQL on an EBS volume and set up another instance which acts as a slave for replication. The replication setup was fine. My question is about taking snapshots of these volumes. I noticed that the tables need to be locked for the snapshot process, which may cause inconvenience for the users. So my idea is to leave the master instance alone and take a snapshot of the instance acting as the slave. Is this a good idea? Is there anyone out there with a similar setup who could guide me in the right way? Also, taking a snapshot of the slave instance would require locking of tables. Would that mean
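
For reference, the usual pattern for a consistent snapshot taken on the slave looks roughly like the sketch below (standard MySQL statements; the EBS snapshot itself is triggered outside the database, via the AWS console or API, while the same session holds the lock):

    -- On the slave, in one session: stop applying replication events
    STOP SLAVE SQL_THREAD;
    -- Quiesce writes; keep this session (and its lock) open
    FLUSH TABLES WITH READ LOCK;
    -- Optionally record the replication position for reference
    SHOW SLAVE STATUS;
    -- ... initiate the EBS snapshot here ...
    -- Once the snapshot has been initiated, release the lock and resume
    UNLOCK TABLES;
    START SLAVE SQL_THREAD;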

Is there a useDirtyFlag option for Tomcat 6 cluster configuration?

梦想与她 Submitted on 2019-12-06 08:25:32
Question: In Tomcat 5.0.x you had the ability to set useDirtyFlag="false" to force replication of the session after every request rather than checking for set/removeAttribute calls. <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster" managerClassName="org.apache.catalina.cluster.session.SimpleTcpReplicationManager" expireSessionsOnShutdown="false" useDirtyFlag="false" doClusterLog="true" clusterLogName="clusterLog"> ... The comments in server.xml stated this may be used to

Is two-way sync between gerrit and github.com possible?

我的未来我决定 Submitted on 2019-12-06 08:04:52
For a project existing in a github.com private repository, I am setting up Gerrit code review. I am using Gerrit's replication plugin to keep the Gerrit repository in sync with github.com. But if someone commits (say commit-a) and pushes directly to github.com, commit-a is overwritten on github.com when Gerrit runs the replication process (because it replicates only what is in the Gerrit mirror). But I want to implement a two-way sync: something like, whenever a push is made to Gerrit, it has to check github.com and update its mirror with new code from there and then to continue the

Step by step instruction for secure replication?

北慕城南 Submitted on 2019-12-06 06:39:25
Question: Not sure if this question should rather be on Server Fault? I have a CouchDB setup on my server using Apache credentials (but I can switch that off if it is a distraction). I have local instances on various laptops. Now I want to set up secure (continuous) replication. From my understanding I could use username/password, SSL certificates or OAuth. I found bits and pieces of information: the SSL question on Server Fault, the wiki entry on replication, the how-to on replication in the wiki, the gist on 1.2
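
As an illustration only (host, database name and credentials are placeholders): a continuous pull replication over HTTPS with basic-auth credentials can be started by POSTing a document like the following to the local CouchDB /_replicate endpoint:

    {
      "source": "https://admin:secret@server.example.com/mydb",
      "target": "mydb",
      "continuous": true,
      "create_target": true
    }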