failover

Is there a way to auto discover new cluster node IP in Redis Cluster with Lettuce

好久不见. submitted on 2021-01-29 06:58:30
Question: I have a Redis Cluster (3 masters and 3 slaves) running inside a Kubernetes cluster. The cluster is exposed via a Kubernetes Service (Kube-Service). I have my application server connected to the Redis Cluster (using the Kube-Service as the URI) via the Lettuce Java client for Redis. I also have the following client options set on the Lettuce connection object: ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder() .enablePeriodicRefresh(Duration
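
The excerpt cuts the builder off mid-call; below is a minimal sketch of how periodic and adaptive topology refresh are typically wired into a Lettuce cluster client. The Kube-Service DNS name and the 30-second interval are placeholders, not values from the question.

```java
import java.time.Duration;

import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

public class LettuceTopologyRefreshExample {
    public static void main(String[] args) {
        // Hypothetical seed node: the Kubernetes Service DNS name fronting the cluster.
        RedisURI seed = RedisURI.create("redis://redis-cluster.default.svc.cluster.local:6379");

        // Re-read the cluster topology periodically and on MOVED/ASK/connection events,
        // so IPs of rescheduled pods are picked up without restarting the client.
        ClusterTopologyRefreshOptions refresh = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofSeconds(30))   // interval is an assumption
                .enableAllAdaptiveRefreshTriggers()
                .build();

        RedisClusterClient client = RedisClusterClient.create(seed);
        client.setOptions(ClusterClientOptions.builder()
                .topologyRefreshOptions(refresh)
                .build());

        try (StatefulRedisClusterConnection<String, String> connection = client.connect()) {
            connection.sync().set("probe", "ok");
            System.out.println(connection.sync().get("probe"));
        } finally {
            client.shutdown();
        }
    }
}
```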

ProxySQL active-standby setup

守給你的承諾、 submitted on 2021-01-27 19:15:39
Question: My setup: two MySQL servers running with Master-Master replication using the third-party Tungsten Replicator (for legacy reasons, can't change that now). Typically this cluster is used as Active-Standby. In normal operation all queries should hit the first server; only if the first DB server fails should queries hit the secondary server. Master-Master is for the convenience of not needing any master-failover scripting. If the primary server comes back online, all queries should be sent to it. I'm now using

Automatic failover of PostgreSQL streaming replication with repmgr

橙三吉。 submitted on 2020-04-07 07:04:28
The configuration files and scripts used in this test are available at: https://github.com/lxgithub/repmgr_conf_scripts 1. System: IP HOSTNAME PG VERSION DIR OS 192.168.100.146 node1 9.3.4 /opt/pgsql CentOS6.4_x64 192.168.100.150 node2 9.3.4 /opt/pgsql CentOS6.4_x64 # cat /etc/issue CentOS release 6.5 (Final) Kernel \r on an \m # uname -a Linux barman 2.6.32-431.11.2.el6.x86_64 #1 SMP Tue Mar 25 19:59:55 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux # cat /etc/hosts 127.0.0.1 localhost.localdomain localhost.localdomain localhost4 localhost4.localdomain4 localhost node1 ::1 localhost.localdomain localhost.localdomain localhost6 localhost6.localdomain6 localhost node1 192

Sometime we need to open encryption key manually after failover and sometime not

天涯浪子 submitted on 2020-03-04 16:31:48
Question: I have migrated my database from an on-premises SQL Server to a managed instance using native restore from URL, and configured a failover group for it. I have opened the encryption key on both the primary and secondary databases, but I still sometimes need to re-open the encryption key after a failover. Source: https://stackoverflow.com/questions/60067684/sometime-we-need-to-open-encryption-key-manually-after-failover-and-sometime-not

Detecting batch IP conflict

穿精又带淫゛_ submitted on 2020-01-16 19:05:28
Question: How would you detect an IP conflict? I am trying to implement failover between two systems. Let us assume they take the IPs X.X.X.1 and X.X.X.2 (A and B for convenience), with A as the primary server and B as the backup. Both A and B will continuously ping X.X.X.1. Should A ever go down, B will detect "request timed out" and convert itself to X.X.X.1 using the following command: netsh int ipv4 set address name="Local Area Connection" source=static address=X.X.X.1 mask=255.255.255.0 gateway
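
The excerpt describes the takeover half of the scheme (keep pinging the primary address, switch to it once pings time out). Below is a rough Java sketch of that loop; the address, interface name, mask, gateway and retry count are all placeholders for illustration.

```java
import java.io.IOException;
import java.net.InetAddress;

public class FailoverWatcher {
    // All values below are assumptions for illustration, not from the question.
    private static final String PRIMARY_IP = "192.168.1.1";
    private static final int TIMEOUT_MS = 2000;
    private static final int FAILURES_BEFORE_TAKEOVER = 3;

    public static void main(String[] args) throws Exception {
        int failures = 0;
        while (failures < FAILURES_BEFORE_TAKEOVER) {
            // isReachable() uses ICMP where permitted, otherwise a TCP echo probe.
            boolean up = InetAddress.getByName(PRIMARY_IP).isReachable(TIMEOUT_MS);
            failures = up ? 0 : failures + 1;
            Thread.sleep(1000);
        }
        takeOverAddress();
    }

    private static void takeOverAddress() throws IOException, InterruptedException {
        // Equivalent of the netsh command quoted in the question; the interface
        // name, mask and gateway are placeholders.
        Process p = new ProcessBuilder(
                "netsh", "int", "ipv4", "set", "address",
                "name=Local Area Connection", "source=static",
                "address=" + PRIMARY_IP, "mask=255.255.255.0",
                "gateway=192.168.1.254")
                .inheritIO()
                .start();
        p.waitFor();
    }
}
```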

ActiveMQ failover with temporary queues on a network of brokers?

十年热恋 submitted on 2020-01-06 15:01:33
Question: We have a network of four brokers, two "front-end" and two "back-end" (I'll refer to them as FB1, FB2, BB1, BB2). They are networked in a square like so: FB1 .... FB2 . . . . . . BB1 .... BB2 The network connections are set to exclude specific queues but otherwise allow forwarding of all other queues and topics. The network connectors have failover defined between front-end and back-end, so that if, for example, BB1 goes down, FB1 should fail over and establish a new network connection to BB2. Clients
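
For context, a client talking to the front-end pair would typically use a failover URI and recreate its temporary reply queue after a reconnect, since temporary queues live on the broker the client is connected to. A minimal sketch with hypothetical broker hostnames and queue name:

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverTempQueueClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical URLs for the two front-end brokers; randomize=false keeps
        // FB1 as the preferred broker and FB2 as the fallback.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://fb1:61616,tcp://fb2:61616)?randomize=false");

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Temporary queues are owned by the connected broker; after a failover
        // the client must create a new one and advertise it again.
        TemporaryQueue replyQueue = session.createTemporaryQueue();

        MessageProducer producer = session.createProducer(session.createQueue("requests"));
        TextMessage request = session.createTextMessage("ping");
        request.setJMSReplyTo(replyQueue);
        producer.send(request);

        connection.close();
    }
}
```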

Understanding flink savepoints & checkpoints

↘锁芯ラ submitted on 2020-01-06 08:05:43
Question: Consider an Apache Flink streaming application with a pipeline like this: Kafka-Source -> flatMap 1 -> flatMap 2 -> flatMap 3 -> Kafka-Sink, where every flatMap function is a non-stateful operator (e.g. the normal .flatMap function of a DataStream). How do checkpoints/savepoints work in case an incoming message is still pending at flatMap 3? Will the message be reprocessed after a restart, beginning from flatMap 1, or will it skip to flatMap 3? I am a bit confused, because the documentation
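
Below is a minimal sketch of such a pipeline with checkpointing enabled; in-memory elements and print() stand in for the Kafka connectors, whose APIs vary by Flink version. The comment states the replay behaviour the question is asking about (the source rewinds to checkpointed offsets, so in-flight records pass through the whole chain again).

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointedPipeline {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 10s. On failure the job restarts from the last checkpoint
        // and the Kafka source rewinds to the offsets stored in that checkpoint, so a
        // record that was pending at flatMap 3 is re-read and flows through
        // flatMap 1 -> 2 -> 3 again; stateless operators keep nothing themselves.
        env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);

        env.fromElements("a", "b", "c")               // stand-in for the Kafka source
           .flatMap((FlatMapFunction<String, String>) (value, out) -> out.collect(value + "-1"))
           .returns(String.class)
           .flatMap((FlatMapFunction<String, String>) (value, out) -> out.collect(value + "-2"))
           .returns(String.class)
           .print();                                   // stand-in for the Kafka sink

        env.execute("checkpointed-pipeline");
    }
}
```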

mysql failover: how to choose slave as new master?

跟風遠走 submitted on 2020-01-03 13:33:06
Question: I'm a MySQL newbie. When it comes to failover, which slave should be promoted to the new master? For example, A is the master, B and C are slaves, and A replicates asynchronously to B and C. At some point B has received more data from A than C, and A crashes. If we promote C to be the new master and change B's master to C, what happens to B? Does it truncate its data to match C? Obviously B is the better new-master candidate, but my question is: how do we determine that fact? Answer 1: From the MySQL
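
The usual way to decide is to compare how far each slave has read in the crashed master's binlog (SHOW SLAVE STATUS on each candidate) and promote the most advanced one. A rough JDBC sketch of that comparison, with placeholder hosts and credentials:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.List;

public class MostAdvancedSlave {
    public static void main(String[] args) throws Exception {
        // Hypothetical slave JDBC URLs and credentials.
        List<String> slaves = List.of("jdbc:mysql://B:3306/", "jdbc:mysql://C:3306/");

        String best = null;
        String bestFile = "";
        long bestPos = -1;

        for (String url : slaves) {
            try (Connection conn = DriverManager.getConnection(url, "repl_admin", "secret");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SHOW SLAVE STATUS")) {
                if (rs.next()) {
                    // Binlog coordinates of the old master that this slave has received.
                    String file = rs.getString("Master_Log_File");
                    long pos = rs.getLong("Read_Master_Log_Pos");
                    if (file.compareTo(bestFile) > 0
                            || (file.equals(bestFile) && pos > bestPos)) {
                        best = url;
                        bestFile = file;
                        bestPos = pos;
                    }
                }
            }
        }
        System.out.println("Promote the slave at: " + best);
    }
}
```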

JMS MessageConsumer Using MessageListener Terminates on ActiveMQ Shutdown

自作多情 submitted on 2020-01-03 02:59:10
Question: I'm trying to have a JMS MessageConsumer survive ActiveMQ reboots so it can reconnect using the failover transport protocol. However, it terminates upon shutdown of ActiveMQ. This looks like a bug that was reported and "resolved", but I'm still seeing it in the latest version of ActiveMQ, 5.10.0. I used the following Maven dependency: <dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-all</artifactId> <version>5.10.0</version> </dependency> Here is some sample code using
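
The asker's sample code is cut off by the excerpt; below is a minimal sketch of a MessageListener consumer over the failover transport (broker URL and queue name are placeholders, not the asker's originals), with the reconnect behaviour spelled out in the URI.

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ListeningConsumer {
    public static void main(String[] args) throws Exception {
        // maxReconnectAttempts=-1 keeps the failover transport retrying indefinitely
        // while the broker is down instead of giving up and ending the consumer.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://localhost:61616)?maxReconnectAttempts=-1");

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("test.queue"));

        consumer.setMessageListener(message -> {
            try {
                System.out.println("Received: " + ((TextMessage) message).getText());
            } catch (Exception e) {
                e.printStackTrace();
            }
        });

        // Keep the JVM alive; the failover transport handles broker restarts.
        Thread.currentThread().join();
    }
}
```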