replication

How to add a new node to my Elasticsearch cluster

冷暖自知 submitted on 2019-11-27 14:29:06
Question: My cluster has yellow health because it has only a single node, so the replicas remain unassigned simply because no other node is available to hold them. I want to create/add another node so Elasticsearch can begin allocating replicas to it. I have only one machine and I'm running ES as a service. I've found tons of sites with some information, but none of them explains clearly how to add another node to ES. Can someone explain which files I have to edit and what commands I have to
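In essence, a second node needs the same cluster.name as the first plus a way to discover it, while avoiding port clashes on the same machine. A rough, illustrative sketch follows (the setting names use the pre-7.x zen discovery style; the cluster name, host, and ports are placeholders, not values from the question):

```python
# Hypothetical sketch: build a minimal elasticsearch.yml for a second node
# joining an existing single-node cluster on the same machine.
# All names, hosts, and ports below are illustrative placeholders.
def second_node_config(cluster_name, first_node_transport_host):
    lines = [
        f"cluster.name: {cluster_name}",  # must match the existing node
        "node.name: node-2",
        # point discovery at the first node's transport address
        f'discovery.zen.ping.unicast.hosts: ["{first_node_transport_host}"]',
        "http.port: 9201",           # avoid clashing with node 1 on 9200
        "transport.tcp.port: 9301",  # avoid clashing with node 1 on 9300
    ]
    return "\n".join(lines)

config = second_node_config("my-cluster", "127.0.0.1:9300")
print(config)
```

Once both nodes see each other in the same cluster, the unassigned replica shards should be allocated to the new node automatically and the health should turn green.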

Updating AUTO_INCREMENT value of all tables in a MySQL database

邮差的信 submitted on 2019-11-27 13:07:15
It is possible to set/reset the AUTO_INCREMENT value of a MySQL table via ALTER TABLE some_table AUTO_INCREMENT = 1000. However, I need to set the AUTO_INCREMENT relative to its existing value (to fix M-M replication), something like: ALTER TABLE some_table SET AUTO_INCREMENT = AUTO_INCREMENT + 1, which does not work. Actually, I would like to run this query for all tables within a database, but that is not crucial. I could not find a way to deal with this problem except running the queries manually. Can you suggest something or point me to some ideas? Thanks. Using:
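Since `AUTO_INCREMENT = AUTO_INCREMENT + 1` is not valid SQL, one workaround is to read each table's current value from `information_schema.TABLES` and bake the incremented value into a generated `ALTER TABLE` statement, one per table. A minimal sketch (the sample rows below are made up; in practice they would come from a query against `information_schema`):

```python
# Sketch: given (table_name, current_auto_increment) pairs, emit one
# ALTER TABLE per table with the incremented value as a literal.
def build_bump_statements(rows, step=1):
    return [
        f"ALTER TABLE `{table}` AUTO_INCREMENT = {current + step}"
        for table, current in rows
        if current is not None  # tables without AUTO_INCREMENT report NULL
    ]

# Made-up sample data standing in for an information_schema query result.
stmts = build_bump_statements([("users", 1000), ("orders", 57), ("logs", None)])
```

Each generated statement can then be executed against the database, covering every table in one pass.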

postgresql的hot standby(replication stream)

坚强是说给别人听的谎言 submitted on 2019-11-27 11:54:48
Starting with the 9.x releases, PostgreSQL's hot-standby support gained a new capability: read/write splitting via Streaming Replication. It is a classic high-availability setup for PG, the traditional Hot Standby, comparable to Oracle Data Guard, SQL Server mirroring, or MySQL read/write splitting; compared with those other databases there are both similarities and differences, discussed later. Below are the installation and test steps for PG streaming replication. Environment: VMware Workstation 8.0; OS: CentOS 6.2; Database: PostgreSQL 9.1.3; two virtual hosts. MASTER: 192.168.2.130 SLAVE: 192.168.2.129 Environment variables: [postgres@localhost ~]$ echo $PGHOME /home/postgres [postgres@localhost ~]$ echo $PGDATA /database/pgdata Step 1: Install the PG database (omitted; on the slave you only need to install the binaries, without initializing a database). Step 2: Create the streaming-replication user. On the master, run: CREATE USER repuser replication LOGIN CONNECTION LIMIT 3 ENCRYPTED PASSWORD 'repuser'; Step 3: Configure the access file pg_hba.conf on the master by adding one line: host
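The pg_hba.conf line being added grants the replication user access from the slave. As an illustrative helper (the user name and slave address mirror the setup above; the md5 auth method is an assumption), the entry can be composed like this:

```python
# Sketch: compose the pg_hba.conf entry that allows the slave to connect
# for streaming replication. User and IP mirror the setup above; the
# "md5" auth method is an assumed default.
def replication_hba_line(user, slave_ip, auth="md5"):
    return f"host    replication    {user}    {slave_ip}/32    {auth}"

line = replication_hba_line("repuser", "192.168.2.129")
```

The special database keyword `replication` (rather than a real database name) is what marks the entry as a streaming-replication connection.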

A timeout occurred after 30000ms selecting a server using CompositeServerSelector

亡梦爱人 submitted on 2019-11-27 09:14:59
I'm trying to deploy my Mongo database to MongoLab; everything works fine and I create a new database. Please see my connection string: public DbHelper() { MongoClientSettings settings = new MongoClientSettings() { Credentials = new MongoCredential[] { MongoCredential.CreateCredential("dbname", "username", "password") }, Server = new MongoServerAddress("ds011111.mongolab.com", 11111), //ConnectTimeout = new TimeSpan(30000) }; Server = new MongoClient(settings).GetServer(); DataBase = Server.GetDatabase(DatabaseName); } But when I try to connect to the database, it shows an error like: Ragesh S I am
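A common cause of this timeout is the client giving up on server selection before it can actually reach the remote host. As a rough, driver-agnostic sketch (host, port, and credentials below are the placeholders from the question, not real values), the same settings can also be expressed as a connection URI with the timeouts made explicit:

```python
# Sketch: build a MongoDB connection URI with explicit timeouts.
# Host, port, and credentials are placeholders from the question.
def mongo_uri(user, password, host, port, db, timeout_ms=30000):
    return (
        f"mongodb://{user}:{password}@{host}:{port}/{db}"
        f"?connectTimeoutMS={timeout_ms}"
        f"&serverSelectionTimeoutMS={timeout_ms}"
    )

uri = mongo_uri("username", "password", "ds011111.mongolab.com", 11111, "dbname")
```

Raising `serverSelectionTimeoutMS` only masks the symptom, though; if the host, port, or credentials are wrong, the selector will still time out eventually.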

What is maximum Amazon S3 replication time on file upload?

拜拜、爱过 submitted on 2019-11-27 04:33:55
Question: Background: We use Amazon S3 in our project as storage for files uploaded by clients. For technical reasons, we upload a file to S3 under a temporary name, then process its contents and rename the file after it has been processed. Problem: The 'rename' operation fails time after time with a 404 (key not found) error, although the file being renamed had been uploaded successfully. The Amazon docs mention this problem: Amazon S3 achieves high availability by replicating data across multiple servers
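Because replication is eventually consistent, a freshly written key can briefly return 404 from some servers, so there is no fixed maximum time to wait for. The usual workaround is to retry the rename with backoff. A minimal sketch (the `copy_fn`/`delete_fn` callables are stand-ins for whatever S3 client is in use, and `KeyError` stands in for the 404 error):

```python
import time

# Sketch: retry an S3 "rename" (copy + delete) while replication catches up.
# copy_fn / delete_fn are stand-ins for real S3 client calls; KeyError
# stands in for the 404 "key not found" error.
def rename_with_retry(copy_fn, delete_fn, src, dst, attempts=5, base_delay=0.5):
    for i in range(attempts):
        try:
            copy_fn(src, dst)
            delete_fn(src)
            return True
        except KeyError:
            time.sleep(base_delay * (2 ** i))  # exponential backoff
    return False
```

The delete only runs after the copy succeeds, so a retried attempt never loses the original object.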

How to configure a replica set with MongoDB

本秂侑毒 submitted on 2019-11-27 03:38:20
Question: I've got a problem that I can't solve, partly because I can't explain it with the right terms; I'm new to this, so sorry for the clumsy question. Below you can see an overview of my goal. I want to configure a replica set in MongoDB. I tried this: use local db.dropDatabase() config = { _id: "rs0", members:[ {_id: 0, host: 'localhost:27017'}] } rs.initiate(config) I hope everything is correct, but it shows the following error message: { "errmsg" : "server is not
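The document passed to rs.initiate() is just an `_id` (the replica set name) plus a `members` array. As an illustrative sketch, it can be built programmatically (the host string is the one from the question):

```python
# Sketch: build the document passed to rs.initiate().
# The replica set name and hosts come from the question above.
def replica_set_config(set_name, hosts):
    return {
        "_id": set_name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

config = replica_set_config("rs0", ["localhost:27017"])
```

Note that the config itself is usually not the problem here: initiation errors like this commonly mean the mongod process was started without a matching `--replSet rs0` option (or `replication.replSetName` in the config file), so the server doesn't consider itself part of any replica set.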

INSERT … ON DUPLICATE KEY UPDATE with WHERE?

允我心安 submitted on 2019-11-27 00:37:44
I'm doing an INSERT ... ON DUPLICATE KEY UPDATE, but I need the update part to be conditional, only performing the update if some extra condition has changed. However, WHERE is not allowed on this UPDATE. Is there any workaround? I can't use combinations of INSERT/UPDATE/SELECT since this needs to work over replication. LouisXIV: I suggest you use IF() for that. Refer to: conditional-duplicate-key-updates-with-mysql INSERT INTO daily_events (created_on, last_event_id, last_event_created_at) VALUES ('2010-01-19', 23, '2010-01-19 10:23:11') ON DUPLICATE KEY UPDATE last_event_id = IF(last
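The trick in that answer is to fold the would-be WHERE condition into each assignment as IF(condition, new_value, old_value), so the column keeps its old value whenever the condition is false. An illustrative helper that generates such a clause (the column names match the example above):

```python
# Sketch: generate a "col = IF(condition, new_value, col)" assignment,
# which emulates a WHERE clause inside ON DUPLICATE KEY UPDATE.
def conditional_assignment(col, new_value, condition):
    return f"{col} = IF({condition}, {new_value}, {col})"

clause = conditional_assignment(
    "last_event_id",
    "VALUES(last_event_id)",
    "last_event_created_at < VALUES(last_event_created_at)",
)
```

Because the whole INSERT remains a single statement, it replicates cleanly, unlike a read-then-write done in application code.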

Tomcat: Store session in database [closed]

有些话、适合烂在心里 submitted on 2019-11-26 21:39:55
Question: I am searching for a way to avoid in-memory session replication/clustering and store the session in a database. Using Tomcat's JDBCStore is useless here, because it only stores inactive sessions in the database to save server memory. Any suggestions? Thanks up front. Fabian Answer 1: If you don't want to

replication between SQL Server and MySQL server

北城余情 submitted on 2019-11-26 21:26:26
Question: I want to set up replication between SQL Server and MySQL, in which SQL Server is the primary database server and MySQL is the slave server (on Linux). Is there a way to set up such a scenario? Answer 1: My answer might be coming too late, but still, for future reference... You can use one of the heterogeneous replication solutions, such as SymmetricDS: http://www.symmetricds.org/. It can replicate data between any SQL database and any other SQL database, although the overhead is higher than using a

Full complete MySQL database replication? Ideas? What do people do?

删除回忆录丶 submitted on 2019-11-26 20:39:11
Question: Currently I have two Linux servers running MySQL: one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server), and another a couple of miles away on a 3 Mbit/s upload pipe (mirror). I want to replicate data between both servers continuously, but have run into several roadblocks. One of them is that under MySQL master/slave configurations, every now and then some statements are dropped (!), meaning some people logging on to the mirror URL don't see data that I know is
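A crude way to notice silently dropped statements is to periodically compare per-table row counts (or checksums) between the master and the mirror; tools like Percona Toolkit's pt-table-checksum do this properly. A toy sketch of the idea (the dicts stand in for per-table SELECT COUNT(*) results gathered from each server):

```python
# Sketch: flag tables whose row counts differ between master and mirror.
# The dicts stand in for per-table COUNT(*) results from each server.
def find_drifted_tables(master_counts, mirror_counts):
    return sorted(
        table
        for table, count in master_counts.items()
        if mirror_counts.get(table) != count
    )

# Made-up sample counts for illustration.
drifted = find_drifted_tables(
    {"users": 120, "posts": 4510, "logs": 88000},
    {"users": 120, "posts": 4498, "logs": 88000},
)
```

Row counts can coincide even when content differs, which is why production tools compare chunked checksums rather than counts alone.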