downtime

How to move Git repositories and minimize downtime

Submitted by ε祈祈猫儿з on 2019-12-23 02:57:25
Question: I will be moving Git repositories from an older SCM server to a new one. My main concern (other than fidelity, of course) is to minimize downtime. Here is my plan:

1. On the new machine, clone each repository using git clone --mirror.
2. Copy over the repo hooks for each repository.
3. Disallow access to the old server (we use gitosis, so remove access for all users except the new server).
4. Move the DNS entry so the DNS alias Git users use points at the new server.
5. Perform a git pull for each repository on the new server. For each …
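The plan above can be sketched as shell functions. The host, paths, and repository names here are hypothetical placeholders, and one detail is worth correcting: a --mirror clone is bare (no working tree), so the final sync in step 5 is a git fetch rather than a git pull:

```shell
# Sketch of the migration plan above, run from the new server.
# Host, paths, and repo names are hypothetical placeholders.

migrate_repo() {
    # Step 1: mirror-clone the repository (a bare clone of every ref).
    old_url=$1; dest=$2
    git clone --mirror "$old_url" "$dest"
}

final_sync() {
    # Step 5: after locking out users and moving DNS, pick up any
    # last-minute pushes. A mirror clone is bare (no working tree),
    # so this is a fetch rather than a pull.
    git -C "$1" fetch --prune origin
}

# Hypothetical usage:
#   migrate_repo ssh://git.old.example.com/srv/git/project.git /srv/git/project.git
#   scp -r git.old.example.com:/srv/git/project.git/hooks /srv/git/project.git/  # step 2
#   ... step 3: revoke gitosis access; step 4: repoint the DNS alias ...
#   final_sync /srv/git/project.git
```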

Datastore and task queue downtime correlation

Submitted by 為{幸葍}努か on 2019-12-14 02:28:18
Question: What correlation is there between datastore and task queue downtime? (I'd like to use the task queue to defer some operations in case of datastore downtime.)

Answer 1: The Task Queue should generally be more durable than the datastore, as it's a simpler system, but there's no guarantee that they won't both experience a simultaneous outage.

Source: https://stackoverflow.com/questions/3800252/datastore-and-task-queue-downtime-correlation
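As a generic illustration of the idea in the question, here is a plain-Python sketch of "write, and defer to a queue on failure". This is not the App Engine API; save_to_datastore and the in-memory deque are stand-ins for the real datastore client and task queue:

```python
from collections import deque

task_queue = deque()  # stand-in for a durable task queue

def save_to_datastore(entity, fail=False):
    """Stand-in datastore write; fail=True simulates an outage."""
    if fail:
        raise RuntimeError("datastore unavailable")
    return True

def save_or_defer(entity, datastore_down=False):
    """Try the write; on failure, enqueue a retry task instead of losing it.

    This leans on the answer's point that the queue is usually more durable
    than the datastore, though a simultaneous outage of both is possible.
    """
    try:
        return save_to_datastore(entity, fail=datastore_down)
    except RuntimeError:
        task_queue.append(("retry_save", entity))
        return False

save_or_defer({"id": 1})                       # succeeds directly
save_or_defer({"id": 2}, datastore_down=True)  # deferred to the queue
print(len(task_queue))  # → 1
```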

Why is my WordPress website so slow, and why am I having so much downtime?

Submitted by 随声附和 on 2019-12-13 06:06:32
Question: I have used YSlow and PageSpeed to find the cause, but I can't figure out why my blog http://www.fotokringarnhem.nl sometimes loads blazingly fast (cached files, I guess) and other times takes 10 seconds or longer to load. I am on a shared server, but haven't had problems like this with other websites on shared servers. I'm using CloudFlare to speed things up, but to no avail. Am I missing something? Pingdom reports of the last 30 days (also see http://stats …

Zero downtime on Heroku

Submitted by 扶醉桌前 on 2019-12-12 07:45:52
Question: Is it possible to do something like the GitHub zero-downtime deploy on Heroku using Unicorn on the Cedar stack? I'm not entirely sure how the restart works on Heroku or what control we have over restarting processes, but I like the possibility of zero-downtime deploys, and up until now, from what I've read, it hasn't seemed possible. There are a few things that would be required for this to work. First off, we'd need backwards-compatible migrations. I leave that up to our team to figure out. Secondly, …
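One commonly cited building block (sketched here from Heroku's Unicorn guidance, so treat the specifics as assumptions) is translating Heroku's SIGTERM into Unicorn's graceful SIGQUIT in config/unicorn.rb. This gives graceful worker shutdown on restart, though it is not by itself a zero-downtime deploy:

```ruby
# config/unicorn.rb -- signal-handling sketch for Unicorn on Heroku.
# Values are illustrative; this is not a complete zero-downtime setup.
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 15
preload_app true

before_fork do |server, worker|
  # Heroku stops dynos with SIGTERM, which Unicorn treats as an
  # immediate shutdown; re-signal QUIT so in-flight requests finish.
  Signal.trap "TERM" do
    puts "Master intercepting TERM, sending QUIT for graceful shutdown"
    Process.kill "QUIT", Process.pid
  end
end
```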

Zero downtime deployment Nodejs application

Submitted by 对着背影说爱祢 on 2019-12-08 05:26:38
Question: I have a Node.js application that includes clustering for uptime and domains for error handling. Now, to achieve zero-downtime deployment, I have a set of instructions, but I need help turning them into Node.js code (an example, please). These are the instructions:

1. When the master starts, give it a symlink to the worker code.
2. After deploying new code, update the symlink.
3. Send a signal to the master: fork new workers!
4. The master tells the old workers to shut down and forks new workers from the new code.

Migrating `int` to `bigint` in PostgreSQL without any downtime?

Submitted by [亡魂溺海] on 2019-12-04 03:47:50
Question: I have a database that is going to hit the integer exhaustion problem that Basecamp famously faced back in November. I have several months to figure out what to do. Is there a proactive, no-downtime solution to migrating this column type? If so, what is it? If not, is it just a matter of eating the downtime and migrating the column when I can? Is this article sufficient, assuming I have several days/weeks to perform the migration now before I'm forced to do it when I run out of …
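One widely used no-downtime pattern, sketched here as an illustration (the table and column names are hypothetical, and this is not from the question), is to add a bigint column alongside the old one, backfill it in batches, and swap in one short transaction:

```sql
-- Hypothetical table "events" whose integer PK "id" is nearing 2^31.
ALTER TABLE events ADD COLUMN id_bigint bigint;
-- Keep id_bigint in sync for new rows via a trigger or application code.

-- Backfill in small batches to avoid long-held locks:
UPDATE events SET id_bigint = id
WHERE id BETWEEN 1 AND 100000 AND id_bigint IS NULL;
-- ...repeat for subsequent id ranges...

-- Finally, swap in one short transaction:
BEGIN;
ALTER TABLE events DROP CONSTRAINT events_pkey;
ALTER TABLE events RENAME COLUMN id TO id_old;
ALTER TABLE events RENAME COLUMN id_bigint TO id;
ALTER TABLE events ADD PRIMARY KEY (id);
COMMIT;
```

In practice the final ADD PRIMARY KEY scans the table, so the usual refinement is to build a unique index beforehand with CREATE UNIQUE INDEX CONCURRENTLY and attach it via ADD CONSTRAINT ... PRIMARY KEY USING INDEX.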

Zero downtime on Heroku

Submitted by 試著忘記壹切 on 2019-12-03 12:21:47
Question: Is it possible to do something like the GitHub zero-downtime deploy on Heroku using Unicorn on the Cedar stack? I'm not entirely sure how the restart works on Heroku or what control we have over restarting processes, but I like the possibility of zero-downtime deploys, and up until now, from what I've read, it hasn't seemed possible. There are a few things that would be required for this to work. First off, we'd need backwards-compatible migrations. I leave that up to our team to figure out. Secondly, we'd want to migrate the db right after a push, but before the restart (assuming our migrations are …

Migrating `int` to `bigint` in PostgreSQL without any downtime?

Submitted by 安稳与你 on 2019-12-01 18:38:58
I have a database that is going to hit the integer exhaustion problem that Basecamp famously faced back in November. I have several months to figure out what to do. Is there a proactive, no-downtime solution to migrating this column type? If so, what is it? If not, is it just a matter of eating the downtime and migrating the column when I can? Is this article sufficient, assuming I have several days/weeks to perform the migration now before I'm forced to do it when I run out of ids?

Answer (Laurenz Albe): Use logical replication. With logical replication you can have different data …
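The logical-replication route the answer points to can be roughly sketched as follows (PostgreSQL 10+; the publication name, host, and credentials are hypothetical):

```sql
-- On the old (source) database: publish the affected tables.
CREATE PUBLICATION bigint_migration FOR TABLE events;

-- On the new database: create the same schema, but with bigint columns,
-- then subscribe. The initial sync copies existing rows; subsequent
-- changes stream in continuously.
CREATE SUBSCRIPTION bigint_migration
    CONNECTION 'host=old-db dbname=app user=repl password=secret'
    PUBLICATION bigint_migration;

-- Once the subscription has caught up, briefly stop writes, let the last
-- changes replicate, and switch the application to the new database.
```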

Erlang's 99.9999999% (nine nines) reliability

Submitted by 拟墨画扇 on 2019-11-28 14:47:02
Question: Erlang was reported to have been used in production systems for over 20 years with an uptime percentage of 99.9999999%. I did the math as follows:

20 * 365.25 * 24 * 60 * 60 * (1 - 0.999999999) ≈ 0.631 s

That means the system had less than one second of downtime over a period of 20 years. I am not trying to challenge the validity of this; I am just curious how a system can be down (on purpose or by accident) for only 0.631 seconds. Could anyone who is familiar with large …
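The arithmetic checks out; as a quick sanity check in Python:

```python
# Nine nines of availability over 20 years (using 365.25-day years).
seconds_in_20_years = 20 * 365.25 * 24 * 60 * 60   # 631,152,000 s
downtime = seconds_in_20_years * (1 - 0.999999999)
print(round(downtime, 3))  # → 0.631 seconds
```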