backup

Is “git push --mirror” sufficient for backing up my repository?

Submitted by 心不动则不痛 on 2019-11-27 18:06:11

I'm a solo developer, working in a local Git repository. For backups, I want to send an exact copy of that repository off to another server. Is it sufficient to do this?

git push --mirror

I'm asking because I can sometimes run this command two or three times before Git tells me "Everything up-to-date", so apparently it's not an exact mirror. It seems to be re-pushing tracking branches...?

$ git push --mirror
Counting objects: 42, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (30/30), done.
Writing objects: 100% (30/30), 5.09 KiB, done.
Total 30 (delta 17), reused 0
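A likely explanation for the repeated pushes: --mirror mirrors everything under refs/, including refs/remotes/, and a successful push itself updates the remote-tracking refs for the target remote, so it can take a couple of rounds before both sides converge. A minimal backup sketch along these lines; the host name and path are hypothetical:

# One-time setup: a bare repository on the backup server
# (backup-host and /srv/backups/myrepo.git are placeholders).
ssh backup-host 'git init --bare /srv/backups/myrepo.git'
git remote add backup ssh://backup-host/srv/backups/myrepo.git

# Mirror every ref (branches, tags, remote-tracking refs) and prune
# refs deleted locally; repeat until "Everything up-to-date".
git push --mirror backup

If the backup host can reach your machine instead, a git clone --mirror there plus periodic git fetch sidesteps the tracking-ref churn entirely.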

Backup AWS Dynamodb to S3

Submitted by 天大地大妈咪最大 on 2019-11-27 18:00:46

It has been suggested in the Amazon docs (http://aws.amazon.com/dynamodb/), among other places, that you can back up your DynamoDB tables using Elastic MapReduce. I have a general understanding of how this could work, but I couldn't find any guides or tutorials on it. So my question is: how can I automate DynamoDB backups (using EMR)? So far, I think I need to create a "streaming" job with a map function that reads the data from DynamoDB and a reduce function that writes it to S3, and I believe these could be written in Python (or Java or a few other languages). Any comments, clarifications, or code samples would be appreciated.
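No EMR walkthrough is reproduced here, but for comparison, a much simpler non-EMR sketch using the AWS CLI (table and bucket names are placeholders; a full Scan consumes read capacity, so run it off-peak or throttled):

# Dump the whole table to JSON and ship it to S3 (the aws CLI paginates
# the Scan automatically). "mytable" and "mybucket" are placeholders.
aws dynamodb scan --table-name mytable --output json > mytable-backup.json
aws s3 cp mytable-backup.json s3://mybucket/backups/mytable-$(date +%F).json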

Piping data on Windows command prompt

Submitted by 橙三吉。 on 2019-11-27 17:54:22

Question: I need to get a backup dump of a large (~8 GB) SVN repository. My current method involves using svnadmin dump to a file and then using 7-Zip to compress and split the file:

> svnadmin dump c:\path\to\myrepo > c:\svndump.svn
> 7z svndump.svn svndump.7z  // or whatever the correct syntax is

I was wondering if there would be a way to skip the middle-man here and get the svn dump data compressed in one go, by using pipes or something. Is this possible? What would the syntax be?

Answer 1: svnadmin
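The answer is cut off after "svnadmin", but the usual piped form relies on 7-Zip's -si switch, which reads the data to compress from stdin. A sketch, with the stored member name given after -si:

:: No intermediate .svn file on disk; 7-Zip compresses the stream as
:: it arrives and stores it under the name svndump.svn.
svnadmin dump c:\path\to\myrepo | 7z a -sisvndump.svn c:\svndump.7z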

s3cmd failed too many times

Submitted by 孤街浪徒 on 2019-11-27 17:35:58

I used to be a happy s3cmd user. However, recently when I try to transfer a large zip file (~7 GB) to Amazon S3, I get this error:

$> s3cmd put thefile.tgz s3://thebucket/thefile.tgz
....
 20480 of 7563176329    0% in    1s    14.97 kB/s  failed
WARNING: Upload failed: /thefile.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=1.25)
WARNING: Waiting 15 sec...
thefile.tgz -> s3://thebucket/thefile.tgz  [1 of 1]
  8192 of 7563176329    0% in    1s     5.57 kB/s  failed
ERROR: Upload of 'thefile.tgz' failed too many times. Skipping that file.

I am using the latest s3cmd on Ubuntu. Why is it failing?
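A common cause for failures at this size is S3's 5 GB cap on a single PUT, which older s3cmd releases did not work around. A workaround sketch: split the file, upload the parts, and concatenate them after download (newer s3cmd versions can instead do multipart uploads via --multipart-chunk-size-mb):

# Split into 1 GB chunks s3cmd can handle, then upload each part.
split -b 1G thefile.tgz thefile.tgz.part-
for part in thefile.tgz.part-*; do
    s3cmd put "$part" "s3://thebucket/$part"
done
# To restore: download the parts and run
#   cat thefile.tgz.part-* > thefile.tgz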

SQL Server command line backup statement

Submitted by 那年仲夏 on 2019-11-27 17:23:58

Does anyone know if there is a way to script a SQL Server backup into a batch file, so that it can be executed from the command line? Here's an example you can run as a batch script (copy-paste into a .bat file), using the SQLCMD utility in the SQL Server client tools:

BACKUP:
echo off
cls
echo -- BACKUP DATABASE --
set /p DATABASENAME=Enter database name:
:: filename format Name-Date (eg MyDatabase-2009.5.19.bak)
set DATESTAMP=%DATE:~-4%.%DATE:~7,2%.%DATE:~4,2%
set BACKUPFILENAME=%CD%\%DATABASENAME%-%DATESTAMP%.bak
set SERVERNAME=your server name here
echo.
sqlcmd -E -S %SERVERNAME% -d master
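The sqlcmd line above is truncated; a plausible completion, assuming a standard BACKUP DATABASE statement (the WITH options from the original answer are not shown, so these are illustrative):

:: ^ continues the command onto the next line in a batch file.
sqlcmd -E -S %SERVERNAME% -d master ^
  -Q "BACKUP DATABASE [%DATABASENAME%] TO DISK = N'%BACKUPFILENAME%' WITH INIT, STATS = 10"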

How can I slow down a MySQL dump as to not affect current load on the server?

Submitted by 不羁岁月 on 2019-11-27 17:06:26

While doing a MySQL dump is easy enough, I have a live dedicated MySQL server that I want to set up replication on. To do this, I need dumps of the databases to import to my replication slave. The issue is that when I do the dumps, MySQL goes at it full force and ties up resources needed by the sites connecting to it. I am wondering if there is a way to limit the dump queries to a low-priority state in which preference is given to live connections? The idea being that the load from external sites is not affected by the effort of MySQL to do a full dump...

Answer (CA3LE): I have very large databases
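That answer is truncated; one common approach (not necessarily CA3LE's) is to run the dump at low CPU/IO priority and rate-limit its output. A sketch, assuming the pv utility is installed and dbname stands in for your database:

# --single-transaction avoids locking InnoDB tables for the duration;
# nice/ionice deprioritize the dump; pv -L caps throughput at 5 MB/s.
nice -n 19 ionice -c2 -n7 mysqldump --single-transaction --quick dbname \
  | pv -L 5m | gzip > dbname.sql.gz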

meteor: how can I back up my mongo database

Submitted by 為{幸葍}努か on 2019-11-27 17:01:27

How can I make a backup of my Meteor Mongo database? If I run meteor mongo, the mongodump command does not work inside the Meteor mongo shell.

Answer (kask): First you need to spin up Meteor. Then, if you run meteor mongo, you will get output something like this:

MongoDB shell version: 2.2.1
connecting to: 127.0.0.1:3001/meteor

The Meteor db host is 127.0.0.1, with a port of 3001. Exit the mongo shell and use mongodump from your terminal:

mongodump -h 127.0.0.1 --port 3001 -d meteor

Dumps will be located under the dump folder inside the folder where you executed the above command. You can import your db back with mongorestore.
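To complete that last step, a restore sketch using the same host and port that meteor mongo reported:

# Feed the dump back into Meteor's bundled MongoDB (meteor must be running).
mongorestore -h 127.0.0.1 --port 3001 -d meteor dump/meteor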

Tar a directory, but don't store full absolute paths in the archive

Submitted by 為{幸葍}努か on 2019-11-27 16:42:37

I have the following command in part of a backup shell script:

tar -cjf site1.bz2 /var/www/site1/

When I list the contents of the archive, I get:

tar -tf site1.bz2
var/www/site1/style.css
var/www/site1/index.html
var/www/site1/page2.html
var/www/site1/page3.html
var/www/site1/images/img1.png
var/www/site1/images/img2.png
var/www/site1/subdir/index.html

But I would like to remove the /var/www/site1 part from the directory and file names within the archive, in order to simplify extraction and avoid a useless constant directory structure. You never know; I might want to extract the backed-up websites in a different location.
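Two common fixes, sketched below: archive relative to the site directory with -C, or keep the full paths and strip them at extraction time (--strip-components is GNU tar):

# Store paths relative to /var/www/site1 (entries become ./style.css, ...):
tar -cjf site1.bz2 -C /var/www/site1 .

# Or leave the archive as-is and drop the 3 leading components on extract:
tar -xjf site1.bz2 --strip-components=3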

How can I backup a remote SQL Server database to a local drive?

Submitted by 我们两清 on 2019-11-27 16:38:00

I need to copy a database from a remote server to a local one. I tried to use SQL Server Management Studio, but it only backs up to a drive on the remote server. Some points:

- I do not have access to the remote server in a way that would let me copy files;
- I do not have access to set up a UNC path to my server.

Any ideas on how I can copy this database? Will I have to use 3rd-party tools?

Answer (Daniel Gill): In Microsoft SQL Server Management Studio you can right-click on the database you wish to back up and click Tasks -> Generate Scripts. This pops open a wizard where you can set options so that the generated script covers both the schema and the data.
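The rest of that answer is not shown, but the idea is to save the generated script locally and replay it against your local instance. A minimal sketch, with script.sql and the local server name as placeholders:

:: Replay the generated script (schema + data) on the local server.
sqlcmd -S localhost -E -i script.sql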

C#/SQL: backup and restore by copying and replacing database files? [closed]

Submitted by ≯℡__Kan透↙ on 2019-11-27 16:29:31

First of all, this is a knowledge share, not a question. I have faced some problems creating backups and restores of a database using the default BACKUP and RESTORE commands, so I developed my own approach: copy the database files and put them back when needed. I will share it in an answer to help others.

Solution: First of all, you have to know that before any copy or replacement of database files, you have to set the database to an offline state, and bring it back online after you finish.

1) Used Methods
// fullPath : the path for your database
// executablePath : the path for your
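The post's own C# breaks off above, but the offline-copy-online sequence it describes can also be sketched from the command line; the database name and file paths are placeholders:

:: Take the database offline, copy its files, bring it back online.
sqlcmd -E -Q "ALTER DATABASE [MyDb] SET OFFLINE WITH ROLLBACK IMMEDIATE"
copy "C:\Data\MyDb.mdf" "D:\Backup\MyDb.mdf"
copy "C:\Data\MyDb_log.ldf" "D:\Backup\MyDb_log.ldf"
sqlcmd -E -Q "ALTER DATABASE [MyDb] SET ONLINE"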