backup

Firestore new database - How do I backup

Submitted by 跟風遠走 on 2019-11-28 17:03:06
Question: Does the Google Firestore database service provide a backup? If so, how do I back up the database, and how do I restore it in case of an error?

Answer 1: Update: it is now possible to back up and restore Firebase Firestore using the Cloud Firestore managed export and import service. You do it by: creating a Cloud Storage bucket for your project (make sure it is a Regional bucket in us-central1 or 2, or a Multi-Regional bucket), then setting up gcloud for your project using gcloud config set project [PROJECT_ID] EXPORT
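Below is a minimal sketch of the export and import commands for the managed service; [PROJECT_ID] and the bucket name are placeholders, the export prefix is illustrative, and older gcloud releases may require the beta component (gcloud beta firestore ...).

```sh
# Point gcloud at the project (placeholder ID)
gcloud config set project [PROJECT_ID]

# Export the entire database into the Cloud Storage bucket created for backups
gcloud firestore export gs://my-firestore-backups

# Import (restore) from a previously written export prefix -- path is illustrative
gcloud firestore import gs://my-firestore-backups/2019-11-28T17:03:06_12345
```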

Restore database backup over the network

Submitted by 感情迁移 on 2019-11-28 16:11:54
Question: How do you restore a database backup using SQL Server 2005 over the network? I recall doing this before, but there was something odd about the way you had to do it.

Answer: The database often runs as a service under an account with no network access. If that is the case, you won't be able to restore directly over the network: either the backup needs to be copied to the local machine, or the database service needs to run as a user with the proper network access. You have a few options for using a network file as a backup source: map the network drive/path hosting the file under the SAME user as MS-SQL
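Once the service account can read the share, a restore from a UNC path is plain T-SQL; a minimal sketch follows, with the server, share, database, and file names all illustrative.

```sql
-- Illustrative names; the SQL Server service account must have read access to the share.
RESTORE DATABASE [MyDatabase]
FROM DISK = N'\\backupserver\backups\MyDatabase.bak'
WITH MOVE N'MyDatabase_Data' TO N'D:\Data\MyDatabase.mdf',
     MOVE N'MyDatabase_Log'  TO N'E:\Logs\MyDatabase_log.ldf',
     REPLACE,       -- overwrite the existing database, if any
     STATS = 10;    -- report progress every 10 percent
```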

Postgresql 9.2 pg_dump version mismatch

Submitted by 不问归期 on 2019-11-28 15:29:40
Question: I am trying to dump a PostgreSQL database using the pg_dump tool:

$ pg_dump books > books.out

However, I am getting this error:

pg_dump: server version: 9.2.1; pg_dump version: 9.1.6
pg_dump: aborting because of server version mismatch

The --ignore-version option is now deprecated and really would not be a solution to my issue even if it had worked. How can I upgrade pg_dump to resolve this issue?

Answer (francs): You can either install PostgreSQL 9.2.1 on the pg_dump client machine or just copy the $PGHOME from the PostgreSQL server machine to the client machine. Note that there is no need to initdb
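A minimal sketch of the first option on a Debian/Ubuntu client, assuming the PGDG apt repository is already configured; the package name and install paths are typical for that platform but may differ on yours.

```sh
# Install the 9.2 client tools alongside the existing 9.1 ones
sudo apt-get install postgresql-client-9.2

# Call the 9.2 pg_dump explicitly...
/usr/lib/postgresql/9.2/bin/pg_dump books > books.out

# ...or put the 9.2 binaries first on PATH for this shell
export PATH=/usr/lib/postgresql/9.2/bin:$PATH
pg_dump --version   # should now report 9.2.x
```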

Is there a SQL script that I can use to determine the progress of a SQL Server backup or restore process?

Submitted by 放肆的年华 on 2019-11-28 15:13:07
Question: When I back up or restore a database using MS SQL Server Management Studio, I get a visual indication of how far the process has progressed, and thus how much longer I still need to wait for it to finish. If I kick off the backup or restore with a script, is there a way to monitor the progress, or do I just sit back and wait for it to finish (hoping that nothing has gone wrong)? Edited: my need is specifically to be able to monitor the backup or restore progress from a session completely separate from the one where the backup or restore was initiated.

Answer: Yes. If you have installed sp_who2k5 into your master
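The excerpt cuts off at the sp_who2k5 suggestion; a common alternative (not the answer's exact method) is to query the sys.dm_exec_requests DMV from any other session, as sketched below.

```sql
-- Run from any session; the DMV reports progress for long-running commands.
SELECT
    r.session_id,
    r.command,                                         -- e.g. BACKUP DATABASE, RESTORE DATABASE
    r.percent_complete,                                -- progress reported by the engine
    r.estimated_completion_time / 60000 AS est_minutes_left,
    r.start_time
FROM sys.dm_exec_requests AS r
WHERE r.command IN ('BACKUP DATABASE', 'RESTORE DATABASE', 'BACKUP LOG', 'RESTORE LOG');
```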

DBCC SHRINKFILE on log file not reducing size even after BACKUP LOG TO DISK

Submitted by 自作多情 on 2019-11-28 13:57:44
Question: I've got a database, [My DB], with the following characteristics: SQL Server 2008, MDF size 30 GB, LDF size 67 GB. I wanted to shrink the log file as much as possible, and so I started my quest to figure out how to do this. Caveat: I am not a DBA (or even approaching a DBA) and have been progressing by feel through this quest. First, I just went into SSMS, DB properties, Files, and edited the Initial Size (MB) value to 10. That reduced the log file to 62 GB (not exactly the 10 MB that I entered). So,
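The excerpt cuts off before the answers; as a point of reference, the usual T-SQL sequence is to back up the log first and then shrink the file, as in the sketch below (the logical log file name and backup path are assumptions).

```sql
-- Find the logical name of the log file (needed by DBCC SHRINKFILE)
USE [My DB];
SELECT name, type_desc, size * 8 / 1024 AS size_mb FROM sys.database_files;

-- Back up the log so the inactive portion can be reused...
BACKUP LOG [My DB] TO DISK = N'D:\Backups\MyDB_log.trn';

-- ...then shrink the log file toward a target size in MB (logical name assumed)
DBCC SHRINKFILE (N'My DB_log', 1024);
```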

Moving a Subversion repository to another server

Submitted by 馋奶兔 on 2019-11-28 13:42:24
Question: I have a server that hosts my Subversion code base. That server is currently a Windows Server 2003 box, and my IT administrator wants to update it to Windows Server 2008. This means that I'm going to need to move my Subversion repository while the server gets rebuilt, and I was wondering what the best practices are for moving the repository to a new server. Looking online, it seems the recommended way is to use: svnadmin dump /path/to/repository > repository-name.dmp And then use:
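The excerpt cuts off at the load step; for reference, a minimal sketch of the full dump-and-load cycle is shown below, with the paths illustrative.

```sh
# On the old server: dump the repository to a portable file
svnadmin dump /path/to/repository > repository-name.dmp

# On the new server: create an empty repository and load the dump into it
svnadmin create /path/to/new-repository
svnadmin load /path/to/new-repository < repository-name.dmp
```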

How do you organise multiple git repositories, so that all of them are backed up together?

Submitted by 与世无争的帅哥 on 2019-11-28 13:15:56
Question: With SVN, I had a single big repository that I kept on a server and checked out on a few machines. This was a pretty good backup system and allowed me to work easily on any of the machines: I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing. Now I have a bunch of git repositories for various projects, several of which are on GitHub. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like
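The excerpt cuts off before any answer; one common approach (not taken from this excerpt) is to keep bare mirror clones of every repository under a single directory that gets backed up as a whole. The URLs and paths in the sketch below are illustrative.

```sh
# Mirror every repository into one directory that the normal backup covers
mkdir -p ~/git-backups && cd ~/git-backups
git clone --mirror git@github.com:user/project-a.git
git clone --mirror git@github.com:user/project-b.git

# Refresh all mirrors in one pass before each backup run
for repo in *.git; do
    (cd "$repo" && git remote update)
done
```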

Get all SharedPreferences names and all their keys?

Submitted by 谁说胖子不能爱 on 2019-11-28 12:16:07
Question: I'm making a backup program that saves the phone's SharedPreferences data into a file with my own structure, but I don't know how to list them all, which I need. For example, if two programs saved their SharedPreferences under the names "Program A" and "Program B", I need to obtain a String array containing these two names. Then, using getSharedPreferences with "Program A", I need to get all the keys that program has saved. Is it really possible? EDIT 1: I DON'T know which programs/activities are on the phone. I want to get ALL the keys that every program has saved. It is just like backing up all your phone data, but only
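The excerpt cuts off before any answer. As a point of reference, a minimal Java sketch for the calling app's own preferences is shown below: the XML file names under the shared_prefs directory are the names you pass to getSharedPreferences, and getAll() returns every key. Note that other apps' preference files are not readable without root, so this does not cover arbitrary programs on the phone; the class and method names here are illustrative.

```java
import android.content.Context;
import android.content.SharedPreferences;
import java.io.File;
import java.util.Map;

public class PrefsLister {
    /** Lists every SharedPreferences file of the calling app and dumps its keys. */
    public static void dumpOwnPrefs(Context context) {
        File prefsDir = new File(context.getApplicationInfo().dataDir, "shared_prefs");
        File[] files = prefsDir.listFiles();
        if (files == null) return;
        for (File f : files) {
            // "Program A.xml" on disk corresponds to getSharedPreferences("Program A", ...)
            String name = f.getName().replaceFirst("\\.xml$", "");
            SharedPreferences prefs = context.getSharedPreferences(name, Context.MODE_PRIVATE);
            for (Map.Entry<String, ?> entry : prefs.getAll().entrySet()) {
                System.out.println(name + ": " + entry.getKey() + " = " + entry.getValue());
            }
        }
    }
}
```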

Backup neo4j community edition offline in unix: mac or linux

Submitted by 半腔热情 on 2019-11-28 09:20:25
Question: Previously I had a problem when making a 'backup', as shown in this question, where I got an error when trying to restore the database because I made the copy while the database was running. So I did an experiment with a new database on another computer (this time with Ubuntu). I tried this: I created some nodes and relationships, very few, around 10 (the matrix example). Then I stopped the neo4j service. I copied the data folder that contains graph.db to another location. After that I deleted the graph.db folder and started neo4j. It automatically created a new graph.db folder and the database runs as new
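For reference, a minimal sketch of this stop/copy/restore cycle on Linux follows; the data path and service commands are assumptions for a typical package install of the Community Edition (which has no online backup), so adjust them to your layout.

```sh
# Offline copy: stop the database so the store files are consistent
sudo service neo4j stop
sudo cp -a /var/lib/neo4j/data/graph.db /backups/graph.db.bak
sudo service neo4j start

# Restore: stop, put the copy back, fix ownership, start again
sudo service neo4j stop
sudo rm -rf /var/lib/neo4j/data/graph.db
sudo cp -a /backups/graph.db.bak /var/lib/neo4j/data/graph.db
sudo chown -R neo4j:neo4j /var/lib/neo4j/data/graph.db
sudo service neo4j start
```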

Is copying /var/lib/mysql a good alternative to mysqldump?

Submitted by 我的未来我决定 on 2019-11-28 07:12:58
Question: Since I'm making a full backup of my entire Debian system, I was wondering whether keeping a copy of the /var/lib/mysql directory is a viable alternative to dumping tables with mysqldump. Is all the needed information contained in that directory? Can single tables be imported into another MySQL instance? Can there be problems restoring those files on a (probably slightly) different MySQL server version?

Answer: Yes. Yes, if the table uses the MyISAM (default) engine, but not if it uses InnoDB. Probably not, and if there are problems you just need to execute mysql_upgrade to fix them. To avoid getting databases in an inconsistent
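A minimal sketch of a consistent file-level copy, plus the logical-dump alternative; the service name and paths are typical for Debian but are assumptions here.

```sh
# Stop the server first so no files are copied mid-write
sudo service mysql stop
sudo cp -a /var/lib/mysql /backups/mysql-$(date +%F)
sudo service mysql start

# Logical dump alternative: portable across versions and storage engines
mysqldump --all-databases --single-transaction -u root -p > all-databases.sql
```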