database-backups

Back up a SQL Server database from a Windows Forms application with a button click [closed]

Submitted by 夙愿已清 on 2019-12-04 21:22:42
Closed. This question is off-topic and is not accepting answers. Closed 6 years ago. I need to back up a database by clicking a button on the Windows Forms application. I'm developing it in C# on Visual Studio 2012. On the Microsoft site, I learned to back up using Transact-SQL, and I tried it from the Transact-SQL editor in Visual Studio. Here is the T-SQL I used: USE TestDB; GO BACKUP DATABASE TestDB TO DISK = 'E:\aa.Bak' WITH FORMAT, MEDIANAME = 'Z_SQLServerBackups', NAME = 'Full Backup of
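To run that T-SQL from a button's click handler, the application can build the statement and hand it to its database driver. A minimal Python sketch of the statement-building step (the function name and the basic bracket/quote escaping are illustrative, not from the question):

```python
def build_backup_sql(db_name: str, disk_path: str,
                     media_name: str, backup_name: str) -> str:
    """Build a full-backup T-SQL statement like the one in the question.

    Identifiers are wrapped in brackets and string literals have single
    quotes doubled, as a basic escaping precaution.
    """
    def lit(s: str) -> str:
        return "'" + s.replace("'", "''") + "'"

    return (
        f"BACKUP DATABASE [{db_name}] TO DISK = {lit(disk_path)} "
        f"WITH FORMAT, MEDIANAME = {lit(media_name)}, NAME = {lit(backup_name)};"
    )

sql = build_backup_sql("TestDB", r"E:\aa.Bak",
                       "Z_SQLServerBackups", "Full Backup of TestDB")
print(sql)
```

The resulting string can then be executed over any SQL Server connection (e.g. a `SqlCommand` in C#, or `pyodbc` from Python) inside the click handler.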

pg_dump: too many command line arguments

Submitted by 拜拜、爱过 on 2019-12-04 15:39:07
Question: What is wrong with this command: pg_dump -U postgres -W admin --disable-triggers -a -t employees -f D:\ddd.txt postgres It gives the error "too many command-line arguments". Answer 1: It looks like it's the -W option; there is no value to go with that option. -W, --password force password prompt (should happen automatically) If you want to run the command without typing a password, use a .pgpass file: http://www.postgresql.org/docs/9.1/static/libpq-pgpass.html Answer 2: For posterity, note that pg
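The fix is simply that -W takes no argument, so the stray "admin" after it becomes an extra positional argument. A small sketch that assembles the corrected argv (the helper name is illustrative):

```python
def pg_dump_args(user, table, outfile, dbname,
                 prompt_password=False, data_only=True):
    """Assemble a pg_dump argv list.

    -W is a bare flag and takes no value; giving it one ('admin' in the
    question) is what triggers 'too many command-line arguments'.
    """
    args = ["pg_dump", "-U", user]
    if prompt_password:
        args.append("-W")  # no value follows
    if data_only:
        args.append("-a")
    args += ["--disable-triggers", "-t", table, "-f", outfile, dbname]
    return args

print(pg_dump_args("postgres", "employees", r"D:\ddd.txt", "postgres",
                   prompt_password=True))
```

Such a list can be passed straight to `subprocess.run`, which avoids shell-quoting surprises as well.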

What's the fastest way to import a large mysql database backup?

Submitted by 守給你的承諾、 on 2019-12-04 10:03:55
What's the fastest way to export/import a MySQL database using InnoDB tables? I have a production database that I periodically need to download to my development machine to debug customer issues. The way we currently do this is to download our regular database backups, which are generated using "mysqldump -B dbname" and then gzipped. We then import them using "gunzip -c backup.gz | mysql -u root". From what I can tell from reading "mysqldump --help", mysqldump runs with --opt by default, which looks like it turns on a bunch of the things I can think of that would make imports faster, such as
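Beyond --opt on the dump side, a common import-side trick is to disable integrity checks and autocommit for the duration of the load. A sketch that wraps a stream of dump statements in those session settings (the wrapper itself is illustrative; the SET statements are standard MySQL):

```python
def wrap_import(sql_lines):
    """Yield the dump wrapped in session settings that commonly speed up
    an InnoDB import: one big transaction, no per-row unique/FK checks."""
    yield "SET autocommit=0;"
    yield "SET unique_checks=0;"
    yield "SET foreign_key_checks=0;"
    yield from sql_lines            # the dump itself
    yield "COMMIT;"
    yield "SET unique_checks=1;"
    yield "SET foreign_key_checks=1;"

for line in wrap_import(["INSERT INTO t VALUES (1);"]):
    print(line)
```

The same effect can be had without any tooling by prepending/appending those SET statements to the gunzipped stream before piping it into mysql.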

How do I do incremental backups for SQLite?

Submitted by 天大地大妈咪最大 on 2019-12-04 08:07:39
I have a program that saves logging data to an SQLite3 database. I would like to back up the database while the program is still running. I have accomplished this with the SQLite Online Backup API ( http://www.sqlite.org/backup.html ), and it works fine; however, it stalls the process until the backup is complete. Does anyone know of a way to do incremental backups in SQLite? I would prefer to back up only new data, not the entire database, each time I run the backup. I don't think there is a general-purpose solution to your problem. If your logging data is timestamped and reasonably simple
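This doesn't make the backup incremental (each run still copies the whole database), but the stalling part has a direct remedy: the same online-backup API accepts a page-count per step, so the copy proceeds in short bursts instead of one long blocking pass. Python's sqlite3 module exposes this as `Connection.backup`:

```python
import sqlite3

# In-memory databases stand in for the real log file and backup file.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE log (ts INTEGER, msg TEXT)")
src.executemany("INSERT INTO log VALUES (?, ?)",
                [(i, "event") for i in range(1000)])
src.commit()

dst = sqlite3.connect(":memory:")
# Copy 100 pages at a time, sleeping briefly between batches, so the
# logging writer is only blocked for short intervals.
src.backup(dst, pages=100, sleep=0.01)

count = dst.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(count)  # 1000
```

Note that if the source is written to between steps, the backup restarts from the beginning, so under heavy write load the stepped copy can take longer overall.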

pg_dump vs pg_dumpall? Which one to use for database backups?

Submitted by 非 Y 不嫁゛ on 2019-12-04 07:48:49
Question: I tried pg_dump, and then on a separate machine I tried to import the SQL and populate the database. I see: CREATE TABLE ERROR: role "prod" does not exist CREATE TABLE ERROR: role "prod" does not exist CREATE TABLE ERROR: role "prod" does not exist CREATE TABLE ERROR: role "prod" does not exist ALTER TABLE ALTER TABLE ALTER TABLE ALTER TABLE ALTER TABLE ALTER TABLE ALTER TABLE WARNING: no privileges could be revoked for "public" REVOKE ERROR: role "postgres" does not exist ERROR: role "postgres
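Those errors come from roles that exist on the source cluster but not the target: pg_dump captures a single database but never roles or tablespaces, which are cluster-wide and only captured by pg_dumpall. A common split is to dump globals with pg_dumpall --globals-only and the data with pg_dump. A sketch that assembles both commands (file names are hypothetical):

```python
def backup_commands(dbname, dump_file, globals_file):
    """pg_dumpall --globals-only captures roles and tablespaces (the
    missing 'prod'/'postgres' roles in the errors above), while pg_dump
    captures the database itself in custom format."""
    return [
        ["pg_dumpall", "--globals-only", "-f", globals_file],
        ["pg_dump", "-Fc", dbname, "-f", dump_file],
    ]

for cmd in backup_commands("prod_db", "prod_db.dump", "globals.sql"):
    print(" ".join(cmd))
```

Restoring the globals file first (via psql), then the database dump (via pg_restore), avoids the missing-role errors.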

PostgreSQL - back up a database and restore under a different owner?

Submitted by ♀尐吖头ヾ on 2019-12-04 07:46:31
Question: I backed up a database on a different server, one that has a different role than I need, with this command: pg_dump -Fc db_name -f db_name.dump Then I copied the backup to another server where I need to restore the database, but the owner that was used for that database does not exist there. Say the database has owner owner1, but on the other server I only have owner2, and I need to restore the database and change the owner. Here is what I did on the other server when restoring: createdb -p 5433 -T template0 db_name pg
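Since the dump is in custom format (-Fc), pg_restore can sidestep the missing-owner problem directly: --no-owner skips the ownership-setting commands, and --role switches to the role that should end up owning the objects. A sketch assembling that command:

```python
def restore_command(dump_file, dbname, new_owner, port=5433):
    """pg_restore invocation for restoring under a different owner:
    --no-owner skips the original ALTER OWNER commands, and --role
    makes new_owner the effective owner of everything created."""
    return ["pg_restore", "-p", str(port), "--no-owner",
            "--role", new_owner, "-d", dbname, dump_file]

print(" ".join(restore_command("db_name.dump", "db_name", "owner2")))
```

Creating the database first with createdb -T template0 (as in the question) and then running this restore into it is the usual sequence.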

MongoDB backup plan

Submitted by 旧街凉风 on 2019-12-04 07:43:56
Question: I want to switch from MySQL to MongoDB, but large data loss (more than 1 hour) is not acceptable for me. I need three backup plans: Hourly backup plan. Data is flushed to disk every X minutes, so if something goes wrong with the server, I can be sure that after a reboot it will have all data from at most an hour ago. Can I configure this? Daily backup plan. Data is synced to a backup disk every day, so even if the server explodes, I can recover yesterday's data within a few hours. Should I use fsync,
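For the daily plan, the standard tool is mongodump; on a replica set, its --oplog flag also captures writes made while the dump runs, giving a consistent point-in-time snapshot (the hourly durability concern is normally covered by MongoDB's journal rather than by backups). A sketch assembling a daily dump command (host, port, and output directory are hypothetical):

```python
def mongodump_command(out_dir, host="localhost", port=27017, oplog=True):
    """Daily-backup command sketch: --oplog records operations that
    happen during the dump so the snapshot is consistent (replica sets
    only)."""
    cmd = ["mongodump", "--host", host, "--port", str(port),
           "--out", out_dir]
    if oplog:
        cmd.append("--oplog")
    return cmd

print(" ".join(mongodump_command("/backups/daily")))
```

The matching restore is mongorestore with --oplogReplay; scheduling the dump is then just a cron entry.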

MySQL daily backup from one table to another

Submitted by ♀尐吖头ヾ on 2019-12-04 05:57:06
Question: If I have two tables with the same definition, how would I back up data from one to the other daily? Can I use MySQL Administrator to perform something like this: at 12:00 am every day, copy all the rows from main_table to backup_table. An incremental backup would be preferable, since some changes will be made to the records in backup_table and I don't want a new backup to wipe out those changes. Thanks Answer 1: Let's start with this: copying data from one table to another on the same server IS NOT a backup. Now
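Caveat aside, the "copy only new rows, keep my edits" behavior maps to MySQL's INSERT IGNORE ... SELECT, scheduled with a CREATE EVENT. The same idea can be demonstrated locally with SQLite's equivalent INSERT OR IGNORE (the table contents here are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE main_table (id INTEGER PRIMARY KEY, val TEXT)")
con.execute("CREATE TABLE backup_table (id INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO main_table VALUES (?, ?)", [(1, "a"), (2, "b")])

# First nightly copy.
con.execute("INSERT OR IGNORE INTO backup_table SELECT * FROM main_table")
# An edit made directly in the backup table...
con.execute("UPDATE backup_table SET val = 'edited' WHERE id = 1")
# ...survives the next copy, which only adds rows with new ids.
con.execute("INSERT INTO main_table VALUES (3, 'c')")
con.execute("INSERT OR IGNORE INTO backup_table SELECT * FROM main_table")

rows = con.execute("SELECT val FROM backup_table ORDER BY id").fetchall()
print(rows)  # [('edited',), ('b',), ('c',)]
```

In MySQL the nightly schedule would be something like CREATE EVENT ... ON SCHEDULE EVERY 1 DAY DO INSERT IGNORE INTO backup_table SELECT * FROM main_table; (requires the event scheduler to be enabled).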

Skip or ignore definers in Mysqldump

Submitted by 荒凉一梦 on 2019-12-04 05:06:29
I was wondering whether I can prevent mysqldump from inserting commands like /*!50017 DEFINER=`root`@`localhost`*/, or whether I have to strip them afterwards with sed, for example. Thanks! This issue has been around since 2006 with no sign of ever being fixed. I have, however, piped the dump through grep (Linux only) to trim out the definer lines before writing the dump file: mysqldump -u dbuser -p dbname | grep -v 'SQL SECURITY DEFINER' > dump.sql A bit of a mouthful (or keyboardful?) but I think it's the only way. Source: https://stackoverflow.com/questions/11901344/skip-or-ignore-definers-in-mysqldump
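One caution about grep -v: it drops whole lines, so if a DEFINER clause shares a line with part of the CREATE statement, that part is lost too. Removing just the clause (the sed approach) is safer; a portable sketch of that filter in Python:

```python
import re

# Matches versioned comments like /*!50017 DEFINER=`root`@`localhost`*/
DEFINER = re.compile(r"/\*!\d+ DEFINER=`[^`]*`@`[^`]*`\*/")

def strip_definers(dump_line: str) -> str:
    """Remove DEFINER comments from a mysqldump line while keeping the
    rest of the line intact, unlike grep -v which drops whole lines."""
    return DEFINER.sub("", dump_line)

print(strip_definers("/*!50017 DEFINER=`root`@`localhost`*/ TRIGGER t ..."))
```

Applied line by line over the dump stream, this is a drop-in replacement for the grep pipeline.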

Restore PostgreSQL db from backup without foreign key constraint issue

Submitted by 懵懂的女人 on 2019-12-04 03:42:26
I have a PostgreSQL db with about 85+ tables. I make backups regularly using pg_dump (via phpPgAdmin) in copy mode, and the backup file is almost 10-12 MB. The problem I am facing is that whenever I try to restore the database, foreign key constraint problems occur. The scenario is as follows: there are two tables, 1) users and 2) zones. I have stored the id of the zone in the users table to identify the user's zone and have set it as a foreign key. When I take the db dump, the entries for table zones come only after those of table users. I think it's due to the first letter of table
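If the restore order can't be fixed in the dump itself, one remedy is to load referenced tables first: ordering tables by their foreign-key dependencies is a topological sort, which the standard library provides. A sketch (the two-table dependency map mirrors the question; a real script would read the dependencies from the catalog):

```python
from graphlib import TopologicalSorter

def restore_order(fk_deps):
    """fk_deps maps each table to the tables it references; a
    topological order guarantees referenced tables (zones) are loaded
    before the tables pointing at them (users)."""
    return list(TopologicalSorter(fk_deps).static_order())

print(restore_order({"users": {"zones"}, "zones": set()}))
# ['zones', 'users']
```

Alternatives that avoid reordering entirely: restore with triggers disabled (pg_restore --disable-triggers, superuser required) or wrap the COPY statements in SET session_replication_role = 'replica'.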