postgresql

How to ensure entries with non-overlapping time ranges?

橙三吉。 submitted on 2021-02-08 05:16:26
Question: I need to ensure my database only contains entries where two or more of its columns are unique. This can easily be achieved with a UNIQUE constraint over those columns. In my case, I need to forbid duplication only for overlapping time ranges. The table has valid_from and valid_to columns. In some cases one might first need to expire the active entry by setting valid_to = now, and then insert a new entry adjusted to valid_from = now and valid_to = infinity. I seem to be able to expire
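The standard PostgreSQL answer to this question is an exclusion constraint over the key columns plus the validity range. A minimal sketch, assuming a hypothetical table named prices keyed by product_id (the table and column names other than valid_from/valid_to are illustrative):

```sql
-- btree_gist lets scalar columns (here: an int) participate in a GiST
-- exclusion constraint alongside the range column.
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE prices (
    product_id int NOT NULL,
    valid_from timestamptz NOT NULL,
    valid_to   timestamptz NOT NULL DEFAULT 'infinity',
    -- Reject any two rows with the same product_id whose
    -- [valid_from, valid_to) ranges overlap (&&).
    EXCLUDE USING gist (
        product_id WITH =,
        tstzrange(valid_from, valid_to) WITH &&
    )
);
```

Unlike a UNIQUE constraint, this is enforced atomically by the database, so the expire-then-insert sequence cannot race another session into creating overlapping rows.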

Avoiding race condition between validation for uniqueness and insertion

你说的曾经没有我的故事 submitted on 2021-02-08 05:16:26
Question: I have a Django 1.7 beta 1 project with a standard user signup form. Conceptually, it makes sense for the form validation to fail if the username is already taken. However, the form validation and the saving of the successfully created user model are separate steps, so there's a race condition where the validation can pass but the actual user.save() can fail with an IntegrityError. I'm unclear on what happens if both the form validation and the user.save() step are wrapped in the same
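The usual way to close this race is to let the database's UNIQUE constraint be the authority and catch IntegrityError at save time, rather than trusting the earlier validation check. A minimal sketch of the pattern using sqlite3 from the standard library (create_user and the users table are illustrative, not Django API):

```python
import sqlite3

def create_user(conn, username):
    """Insert a user; raise ValueError if the name is already taken.

    The UNIQUE constraint makes check-and-insert atomic at the database
    level, so no other session can sneak in between validation and save.
    """
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO users (username) VALUES (?)", (username,)
            )
    except sqlite3.IntegrityError:
        raise ValueError(f"username {username!r} is already taken")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT UNIQUE NOT NULL)")
create_user(conn, "alice")
```

In Django terms the same shape applies: keep the form-level check for friendly error messages, but also wrap user.save() in try/except IntegrityError and convert the exception into a form error.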

PostgreSQL 9.5: Skip first two lines in the text file

佐手、 submitted on 2021-02-08 05:01:43
Question: I have a text file to import with the following format:

columA | columnB | columnC
-----------------------------------------
1      | A       | XYZ
2      | B       | XZ
3      | C       | YZ

I can skip the first line by using WITH CSV HEADER in the COPY command, but got stuck while skipping the second line.

Answer 1: If you're using COPY FROM 'filename', you could instead use COPY FROM PROGRAM to invoke some shell command which removes the header from the file and returns the rest. In Windows: COPY t FROM PROGRAM 'more +2 "C:\Path\To
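On Unix-like systems the same trick works with tail, whose -n +3 option starts output at line 3 and so drops the first two lines before COPY ever sees them. A sketch with illustrative table and file names:

```sql
-- tail -n +3 emits the file starting at line 3, skipping the
-- header row and the dashed separator row.
COPY t FROM PROGRAM 'tail -n +3 /path/to/data.txt'
    WITH (FORMAT text, DELIMITER '|');
```

Note that COPY FROM PROGRAM requires superuser (or pg_execute_server_program membership) because the command runs on the database server, and whitespace around the '|' delimiters would still need trimming afterwards.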

Undefined db connection with knex

拈花ヽ惹草 submitted on 2021-02-08 04:49:11
Question: I am executing the following script with node acl.js:

acl.js:

require('dotenv').config()
const _ = require('lodash');
const buckets = require('./buckets');
const knex = require('../src/config/db'); // HERE I am getting the ERROR

var downSql = 'DROP TABLE IF EXISTS "{{prefix}}{{meta}}";' +
    'DROP TABLE IF EXISTS "{{prefix}}{{resources}}";' +
    'DROP TABLE IF EXISTS "{{prefix}}{{parents}}";' +
    'DROP TABLE IF EXISTS "{{prefix}}{{users}}";' +
    'DROP TABLE IF EXISTS "{{prefix}}{{roles}}";' +
    'DROP TABLE IF
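An undefined connection at that require usually means ../src/config/db does not export an initialized knex instance. A sketch of what such a config module commonly looks like (all connection values are illustrative and assumed to come from the .env file that dotenv loads):

```javascript
// src/config/db.js -- export one configured knex instance so every
// require('../src/config/db') shares the same connection pool.
const knex = require('knex');

module.exports = knex({
  client: 'pg',
  connection: {
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
  },
});
```

If the module instead does something like module.exports.db = ..., the plain require shown in acl.js would yield an object without the query methods, which matches the "undefined" symptom.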

Running total… with a twist

萝らか妹 submitted on 2021-02-08 04:43:22
Question: I am trying to figure out the SQL to do a running total for a daily quota system. The system works like this: each day a user gets a quota of 2 "consumable things". If they use them all up, the next day they get another 2. If they somehow over-use them (use more than 2), the next day they still get 2 (they can't have a negative balance). If they don't use them all, the remainder carries to the next day (which can carry to the next, etc.). Here is a chart of data to use as validation. It's
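The "twist" is the clamp at zero: the balance is not a plain cumulative sum, because over-use in one day must not borrow from the next. The per-day recurrence can be sketched in a few lines of Python (function and variable names are illustrative):

```python
DAILY_QUOTA = 2

def running_balance(daily_usage):
    """End-of-day balances for a list of per-day usage counts.

    Each day adds DAILY_QUOTA, subtracts that day's usage, and clamps
    at zero so over-use never produces a negative carry-over.
    """
    balances = []
    carry = 0
    for used in daily_usage:
        carry = max(carry + DAILY_QUOTA - used, 0)
        balances.append(carry)
    return balances
```

Because of the max(..., 0) step the recurrence is order-dependent, which is why a simple SUM(...) OVER (ORDER BY day) window function cannot express it directly; in SQL this typically needs a recursive CTE or a procedural loop.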

Postgresql - Determining what records are removed from a cascading delete

爱⌒轻易说出口 submitted on 2021-02-08 04:01:37
Question: I have a fairly large PostgreSQL database that I've inherited. We have a job that runs roughly monthly that backs up the existing database and creates a new database with updated vendor data that we receive. Currently there is a small issue with it. Without going into the details of the table setup, what the data is modeling, etc., I believe it can be fixed with a simple delete query, as the tables are set up to use cascading deletes. However, it takes about 9 hours to generate this database from the
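One low-risk way to see what a cascading delete would remove is to run it inside a transaction, compare row counts on the suspected child tables, and roll back so nothing is actually lost. A sketch with hypothetical table names (vendors, child_table) and an illustrative id:

```sql
BEGIN;
SELECT count(*) FROM child_table;       -- count before
DELETE FROM vendors WHERE id = 42;      -- cascades to referencing tables
SELECT count(*) FROM child_table;       -- count after; the difference is
                                        -- the rows the cascade removed
ROLLBACK;                               -- undo everything
```

Repeating the before/after count for each table that references the parent (directly or transitively) maps out the full blast radius without committing the delete.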

PostgreSQL: Checking for NEW and OLD in a function for a trigger

喜你入骨 submitted on 2021-02-08 03:54:45
Question: I want to create a trigger which counts rows and updates a field in another table. My current solution works for INSERT statements but fails when I DELETE a row. My current function:

CREATE OR REPLACE FUNCTION update_table_count() RETURNS trigger AS $$
DECLARE
    updatecount INT;
BEGIN
    SELECT count(*) INTO updatecount FROM source_table WHERE id = new.id;
    UPDATE dest_table SET count = updatecount WHERE id = new.id;
    RETURN NEW;
END;
$$ LANGUAGE 'plpgsql';

The trigger is a pretty basic one, looking
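The DELETE case fails because NEW is not set for DELETE triggers (just as OLD is not set for INSERT). A sketch of a fix that branches on TG_OP, keeping the table names from the question's excerpt:

```sql
CREATE OR REPLACE FUNCTION update_table_count() RETURNS trigger AS $$
DECLARE
    rec_id   int;
    newcount int;
BEGIN
    -- NEW is unset on DELETE; OLD is unset on INSERT.
    IF TG_OP = 'DELETE' THEN
        rec_id := OLD.id;
    ELSE
        rec_id := NEW.id;
    END IF;

    SELECT count(*) INTO newcount FROM source_table WHERE id = rec_id;
    UPDATE dest_table SET count = newcount WHERE id = rec_id;

    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_update_count
AFTER INSERT OR UPDATE OR DELETE ON source_table
FOR EACH ROW EXECUTE PROCEDURE update_table_count();
```

Running the count in an AFTER trigger (rather than BEFORE) ensures the deleted or inserted row is already reflected in the count that gets written to dest_table.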
问题 I want to create a trigger which counts rows and updates a field in an other table. My current solution works for INSERT statements but failes when I DELETE a row. My current function: CREATE OR REPLACE FUNCTION update_table_count() RETURNS trigger AS $$ DECLARE updatecount INT; BEGIN Select count(*) into updatecount From source_table Where id = new.id; Update dest_table set count=updatecount Where id = new.id; RETURN NEW; END; $$ LANGUAGE 'plpgsql'; The trigger is a pretty basic one, looking