postgresql-9.3

Creating a trigger for child table insertion returns confusing error

本秂侑毒 Submitted on 2019-11-29 12:19:05

I am trying to write a trigger function that inserts values into separate child tables, but I am getting an error I have not seen before. Here is an example setup:

```sql
-- create initial table
CREATE TABLE public.testlog(
    id serial not null,
    col1 integer,
    col2 integer,
    col3 integer,
    name text
);

-- create child table
CREATE TABLE public.testlog_a (primary key(id)) INHERITS (public.testlog);

-- make trigger function for insert
CREATE OR REPLACE FUNCTION public.test_log() RETURNS trigger AS $$
DECLARE
    qry text;
BEGIN
    qry := 'INSERT INTO public.testlog_' || NEW.name || ' SELECT ($1).*';
    EXECUTE
```
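The question is cut off above, but the pattern it starts can be sketched to completion. A hedged reconstruction of how such a redirect trigger is usually finished: pass `NEW` to the dynamic `INSERT` with `USING`, and return `NULL` so the row is not also stored in the parent. Names follow the question's setup; the trigger name is mine.

```sql
CREATE OR REPLACE FUNCTION public.test_log() RETURNS trigger AS $$
BEGIN
    -- format() with %I safely quotes the child table name;
    -- USING NEW supplies the row as $1 to the dynamic statement
    EXECUTE format('INSERT INTO public.%I SELECT ($1).*',
                   'testlog_' || NEW.name)
    USING NEW;
    RETURN NULL;  -- cancel the insert into the parent table itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER testlog_redirect
BEFORE INSERT ON public.testlog
FOR EACH ROW EXECUTE PROCEDURE public.test_log();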

Equivalent to exclusion constraint composed of integer and range

淺唱寂寞╮ Submitted on 2019-11-29 12:07:39

I need to have something equivalent to this exclusion constraint:

```sql
drop table if exists t;
create table t (
    i int,
    tsr tstzrange,
    exclude using gist (i with =, tsr with &&)
);
```

```
ERROR:  data type integer has no default operator class for access method "gist"
HINT:  You must specify an operator class for the index or define a default operator class for the data type.
```

I guess the problem is obvious from the error message. How to do it?

Erwin Brandstetter: You need to install the additional module btree_gist to make it work. The module installs the missing operator class. Details in this related answer:
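The fix the answer describes is a single statement, run once per database by a role with the required privilege; after it, the original DDL from the question works unchanged:

```sql
-- btree_gist supplies GiST operator classes for plain scalar types
-- such as integer, making the mixed exclusion constraint valid
CREATE EXTENSION IF NOT EXISTS btree_gist;

create table t (
    i int,
    tsr tstzrange,
    exclude using gist (i with =, tsr with &&)
);
```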

Django Query sort case-insensitive using Model method with PostgreSQL

若如初见. Submitted on 2019-11-29 10:58:14

I'm really new to Django, Python and Postgres... I can't seem to find the answer on how to make order_by case-insensitive while querying through the Model; I can only find it for direct SQL queries.

Model:

```python
@classmethod
def get_channel_list(cls, account):
    return cls.objects.filter(accountid=account).order_by('-name').values_list('name', 'channelid')
```

Data set, in the order it is currently returned: test b test a test channel a test channel a test 2 a b test Test Channel Test 3 Test 3 Test 2 Channel

Any help would be much appreciated.

Answer, using QuerySet.extra(select=...):

```python
@classmethod
def get
```
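The answer above is cut off, and `extra()` is discouraged in modern Django anyway. On Django 1.8+ the same case-insensitive descending sort can be expressed with a database function, no raw SQL needed; a sketch reusing the question's model (field and method names are the asker's):

```python
from django.db.models.functions import Lower

@classmethod
def get_channel_list(cls, account):
    # Wrap the column in LOWER() and order by it descending,
    # which sorts case-insensitively at the database level
    return (cls.objects
            .filter(accountid=account)
            .order_by(Lower('name').desc())
            .values_list('name', 'channelid'))
```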

“extra data after last expected column” while trying to import a csv file into postgresql

风流意气都作罢 Submitted on 2019-11-29 10:36:39

Question: I am trying to copy the content of a CSV file into my PostgreSQL database and I get the error "extra data after last expected column". The content of my CSV is:

```
agency_id,agency_name,agency_url,agency_timezone,agency_lang,agency_phone
100,RATP (100),http://www.ratp.fr/,CET,,
```

and my COPY command is:

```sql
COPY agency (agency_name, agency_url, agency_timezone)
FROM 'myFile.txt' CSV HEADER DELIMITER ',';
```

Here is my table:

```sql
CREATE TABLE agency (
    agency_id character varying,
    agency_name character varying NOT
```
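The error follows directly from the mismatch: each CSV row has six fields, but the COPY column list names only three, so PostgreSQL treats the remaining fields as "extra data". A hedged sketch of the usual fix, assuming the table (truncated above) has a column for every CSV field:

```sql
-- List every column that appears in the file, in file order
COPY agency (agency_id, agency_name, agency_url,
             agency_timezone, agency_lang, agency_phone)
FROM 'myFile.txt' WITH (FORMAT csv, HEADER true, DELIMITER ',');
```

Alternatively, keep the three-column COPY and feed it a file containing only those three fields.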

Bad optimization/planning on Postgres window-based queries (partition by(, group by?)) - 1000x speedup

假装没事ソ Submitted on 2019-11-29 10:17:38

Question: We are running Postgres 9.3.5 (07/2014). We have quite a complex data-warehouse/reporting setup in place (ETL, materialized views, indexing, aggregations, analytical functions, ...). What I discovered just now may be difficult to implement in the optimizer (?), but it makes a huge difference in performance (sample code only, closely resembling our query, to strip unnecessary complexity):

```sql
create view foo as
select
    sum(s.plan) over w_pyl as pyl_plan,  -- money planned to spend in this pot
```
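The query is cut off above, but the class of problem it describes can be illustrated generically (a hedged sketch, not the asker's actual query; `spendings`, `pot` and `year` are invented names): the planner cannot push a predicate below a window function, so filtering a window-bearing view computes the window over the entire table first. Filtering before the window runs is the usual source of such 1000x speedups:

```sql
-- Slow: the WHERE is applied only after the window scans all rows
select * from foo where year = 2014;

-- Fast: restrict the rows first, then compute the window
select s.*,
       sum(s.plan) over (partition by s.pot) as pyl_plan
from spendings s
where s.year = 2014;
```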

How to insert a updatable record with JSON column in PostgreSQL using JOOQ?

烈酒焚心 Submitted on 2019-11-29 09:53:31

I followed the answer in "Is it possible to write a data type Converter to handle postgres JSON columns?" to implement the node-object converter. Then I tried to use an updatable record to insert a record, and got this exception:

```
org.jooq.exception.SQLDialectNotSupportedException: Type class org.postgresql.util.PGobject is not supported in dialect POSTGRES
```

How can I solve this? Here is my code:

```java
TableRecord r = create.newRecord(TABLE);
ObjectNode node = JsonNodeFactory.instance.objectNode();
r.setValue(TABLE.JSON_FIELD, node, new JsonObjectConverter());
r.store();
```

Lukas Eder: Since jOOQ 3.5, you
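The answer is cut off above. As a hedged reconstruction of the approach jOOQ 3.5+ enables: implement `org.jooq.Binding` rather than only a `Converter`, so jOOQ renders the bind variable as a string with an explicit `::json` cast instead of passing a `PGobject`. The class name is illustrative, and `JsonNodeConverter` stands in for the converter from the linked answer:

```java
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.sql.Types;
import java.util.Objects;

import org.jooq.*;
import org.jooq.impl.DSL;

import com.fasterxml.jackson.databind.JsonNode;

// Binds Jackson's JsonNode to a PostgreSQL json column
public class PostgresJsonNodeBinding implements Binding<Object, JsonNode> {

    @Override
    public Converter<Object, JsonNode> converter() {
        return new JsonNodeConverter(); // the Converter from the linked answer
    }

    @Override
    public void sql(BindingSQLContext<JsonNode> ctx) throws SQLException {
        // render ?::json instead of a plain ? so Postgres accepts the string
        ctx.render().visit(DSL.val(ctx.convert(converter()).value())).sql("::json");
    }

    @Override
    public void set(BindingSetStatementContext<JsonNode> ctx) throws SQLException {
        ctx.statement().setString(ctx.index(),
            Objects.toString(ctx.convert(converter()).value(), null));
    }

    @Override
    public void get(BindingGetResultSetContext<JsonNode> ctx) throws SQLException {
        ctx.convert(converter()).value(ctx.resultSet().getString(ctx.index()));
    }

    @Override
    public void get(BindingGetStatementContext<JsonNode> ctx) throws SQLException {
        ctx.convert(converter()).value(ctx.statement().getString(ctx.index()));
    }

    @Override
    public void register(BindingRegisterContext<JsonNode> ctx) throws SQLException {
        ctx.statement().registerOutParameter(ctx.index(), Types.VARCHAR);
    }

    @Override
    public void set(BindingSetSQLOutputContext<JsonNode> ctx) throws SQLException {
        throw new SQLFeatureNotSupportedException();
    }

    @Override
    public void get(BindingGetSQLOutputContext<JsonNode> ctx) throws SQLException {
        throw new SQLFeatureNotSupportedException();
    }
}
```

The binding is then attached to the column via the code generator's `forcedTypes` configuration, so the generated field uses `JsonNode` directly and `r.store()` works without a converter argument.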

Why can I create a table with PRIMARY KEY on a nullable column?

蓝咒 Submitted on 2019-11-29 09:05:54

The following code creates a table without raising any errors:

```sql
CREATE TABLE test(
    ID INTEGER NULL,
    CONSTRAINT PK_test PRIMARY KEY(ID)
)
```

Note that I cannot insert a NULL, as expected:

```sql
INSERT INTO test VALUES(1),(NULL)
```

```
ERROR:  null value in column "id" violates not-null constraint
DETAIL:  Failing row contains (null).
SQL state: 23502
```

Why can I create a table with a self-contradictory definition? The ID column is explicitly declared as NULLable, and it is implicitly not
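The short resolution: in PostgreSQL, `NULL` in a column definition is merely a noise word meaning "no NOT NULL constraint specified here", not a positive assertion of nullability, and `PRIMARY KEY` implies NOT NULL, which silently takes precedence. This can be checked against the question's own table:

```sql
CREATE TABLE test(
    id integer NULL,
    CONSTRAINT pk_test PRIMARY KEY (id)
);

-- The catalog shows the column ended up NOT NULL regardless:
SELECT attnotnull
FROM pg_attribute
WHERE attrelid = 'test'::regclass AND attname = 'id';
-- attnotnull = t
```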

Unable to connect to Postgres via PHP but can connect from command line and PgAdmin on different machine

旧巷老猫 Submitted on 2019-11-29 07:43:24

I've had a quick search around (about 30 minutes) and tried a few things, but nothing seems to work. Also, please note I'm no Linux expert (I can do most basic stuff: simple installs, configuration, etc.), so some of my config may be obviously wrong and I just don't see it! (Feel free to correct any of the configs below.)

The Setup

I have a running instance of PostgreSQL 9.3 on a Red Hat Enterprise Linux Server release 7.1 (Maipo) box. It's also running SELinux and iptables. iptables config (I added ports 80, 443 and 5432, and also 22, but that was done before...):

```
# sample configuration for
```
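Given the symptoms (command line and remote PgAdmin connect fine, only PHP under the web server fails), a frequent culprit on RHEL with SELinux enforcing is that httpd processes are denied outbound database connections. A hedged first thing to check before digging into iptables:

```shell
# See whether httpd may open database network connections
getsebool httpd_can_network_connect_db

# If it reports "off", enable it persistently (-P survives reboots)
setsebool -P httpd_can_network_connect_db on
```

If that boolean was the blocker, the PHP connection should succeed immediately, with no iptables changes required.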

Lock for SELECT so another process doesn't get old data

不打扰是莪最后的温柔 Submitted on 2019-11-29 07:11:12

I have a table that could have two threads reading data from it. If the data is in a certain state (let's say state 1), the process will do something (not relevant to this question) and then update the state to 2. It seems to me that there could be a case where thread 1 and thread 2 both perform a SELECT within microseconds of one another, both see that the row is in state 1, and then both do the same thing, so two updates occur after the locks have been released.

Question: Is there a way to prevent the second thread from being able to modify this data in Postgres - AKA it is forced to do
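One standard way to close this read-then-update race is to take the row lock at SELECT time, so the second transaction cannot even read the row until the first commits. A sketch (the question gives no schema, so `jobs`, `state` and `id` are invented names):

```sql
BEGIN;

-- Locks the selected row; a concurrent transaction running the same
-- statement blocks here until we COMMIT, then sees the new state
SELECT *
FROM jobs
WHERE state = 1
LIMIT 1
FOR UPDATE;

-- ... do the work ...

UPDATE jobs SET state = 2 WHERE id = 42;  -- id of the row selected above

COMMIT;
```

On PostgreSQL 9.5+, `FOR UPDATE SKIP LOCKED` lets the second thread pick a different unlocked row instead of waiting, which is the usual job-queue pattern.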

How to set correct attribute names to a json aggregated result with GROUP BY clause?

半腔热情 Submitted on 2019-11-29 02:55:15

Question: I have a table temp defined like this:

```
id | name   | body   | group_id
---+--------+--------+---------
 1 | test_1 | body_1 | 1
 2 | test_2 | body_2 | 1
 3 | test_3 | body_3 | 2
 4 | test_4 | body_4 | 2
```

I would like to produce a result grouped by group_id and aggregated to JSON. However, a query like this:

```sql
SELECT group_id, json_agg(ROW(id, name, body))
FROM temp
GROUP BY group_id;
```

produces this result:

```
1;[{"f1":1,"f2":"test_1","f3":"body_1"}, {"f1":2,"f2":"test_2","f3":"body_2"}]
2;[{"f1":3,"f2":"test
```
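The `f1`/`f2`/`f3` keys appear because `ROW(id, name, body)` is an anonymous record whose fields have no names. A hedged sketch of two usual fixes (the `items` alias is mine):

```sql
-- PostgreSQL 9.4+: name each key explicitly
SELECT group_id,
       json_agg(json_build_object('id', id, 'name', name, 'body', body)) AS items
FROM temp
GROUP BY group_id;

-- PostgreSQL 9.3: aggregate a named row via a correlated scalar subquery,
-- so the column names survive into the JSON keys
SELECT group_id,
       json_agg((SELECT x FROM (SELECT id, name, body) AS x)) AS items
FROM temp
GROUP BY group_id;
```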