postgresql-9.3

How to connect to localhost with postgres_fdw?

放肆的年华 submitted on 2019-11-30 20:44:38
The idea is that I have a local database named northwind, and with postgres_fdw I want to connect to another database named test on localhost (simulating a remote connection, for situations like: when a table in my database is updated, do something in the other database, such as saving to a history table). So I opened a psql console and typed:

CREATE SERVER app_db FOREIGN DATA WRAPPER postgres_fdw OPTIONS (dbname 'test', host 'localhost:5432');

as I found in the "A Look at Foreign Data Wrappers" link. Next I also followed the tutorial:

CREATE USER MAPPING for postgres SERVER app_db OPTIONS (user 'postgres', password
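The statement above folds the port into the host option; postgres_fdw expects host and port as separate options, which is the usual cause of connection failures in this setup. A minimal sketch of the standard pattern (the foreign table and its columns are hypothetical, and the password is a placeholder):

```sql
-- Separate host and port options instead of host 'localhost:5432'
CREATE SERVER app_db
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (dbname 'test', host 'localhost', port '5432');

-- Map the local role to a role on the "remote" server
CREATE USER MAPPING FOR postgres
  SERVER app_db
  OPTIONS (user 'postgres', password 'secret');

-- Expose a table from the test database locally (table name is hypothetical)
CREATE FOREIGN TABLE remote_history (
    id   integer,
    note text
) SERVER app_db OPTIONS (schema_name 'public', table_name 'history');
```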

Postgres: how to implement a calculated column with a WHERE clause

我的梦境 submitted on 2019-11-30 19:54:58
I need to filter by a calculated column in Postgres. It's easy in MySQL, but how do you implement it in Postgres SQL? Pseudocode:

select id, (cos(id) + cos(id)) as op from myTable WHERE op > 1;

Any SQL tricks? If you don't want to repeat the expression, you can use a derived table:

select *
from (
  select id, cos(id) + cos(id) as op
  from myTable
) as t
WHERE op > 1;

This won't have any impact on performance; it is merely syntactic sugar required by the SQL standard. Alternatively you could rewrite the above to a common table expression:

with t as (
  select id, cos(id) + cos(id) as op
  from
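The CTE variant is cut off above; it presumably continues along these lines (a sketch completing the same query, not the original answer's exact text):

```sql
WITH t AS (
  SELECT id, cos(id) + cos(id) AS op
  FROM myTable
)
SELECT *
FROM t
WHERE op > 1;
```

One caveat worth knowing: in PostgreSQL before version 12 a CTE is an optimization fence, so unlike the derived-table form it can prevent the planner from pushing the WHERE condition down.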

How to auto-increment an alphanumeric value in PostgreSQL?

萝らか妹 submitted on 2019-11-30 13:58:17
I am using PostgreSQL 9.3.5. I have a table (StackOverflowTable) with columns (SoId, SoName, SoDob). I want a sequence generator for column SoId, which is an alphanumeric value. I want to auto-increment an alphanumeric value in PostgreSQL, e.g. SO10001, SO10002, SO10003, ..., SO99999. Edit: What if tomorrow I need to generate a sequence such as SO1000E100, SO1000E101, ..., and it has to perform well — what is the best solution then? Use sequences and a default value for the id:

postgres=# CREATE SEQUENCE xxx;
CREATE SEQUENCE
postgres=# SELECT setval('xxx', 10000);
 setval
--------
  10000
(1
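The answer is cut off above, but the pattern it is leading toward combines a sequence with a text default expression (a sketch under that assumption; table and column names follow the question, the sequence name is hypothetical):

```sql
CREATE SEQUENCE so_seq START 10001;

CREATE TABLE "StackOverflowTable" (
    "SoId"   text DEFAULT ('SO' || nextval('so_seq')) PRIMARY KEY,
    "SoName" text,
    "SoDob"  date
);

-- Each insert draws the next number and prefixes it with 'SO'
INSERT INTO "StackOverflowTable" ("SoName", "SoDob")
VALUES ('Alice', '1990-01-01');
```

Because the prefix is applied in the default expression rather than stored per row, switching to a scheme like SO1000E100 later only means changing that one expression.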

postgresql sequence nextval in schema

梦想与她 submitted on 2019-11-30 12:30:21
Question: I have a sequence on PostgreSQL 9.3 inside a schema. I can do this:

SELECT last_value, increment_by from foo."SQ_ID";
 last_value | increment_by
------------+--------------
          1 |            1
(1 fila)

but this does not work:

SELECT nextval('foo.SQ_ID');
ERROR:  no existe la relación «foo.sq_id»
LÍNEA 1: SELECT nextval('foo.SQ_ID');

What is wrong? It says the relation «foo.sq_id» does not exist, but it exists.

Answer 1: The quoting rules are painful. I think you want:

SELECT nextval('foo."SQ_ID"');

to prevent case
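In brief: nextval takes its argument as a string that is parsed like an identifier, so inside it the usual case-folding rules apply, and a mixed-case name needs its own double quotes. A sketch reproducing the situation (the schema and sequence names follow the question):

```sql
CREATE SCHEMA foo;
CREATE SEQUENCE foo."SQ_ID";

SELECT nextval('foo.SQ_ID');    -- fails: folded to lowercase, looks up foo.sq_id
SELECT nextval('foo."SQ_ID"');  -- works: inner quotes preserve the case
```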

“extra data after last expected column” while trying to import a CSV file into PostgreSQL

我的未来我决定 submitted on 2019-11-30 10:46:07
I am trying to copy the content of a CSV file into my PostgreSQL DB and I get this error: "extra data after last expected column". The content of my CSV is:

agency_id,agency_name,agency_url,agency_timezone,agency_lang,agency_phone
100,RATP (100),http://www.ratp.fr/,CET,,

and my PostgreSQL command is:

COPY agency (agency_name, agency_url, agency_timezone) FROM 'myFile.txt' CSV HEADER DELIMITER ',';

Here is my table:

CREATE TABLE agency (
    agency_id character varying,
    agency_name character varying NOT NULL,
    agency_url character varying NOT NULL,
    agency_timezone character varying NOT NULL,
    agency_lang
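The error comes from the mismatch between the file and the column list: each CSV row carries six fields, but COPY was told to expect only three, and COPY cannot silently drop the extra ones. A minimal fix is to list every column that appears in the file (a sketch; this assumes the truncated table definition above also includes agency_lang and agency_phone columns — otherwise the file has to be preprocessed or loaded into a staging table first):

```sql
COPY agency (agency_id, agency_name, agency_url,
             agency_timezone, agency_lang, agency_phone)
FROM 'myFile.txt'
WITH (FORMAT csv, HEADER true, DELIMITER ',');
```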

Are postgres JSON indexes efficient enough compared with classic normalized tables?

不想你离开。 submitted on 2019-11-30 10:29:38
Question: Current PostgreSQL versions have introduced various features for JSON content, but I'm concerned whether I really should use them — there is no established "best practice" yet on what works and what doesn't, or at least I can't find it. I have a specific example: a table about objects which, among other things, contains a list of alternate names for each object. All that data will also be included in a JSON column for retrieval purposes. For example (skipping all the other
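For context, the pattern the question is weighing against a normalized names table usually looks like this in PostgreSQL 9.4+: a jsonb column with a GIN index supporting containment searches (a sketch; the table, column names, and document shape are hypothetical):

```sql
CREATE TABLE objects (
    id   serial PRIMARY KEY,
    data jsonb  -- e.g. {"name": "X", "alt_names": ["Y", "Z"]}
);

-- A GIN index makes @> (containment) queries indexable
CREATE INDEX objects_data_gin ON objects USING gin (data);

-- Find objects that list 'Y' among their alternate names
SELECT id FROM objects
WHERE data @> '{"alt_names": ["Y"]}';
```

Note that the plain json type of 9.3 (as opposed to jsonb) does not support this kind of indexing, which is itself part of the trade-off the question asks about.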

Why can I create a table with PRIMARY KEY on a nullable column?

柔情痞子 submitted on 2019-11-30 08:53:31
Question: The following code creates a table without raising any errors:

CREATE TABLE test(
  ID INTEGER NULL,
  CONSTRAINT PK_test PRIMARY KEY(ID)
);

Note that I cannot insert a NULL, as expected:

INSERT INTO test VALUES(1),(NULL);
ERROR:  null value in column "id" violates not-null constraint
DETAIL:  Failing row contains (null).
SQL state: 23502

Why can I create a table with a self
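The short answer (hedged, since the accepted answer is cut off here): a PRIMARY KEY constraint itself imposes NOT NULL on its columns, overriding the explicit NULL in the column definition rather than conflicting with it. This can be confirmed from the catalog:

```sql
-- attnotnull is true despite "ID INTEGER NULL" in the definition
SELECT attname, attnotnull
FROM pg_attribute
WHERE attrelid = 'test'::regclass
  AND attname = 'id';
```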

Bad optimization/planning on Postgres window-based queries (partition by(, group by?)) - 1000x speedup

偶尔善良 submitted on 2019-11-30 07:39:21
We are running Postgres 9.3.5 (07/2014). We have quite a complex data-warehouse/reporting setup in place (ETL, materialized views, indexing, aggregations, analytical functions, ...). What I discovered just now may be difficult to implement in the optimizer (?), but it makes a huge difference in performance (sample code only, with strong similarity to our query, to cut unnecessary complexity):

create view foo as
select
  sum(s.plan)   over w_pyl as pyl_plan,   -- money planned to spend in this pot/loc/year
  sum(s.booked) over w_pyl as pyl_booked, -- money already booked in this pot/loc/year
  -- money
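The view above uses a named window (w_pyl) whose definition is cut off; for readers unfamiliar with the syntax, a named window is declared once in a WINDOW clause and shared by several aggregates (a generic sketch, not the original view — the table and partition columns are guesses from the comments):

```sql
SELECT sum(s.plan)   OVER w_pyl AS pyl_plan,
       sum(s.booked) OVER w_pyl AS pyl_booked
FROM spendings s
WINDOW w_pyl AS (PARTITION BY s.pot, s.loc, s.year);
```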

PostgreSQL Nested JSON Querying

烈酒焚心 submitted on 2019-11-30 06:53:19
Question: On PostgreSQL 9.3.4, I have a JSON-type column called person, and the data stored in it has the format {"dogs": [{"breed": <>, "name": <>}, {"breed": <>, "name": <>}]}. I want to retrieve the breed of the dog at index 0. Here are the two queries I ran:

Doesn't work:

db=> select person->'dogs'->>0->'breed' from people where id = 77;
ERROR:  operator does not exist: text -> unknown
LINE 1: select person->'dogs'->>0->'bree...
                                 ^
HINT:  No operator matches the given name and argument type(s). You might need to
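The error is that ->> returns text, so no further -> can be applied to its result. Use -> (which returns json) for every intermediate step and ->> only on the final one (the corrected query, following the question's table):

```sql
SELECT person->'dogs'->0->>'breed'
FROM people
WHERE id = 77;
```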

How to set correct attribute names on a JSON aggregated result with a GROUP BY clause?

僤鯓⒐⒋嵵緔 submitted on 2019-11-30 05:06:46
I have a table temp defined like this:

 id | name   | body   | group_id
----+--------+--------+----------
  1 | test_1 | body_1 | 1
  2 | test_2 | body_2 | 1
  3 | test_3 | body_3 | 2
  4 | test_4 | body_4 | 2

I would like to produce a result grouped by group_id and aggregated to JSON. However, a query like this:

SELECT group_id, json_agg(ROW(id, name, body))
FROM temp
GROUP BY group_id;

produces this result:

1;[{"f1":1,"f2":"test_1","f3":"body_1"}, {"f1":2,"f2":"test_2","f3":"body_2"}]
2;[{"f1":3,"f2":"test_3","f3":"body_3"}, {"f1":4,"f2":"test_4","f3":"body_4"}]

The attributes in the json objects are named
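The f1/f2/f3 keys appear because an anonymous ROW(...) has no column names. One common fix is to name each key explicitly with json_build_object (a sketch; note json_build_object requires PostgreSQL 9.4 — on 9.3 an alternative is row_to_json over a subquery or a named composite type):

```sql
SELECT group_id,
       json_agg(json_build_object('id', id, 'name', name, 'body', body))
FROM temp
GROUP BY group_id;
```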