postgresql-9.3

Create json with column values as object keys

旧街凉风 submitted on 2019-12-05 04:08:42
I have a table defined like this:

    CREATE TABLE data_table (
        id bigserial,
        "name" text NOT NULL,
        "value" text NOT NULL,
        CONSTRAINT data_table_pk PRIMARY KEY (id)
    );

    INSERT INTO data_table ("name", "value")
    VALUES ('key_1', 'value_1'), ('key_2', 'value_2');

I would like to get a JSON object from this table's content, looking like this:

    { "key_1": "value_1", "key_2": "value_2" }

Right now I'm using the client application to parse the result set into JSON format. Is it possible to accomplish this with a PostgreSQL query?

If you're on 9.4 you can do the following:

    select json_object_agg("name",
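The answer above is cut off mid-statement. Presumably it continues with the second aggregate argument; a minimal sketch of the complete query, assuming the table from the question and PostgreSQL 9.4+ (the page is tagged 9.3, where `json_object_agg` is not available):

```sql
-- Sketch (PostgreSQL 9.4+): fold name/value rows into one JSON object
SELECT json_object_agg("name", "value") AS obj
FROM data_table;
-- expected shape: { "key_1" : "value_1", "key_2" : "value_2" }
```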

Load Balancing using HAProxy for Postgresql 9.4

一曲冷凌霜 submitted on 2019-12-05 02:55:55
Question: I have set up multi-master replication of PostgreSQL using BDR (Bi-Directional Replication) among 4 nodes (virtual machines). Now I want to put a load balancer in front of them for high availability. For this I have installed and configured HAProxy on a separate virtual machine, which is listening on 5432/tcp for connections. The HAProxy configuration is as follows:

    listen pgsql_bdr *:5432
        mode tcp
        option httpchk
        balance roundrobin
        server master 192.168.123.1:5432 check backup
        server slave1 192.168
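The configuration is truncated above. A hedged sketch of what a complete TCP-mode listen block for four BDR nodes might look like — the IP addresses beyond the first are assumptions, and `option httpchk` is dropped because PostgreSQL does not speak HTTP, so a plain TCP check is the usual choice here:

```
listen pgsql_bdr
    bind *:5432
    mode tcp
    balance roundrobin
    server node1 192.168.123.1:5432 check
    server node2 192.168.123.2:5432 check
    server node3 192.168.123.3:5432 check
    server node4 192.168.123.4:5432 check
```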

How to get the current free disk space in Postgres?

感情迁移 submitted on 2019-12-05 01:20:47
I need to be sure that I have at least 1 GB of free disk space before starting some work in my database. I'm looking for something like this:

    select pg_get_free_disk_space();

Is it possible? (I found nothing about it in the docs.) PG: 9.3; OS: Linux/Windows

Craig Ringer: PostgreSQL does not currently have features to directly expose disk space. For one thing, which disk? A production PostgreSQL instance often looks like this:

    /pg/pg94/        : a RAID6 of fast, reliable storage on a BBU RAID controller in WB mode, for the catalogs and most important data
    /pg/pg94/pg_xlog : a fast, reliable RAID1, for the
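One common workaround is to expose the information through a server-side function. A sketch, assuming superuser access and the PL/Python (untrusted) extension on a Linux host — the function name simply mirrors the hypothetical one from the question:

```sql
-- Sketch: report free bytes for a given path via PL/PythonU
CREATE EXTENSION IF NOT EXISTS plpythonu;

CREATE OR REPLACE FUNCTION pg_get_free_disk_space(path text DEFAULT '.')
RETURNS bigint AS $$
import os
st = os.statvfs(path)              # filesystem stats for the given path
return st.f_bavail * st.f_frsize   # bytes available to unprivileged users
$$ LANGUAGE plpythonu;

SELECT pg_get_free_disk_space('/pg/pg94');
```

Note this only answers "which disk?" for the path you pass in; with tablespaces or a separate pg_xlog volume you would need to call it once per mount point.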

How to escape underscores in Postgresql

南笙酒味 submitted on 2019-12-04 22:44:15
When searching for underscores in PostgreSQL, literal use of the character _ doesn't work. For example, if you wanted to search all your tables for any columns that end in _by, for something like change-log or activity information (e.g. updated_by, reviewed_by, etc.), the following query almost works:

    SELECT table_name, column_name
    FROM information_schema.columns
    WHERE column_name LIKE '%_by'

It basically ignores the underscore completely and returns the same rows as if you'd searched for LIKE '%by'. This may not be a problem in all cases, but it has the potential to be one. How to search for
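The answer is truncated, but the standard fixes are well known: in a LIKE pattern `_` matches any single character, so it must be escaped — either with the default escape character (backslash) or with an explicitly declared one:

```sql
-- backslash-escape the underscore so it matches literally
SELECT table_name, column_name
FROM information_schema.columns
WHERE column_name LIKE '%\_by';

-- or declare your own escape character
SELECT table_name, column_name
FROM information_schema.columns
WHERE column_name LIKE '%=_by' ESCAPE '=';
```

If `standard_conforming_strings` is off (legacy configurations), the backslash itself needs doubling in the string literal: `'%\\_by'`.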

Selecting all records created less than 1 second apart

折月煮酒 submitted on 2019-12-04 17:51:33
I have a table:

    create table purchase (
        transaction_id integer,
        account_id     bigint,
        created        timestamp with time zone,
        price          numeric(5,2)
    )

I think I have a problem where a system is sending me duplicate records, but I don't know how widespread the issue is. I need a query to select all records created within 1 second of each other (not necessarily within the same clock second) that have the same account_id and the same price. So, for example, I would want to be able to find these two records:

    +----------------+----------------+-------------------------------+-------+
    | transaction_id | account_id     | created                       | price |
    +-----------
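One way to approach this — a sketch using the window function `lag()` to compare each row with the previous row for the same account_id and price. This reports the later row of each close pair; the matching earlier row can be recovered symmetrically with `lead()`:

```sql
SELECT *
FROM (
    SELECT p.*,
           lag(created) OVER (PARTITION BY account_id, price
                              ORDER BY created) AS prev_created
    FROM purchase p
) sub
WHERE created - prev_created < interval '1 second';
```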

Partition pruning based on check constraint not working as expected

。_饼干妹妹 submitted on 2019-12-04 16:39:14
Why is the table "events_201504" included in the query plan below? Based on my query and the check constraint on that table, I would expect the query planner to be able to prune it entirely:

    database=# \d events_201504
               Table "public.events_201504"
       Column   |            Type             |                      Modifiers
    ------------+-----------------------------+-----------------------------------------------------
     id         | bigint                      | not null default nextval('events_id_seq'::regclass)
     created_at | timestamp without time zone |
    Indexes:
        "events_201504_pkey" PRIMARY KEY, btree (id)
        "events_201504_created_at" btree (created_at)
    Check
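As a hedged checklist for this class of problem: constraint exclusion only prunes a partition when it is enabled and the predicate is a plan-time constant of a matching type — a value computed via `now()`, or a `timestamp`/`timestamptz` mismatch against `created_at`, can defeat it. The parent table name `events` below is an assumption:

```sql
-- make sure exclusion is considered for partition lookups
SET constraint_exclusion = partition;

-- a literal of the same type as created_at can be excluded at plan time
EXPLAIN SELECT * FROM events
WHERE created_at >= '2015-05-01'::timestamp
  AND created_at <  '2015-06-01'::timestamp;
```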

Cast syntax to convert a sum to float

不羁岁月 submitted on 2019-12-04 15:00:01
Question: Using PostgreSQL 9.3, I want to convert the calculated values to the data type float. My first attempt:

    SELECT float(SUM(Seconds))/-1323 AS Averag;

gives me this error:

    syntax error at or near "SUM"

My second attempt:

    SELECT to_float(SUM(Seconds))/-1323 AS Averag;

gives me this error:

    function to_float(bigint) does not exist

Answer 1: You need to use the cast syntax:

    SELECT CAST (SUM(Seconds) AS FLOAT)/-1323 AS Averag;

Answer 2: There is also the shorthand cast syntax:

    SELECT sum(seconds)::float / -1323
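For context on why the cast matters at all: dividing two integers in PostgreSQL performs integer division, so casting either operand first preserves the fractional part. For example:

```sql
SELECT 7 / 2;                 -- integer division: 3
SELECT 7::float / 2;          -- 3.5 (PostgreSQL shorthand cast)
SELECT CAST(7 AS float) / 2;  -- 3.5 (standard SQL spelling)
```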

Dump and restore of PostgreSQL database with hstore comparison in view fails

穿精又带淫゛_ submitted on 2019-12-04 13:28:08
Question: I have a view which compares two hstore columns. When I dump and restore this database, the restore fails with the following error message:

    Importing /tmp/hstore_test_2014-05-12.backup...
    pg_restore: [archiver (db)] Error while PROCESSING TOC:
    pg_restore: [archiver (db)] Error from TOC entry 172; 1259 1358132 VIEW hstore_test_view xxxx
    pg_restore: [archiver (db)] could not execute query: ERROR:  operator does not exist: public.hstore = public.hstore
    LINE 2:  SELECT NULLIF(hstore_test_table
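The usual cause: pg_restore executes each object's DDL with a restricted `search_path`, so an unqualified `=` between hstore values may no longer resolve to the operator installed in `public`. A commonly suggested workaround is to schema-qualify the operator in the view definition. A hypothetical sketch — the column names are placeholders, since the original view text is truncated:

```sql
-- schema-qualify the hstore equality operator so restore-time
-- search_path restrictions cannot break the operator lookup
CREATE OR REPLACE VIEW hstore_test_view AS
SELECT t.id,
       t.col_a OPERATOR(public.=) t.col_b AS columns_equal
FROM hstore_test_table t;
```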

Postgres SELECT * FROM table WHERE column-varchar == "string-example"?

半世苍凉 submitted on 2019-12-04 12:57:01
I have the following table:

    CREATE TABLE lawyer (
        id SERIAL PRIMARY KEY,
        name VARCHAR NOT NULL UNIQUE,
        name_url VARCHAR check(translate(name_url, 'abcdefghijklmnopqrstuvwxyz-', '') = '') NOT NULL UNIQUE
    );

I want to:

    SELECT * FROM lawyer WHERE name_url = "john-doe"

Answer: Character literals are put into single quotes:

    SELECT * FROM lawyer WHERE name_url = 'john-doe';

See the manual for details: https://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS

Another answer: Looking for an exact match:

    select column1, column2 from mytable where column2 like 'string';

Pattern match can be
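To spell out why the double-quoted version fails: in PostgreSQL, double quotes delimit identifiers (table and column names), while single quotes delimit string literals:

```sql
-- "john-doe" is parsed as a column name, not a string:
SELECT * FROM lawyer WHERE name_url = "john-doe";
-- ERROR:  column "john-doe" does not exist

SELECT * FROM lawyer WHERE name_url = 'john-doe';  -- correct
```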

Finding the largest group of consecutive numbers within a partition

人盡茶涼 submitted on 2019-12-04 12:52:12
I have the following data, ordered by player_id and match_date. I would like to find the group of records with the maximum number of consecutive identical runs (runs = 4 from 2014-04-03 till 2014-04-12, i.e. 3 consecutive times):

    player_id  match_date  runs
    1          2014-04-01  5
    1          2014-04-02  55
    1          2014-04-03  4
    1          2014-04-10  4
    1          2014-04-12  4
    1          2014-04-14  3
    1          2014-04-19  4
    1          2014-04-20  44
    2          2014-04-01  23
    2          2014-04-02  23
    2          2014-04-03  23
    2          2014-04-10  23
    2          2014-04-12  4
    2          2014-04-14  3
    2          2014-04-19  23
    2          2014-04-20  1

I have come up with the following SQL:

    select *,row_number() over (partition by ranked.player_id,ranked
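A sketch of the classic "gaps and islands" approach the truncated query appears to be building toward: the difference of two `row_number()` sequences is constant within each run of identical `runs` values, so it can serve as a group key (the table name `scores` is a placeholder):

```sql
SELECT player_id, runs,
       count(*)        AS streak_length,
       min(match_date) AS streak_start,
       max(match_date) AS streak_end
FROM (
    SELECT s.*,
           row_number() OVER (PARTITION BY player_id ORDER BY match_date)
         - row_number() OVER (PARTITION BY player_id, runs ORDER BY match_date) AS grp
    FROM scores s
) ranked
GROUP BY player_id, runs, grp
ORDER BY streak_length DESC;
```

The first row per player (e.g. via `DISTINCT ON (player_id)`) then gives each player's longest streak.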