postgresql-9.4

PostgreSQL 9.4 query gets progressively slower when joining TSTZRANGE with &&

感情迁移 submitted on 2021-02-07 12:19:45
Question: I am running a query that gets progressively slower as records are added. Records are added continuously by an automated process (bash calling psql). I would like to fix this bottleneck; however, I don't know what my best option is. This is the output from pgBadger:

Hour  Count   Duration  Avg duration
00    9,990   10m3s     60ms    <---ignore this hour
02    1       60ms      60ms    <---ignore this hour
03    4,638   1m54s     24ms    <---queries begin with table empty
04    30,991  55m49s    108ms   <---first full hour of queries
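The usual cause of this pattern is that the range-overlap join falls back to a sequential or nested-loop scan, whose cost grows with the table. A minimal sketch of the standard fix, a GiST index on the range column (the table and column names here are assumptions; the question does not show its schema):

```sql
-- Hypothetical schema; the question's actual table is not shown.
-- A GiST index lets the && (overlap) operator use an index scan
-- instead of rescanning every accumulated row on each join.
CREATE INDEX idx_events_period ON events USING gist (period);

-- The range join can then be index-assisted:
-- SELECT ...
-- FROM events e
-- JOIN windows w ON e.period && w.period;
```

Checking `EXPLAIN ANALYZE` before and after would confirm whether the planner switches from a sequential scan to an index scan on the range column.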

update varchar column to date in postgreSQL

半城伤御伤魂 submitted on 2021-02-04 06:31:07
Question: Trying to update a column with varchar datatype, e.g. '1950-08-14', to a date datatype using

UPDATE tablename SET columnname = to_date(columnname, 'YYYY-MM-DD');

or

ALTER TABLE tablename ALTER COLUMN columnname TYPE DATE USING to_date(columnname, 'YYYY-MM-DD');

but both return the error message

ERROR: invalid value "columnname" for "YYYY"
DETAIL: Value must be an integer.

Referencing http://www.postgresql.org/docs/9.4/static/functions-formatting.html

Answer 1: The query: ALTER TABLE tablename ALTER
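The error text `invalid value "columnname" for "YYYY"` indicates that `to_date()` received the literal string 'columnname' rather than the column's value, which typically means the column name was wrapped in single quotes in the actual statement. A sketch of the working form, using the placeholder names `tablename`/`columnname` from the question:

```sql
-- Column name unquoted (or double-quoted as an identifier),
-- never in single quotes, which would make it a string literal.
ALTER TABLE tablename
  ALTER COLUMN columnname TYPE date
  USING to_date(columnname, 'YYYY-MM-DD');
```

With the string already in ISO format, `USING columnname::date` would also work; `to_date()` is only needed for non-default formats.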

Is it possible to rebuild pg_depend?

自闭症网瘾萝莉.ら submitted on 2021-01-29 09:31:14
Question: I have a PostgreSQL 9.4 database affected by the following bug in BDR: https://github.com/2ndQuadrant/bdr/issues/309 In a nutshell, that bug in BDR resulted in missing dependencies in the pg_depend system catalog. Now when I use pg_dump, objects are dumped out of order and the dump can't be used without manual editing. Is there a way to make PostgreSQL rebuild the dependencies in pg_depend without rebuilding the database from scratch? Answer 1: No, because that information is not redundant (that
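For diagnosing which rows went missing, pg_depend can at least be inspected per object. A sketch, with `myview` as a hypothetical object name (not from the question):

```sql
-- List the dependency rows recorded for one object; for a damaged
-- catalog, rows that pg_dump needs for ordering may simply be absent.
SELECT classid::regclass, objid, refclassid::regclass, refobjid
FROM pg_depend
WHERE objid = 'myview'::regclass;
```

This does not rebuild anything, it only shows what pg_dump's topological sort has to work with for that object.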

PostgreSQl function return multiple dynamic result sets

我与影子孤独终老i submitted on 2021-01-28 06:26:27
Question: I have an old MSSQL procedure that needs to be ported to a PostgreSQL function. Basically, the SQL procedure consists of a CURSOR over a SELECT statement. For each cursor entity I have three SELECT statements based on the current cursor output.

FETCH NEXT FROM @cursor INTO @entityId
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT * FROM table1 WHERE col1 = @entityId
    SELECT * FROM table2 WHERE col2 = @entityId
    SELECT * FROM table3 WHERE col3 = @entityId
END

The tables from the SELECT statements have
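PostgreSQL functions cannot return multiple ordinary result sets, but the usual workaround is returning several refcursors. A minimal sketch for one entity id, reusing the table and column names from the question (the function name and cursor labels are made up):

```sql
-- Hypothetical port: one refcursor per result set.
CREATE OR REPLACE FUNCTION get_entity_sets(p_entity_id integer)
RETURNS SETOF refcursor
LANGUAGE plpgsql AS $$
DECLARE
    c1 refcursor := 'cur1';
    c2 refcursor := 'cur2';
    c3 refcursor := 'cur3';
BEGIN
    OPEN c1 FOR SELECT * FROM table1 WHERE col1 = p_entity_id;
    RETURN NEXT c1;
    OPEN c2 FOR SELECT * FROM table2 WHERE col2 = p_entity_id;
    RETURN NEXT c2;
    OPEN c3 FOR SELECT * FROM table3 WHERE col3 = p_entity_id;
    RETURN NEXT c3;
END;
$$;
```

The caller must run inside a transaction, since refcursors close at transaction end:

```sql
BEGIN;
SELECT get_entity_sets(42);
FETCH ALL FROM cur1;
FETCH ALL FROM cur2;
FETCH ALL FROM cur3;
COMMIT;
```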

SELECT fixed number of rows by evenly skipping rows

 ̄綄美尐妖づ submitted on 2021-01-27 18:41:06
Question: I am trying to write a query which returns an arbitrarily sized representative sample of data. I would like to do this by selecting only every nth row, where n is chosen so that the entire result set is as close as possible to an arbitrary size. I want this to work in cases where the result set would normally be smaller than that size; in such a case, the entire result set should be returned. I found this question which shows how to select every nth row. Here is what I have so far: SELECT * FROM (
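One way to sketch this with window functions (the table name `mytable`, the ordering column `id`, and the target size 100 are all assumptions, not from the question):

```sql
-- Compute n = ceil(total_rows / target_size) in the same pass using
-- count(*) OVER (), then keep every nth row. When total_rows <= 100,
-- n = 1 and rn % 1 = 0 holds for every row, so all rows are returned.
SELECT *
FROM (
    SELECT t.*,
           row_number() OVER (ORDER BY id) AS rn,
           count(*)    OVER ()             AS total_rows
    FROM mytable t
) s
WHERE rn % ceil(total_rows / 100.0)::int = 0;
```

The subquery is needed because window-function results cannot be referenced directly in the same level's WHERE clause.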