postgresql-10

Postgres timestamp with microseconds

拜拜、爱过 submitted on 2021-02-11 12:18:28
Question: I have a column in a table with the datatype set as timestamp without time zone. I need the time part with microseconds (6 digits), but if the trailing digits are zero, the microseconds part drops them. I am able to query it with the query below to get all 6 digits:

    select to_char(now(), 'yyyy-mm-dd hh:mi:us');
    2020-07-16 12:05:598000

The above output is in text/char format, but I need it in the timestamp without time zone datatype with all 6 digits for microseconds, even if one or some of the last
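The "missing" trailing zeros are a property of how the value is rendered, not of what is stored; a timestamp column always keeps full microsecond precision. As an illustrative sketch (in Python rather than SQL, and independent of the question's table), a fixed-width formatter such as strftime's %f always emits exactly six digits, trailing zeros included:

```python
from datetime import datetime

# 0.8 seconds = 800000 microseconds; the trailing zeros are part of the value
ts = datetime(2020, 7, 16, 12, 5, 59, 800000)

# %f always renders exactly six digits, keeping trailing zeros
print(ts.strftime("%Y-%m-%d %H:%M:%S.%f"))  # → 2020-07-16 12:05:59.800000
```

The takeaway is that forcing six digits is a formatting concern: the stored timestamp needs no conversion, only a fixed-width rendering when displayed.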

Interpreting Results From Explain Analyze in Postgres

不打扰是莪最后的温柔 submitted on 2021-02-11 06:24:47
Question: I recently ran a query which took about 9 minutes to complete. Attempting to determine why, I used EXPLAIN ANALYZE to help solve the problem. From the output, it looks as though everything has the appropriate indexes; it is just taking an extremely long time. I've put the query and the results below. Is it just taking this long due to the amount of data, or is there something I am doing wrong? Does my query need to change fundamentally in order to improve the performance? Additional Info:

Concatenate JSON rows

▼魔方 西西 submitted on 2021-01-28 20:09:21
Question: I have the following table with sample records:

    create table jtest (
        id int,
        jcol json
    );
    insert into jtest values (1, '{"name":"Jack","address1":"HNO 123"}');
    insert into jtest values (1, '{"address2":"STREET1"}');
    insert into jtest values (1, '{"address3":"UK"}');

    select * from jtest;

    id  jcol
    -------------------------------------------
    1   {"name":"Jack","address1":"HNO 123"}
    1   {"address2":"STREET1"}
    1   {"address3":"UK"}

Expected result:

    id  jcol
    ----------------------------------------------------
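The expected result is evidently one merged JSON object per id. The same key-union semantics that jsonb's || concatenation operator applies server-side can be sketched off-database in Python (the row data below simply reuses the question's sample inserts):

```python
import json

# (id, jcol) pairs as inserted in the question's jtest table
rows = [
    (1, '{"name": "Jack", "address1": "HNO 123"}'),
    (1, '{"address2": "STREET1"}'),
    (1, '{"address3": "UK"}'),
]

# Merge every row's JSON object into one dict per id; later keys
# overwrite earlier ones, mirroring jsonb || concatenation.
merged = {}
for row_id, jcol in rows:
    merged.setdefault(row_id, {}).update(json.loads(jcol))

print(json.dumps(merged[1]))
```

In SQL the equivalent idea would be aggregating the rows per id and folding the objects together; the sketch above only demonstrates the merge semantics, not the Postgres aggregation itself.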

Get rid of double quotation marks with SQLalchemy for PostgreSQL

独自空忆成欢 submitted on 2020-05-29 07:12:48
Question: I'm trying to import 200 SAS XPT files into my PostgreSQL database:

    engine = create_engine('postgresql://user:pwd@server:5432/dbName')
    for file in listdir(dataPath):
        name, ext = file.split('.', 1)
        with open(join(dataPath, file), 'rb') as f:
            xport.to_dataframe(f).to_sql(name, engine, schema='schemaName',
                                         if_exists='replace', index=False)
        print("Successfully wrote ", file, " to database.")

However, the generated SQL has double quotation marks around all identifiers, for example:

    CREATE TABLE "Y2009"
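One common cause of this (not confirmed by the truncated question) is identifier case folding: PostgreSQL folds unquoted identifiers to lower case, so SQLAlchemy quotes any name containing upper-case characters in order to preserve it. A minimal sketch of lower-casing the derived table name before handing it to to_sql; normalize_identifier is a hypothetical helper, not part of SQLAlchemy:

```python
def normalize_identifier(filename: str) -> str:
    """Derive a table name that SQLAlchemy can emit unquoted.

    PostgreSQL lower-cases unquoted identifiers, and SQLAlchemy quotes
    any name containing upper-case characters, so lower-casing up front
    avoids quoted names like "Y2009".
    """
    name = filename.split('.', 1)[0]
    return name.lower()

print(normalize_identifier("Y2009.xpt"))  # → y2009
```

In the question's loop this would mean passing normalize_identifier(file) instead of name to to_sql; queries can then refer to the table as y2009 without quotes.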

postgres_fdw: possible to push data to foreign server for join?

 ̄綄美尐妖づ submitted on 2020-05-26 02:32:26
Question: Suppose I have a query like

    select * from remote_table join local_table using (common_key)

where remote_table is a FOREIGN TABLE using postgres_fdw and local_table is a regular table. local_table is small (100 rows) and remote_table is large (millions of rows). It looks like the remote table is pulled over in its entirety and joined locally, when it would be more efficient to ship the smaller table to the remote server and join remotely. Is there a way to get postgres_fdw to do that?

Answer 1: You cannot

Foreign-data wrapper “postgres_fdw” does not exist (even if it does)

让人想犯罪 __ submitted on 2020-04-30 06:31:19
Question: Using PostgreSQL 10.10, from the superuser postgres:

    CREATE EXTENSION postgres_fdw;
    GRANT USAGE ON FOREIGN DATA WRAPPER postgres_fdw TO my_user;

Then, when running the following as my_user:

    CREATE SERVER my_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (...);

this error message is displayed:

    Query 1 ERROR: ERROR: foreign-data wrapper "postgres_fdw" does not exist

Here is the list of currently active foreign-data wrappers (from psql):

    postgres=# \dew
    List of foreign-data wrappers
    Name | Owner

Set-returning functions are not allowed in UPDATE when using Postgres 10

廉价感情. submitted on 2020-03-02 06:16:04
Question: We have an old Flyway database update

    UPDATE plays SET album = (regexp_matches(album, '^6,(?:(.+),)?tv\d+'))[1]

that runs fine with any Postgres version from 9.2 to 9.6 but fails with the latest Postgres 10. It happens even when run directly, without any JDBC:

    ERROR: set-returning functions are not allowed in UPDATE

Is there a backwards incompatibility I didn't notice in the version 10 release notes? Is there a workaround?

Answer 1: This is untested, but should work in all PostgreSQL versions:

    UPDATE
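Independent of the UPDATE restriction itself, the capture that the statement relies on can be checked off-database. A Python sketch of the same pattern (re.match anchors at the start of the string, like the ^ in the original; extract_album is a hypothetical helper used only for illustration):

```python
import re

# Same pattern as the UPDATE statement: after a leading "6,",
# optionally capture everything up to the comma before "tv<digits>".
PATTERN = re.compile(r'^6,(?:(.+),)?tv\d+')

def extract_album(album):
    """Return the captured middle segment, or None when absent."""
    m = PATTERN.match(album)
    return m.group(1) if m else None

print(extract_album('6,My Album,tv42'))  # → My Album
print(extract_album('6,tv42'))           # → None (optional group absent)
```

When the optional group does not participate in the match, regexp_matches yields NULL for that element, which the Python sketch mirrors by returning None.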

Single character text search alternative

戏子无情 submitted on 2020-01-22 02:47:20
Question: Requirement: ensure that single-character, case-insensitive text search over compound columns is processed in the most efficient and performant way, including relevance-weight sorting. Given the table

    create table test_search (
        id int primary key,
        full_name varchar(300) not null,
        short_name varchar(30) not null
    );

with 3 million rows, a suggester API call sends queries to the database starting from the first input character, and the first 20 results ordered by relevance should be returned. Options/disadvantages: like lower() / ilike over '%c%':
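To make the "relevance weight" requirement concrete, here is a plain-Python sketch of what the suggester is asked to compute, not the question's Postgres solution: a case-insensitive substring match where prefix hits rank before mid-string hits (roughly what ilike '%c%' plus an ORDER BY on match position would express in SQL):

```python
def suggest(query, names, limit=20):
    """Case-insensitive substring suggester with a simple relevance weight:
    matches earlier in the string rank higher, ties break alphabetically.
    """
    q = query.lower()
    scored = []
    for name in names:
        pos = name.lower().find(q)
        if pos >= 0:  # -1 means no match at all
            scored.append((pos, name.lower(), name))
    scored.sort()
    return [name for _, _, name in scored[:limit]]

print(suggest('c', ['Alice', 'Carol', 'Bob', 'Marc']))
# → ['Carol', 'Alice', 'Marc']
```

The linear scan is obviously not viable over 3 million rows; it only pins down the ordering semantics that any indexed solution (trigram, prefix, or full-text) would have to reproduce.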