PostgreSQL

Add interval to timestamp using Ecto Fragments

Posted by 倖福魔咒の on 2020-12-23 14:20:49
Question: I want to write the following query in a Phoenix application using Ecto fragments:

    select *
    from (
        select id, inserted_at + interval '1 day' * expiry as deadline
        from friend_referral_code
    ) t
    where localtimestamp at time zone 'UTC' > deadline

The value of expiry is an integer that represents a number of days. What I've got so far is something like this:

    query = from frc in FriendReferralCode,
      where: fragment("localtimestamp at time zone 'UTC'") > fragment("? + INTERVAL '1' * ?", frc
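For reference, a minimal plain-SQL sketch of the same filter, assuming expiry really is an integer column holding a number of days; make_interval is an alternative to multiplying an interval literal (this is an illustration, not part of the original question):

    -- hypothetical sketch of the target query in plain SQL
    SELECT id,
           inserted_at + make_interval(days => expiry) AS deadline
    FROM   friend_referral_code
    WHERE  (localtimestamp AT TIME ZONE 'UTC') > inserted_at + make_interval(days => expiry);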

PostgreSQL: converting between epoch (long), TIMESTAMP, DATE, and String

Posted by 醉酒当歌 on 2020-12-23 10:58:12
pgAdmin window / Java window (screenshots omitted):

1. Epoch (long) to TIMESTAMP
    select TO_TIMESTAMP(1512490630) as time from tablename;

2. TIMESTAMP to epoch (long)
   To 10 digits:
    SELECT EXTRACT(epoch FROM NOW());
    SELECT EXTRACT(epoch FROM CAST('2017-12-06 00:17:10' AS TIMESTAMP));
   To 13 digits, and 13 digits rounded down:
    SELECT EXTRACT(epoch FROM NOW())*1000, floor(EXTRACT(epoch FROM NOW())*1000);

4. String to DATE
   You only get year-month-day, not hours, minutes, and seconds — odd. I found the answer in a blog post: it is designed that way…
    select to_date('2020-08-28 12:55:05')

5. TIMESTAMP (10-digit or 13-digit epoch) to String
    select to_char(to_timestamp(1512490630), 'YYYY-MM-DD HH24:MI:SS');
    SELECT to_char(to_timestamp(t.create_time / 1000), 'YYYY-MM-DD HH24:MI:SS');
   10 digits to String: SELECT to
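The statements above, consolidated into one sketch (the ::bigint casts and the explicit to_date format mask are additions, not part of the original post):

    -- epoch seconds (10 digits) -> timestamp
    SELECT to_timestamp(1512490630);
    -- timestamp -> epoch seconds (10 digits) and milliseconds (13 digits)
    SELECT extract(epoch FROM now())::bigint          AS epoch_10,
           (extract(epoch FROM now()) * 1000)::bigint AS epoch_13;
    -- string -> date, with an explicit format mask
    SELECT to_date('2020-08-28 12:55:05', 'YYYY-MM-DD HH24:MI:SS');
    -- epoch -> formatted string (divide a 13-digit value by 1000 first)
    SELECT to_char(to_timestamp(1512490630), 'YYYY-MM-DD HH24:MI:SS');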

Unable to connect to Postgres DB: "the authentication type 10 is not supported"

Posted by 感情迁移 on 2020-12-23 04:37:36
Question: I have recently tried my hand at Postgres, installed locally (PostgreSQL 13.0). I created a Maven project using Spring Data JPA and it works just fine, whereas when I tried a Gradle project I am not able to connect to the DB and keep getting the following error:

    org.postgresql.util.PSQLException: The authentication type 10 is not supported. Check that you have configured the pg_hba.conf file to include the client's IP address or subnet, and that it is using an authentication scheme
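As background (an assumption here, not part of the excerpt): PostgreSQL 13 defaults to scram-sha-256 password hashing, which older org.postgresql JDBC drivers report as "authentication type 10". Upgrading the driver to a SCRAM-capable release is the clean fix; alternatively the server can be reverted to md5 hashing, sketched below:

    -- hypothetical server-side workaround, run as a superuser
    ALTER SYSTEM SET password_encryption = 'md5';
    SELECT pg_reload_conf();
    ALTER USER postgres WITH PASSWORD 'your_password';  -- re-store the password with an md5 hash
    -- pg_hba.conf must then use "md5" rather than "scram-sha-256" for the client's address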

Does a varchar field's declared size have any impact in PostgreSQL?

Posted by 拈花ヽ惹草 on 2020-12-22 22:32:34
Question: Is VARCHAR(100) any better than VARCHAR(500) from a performance point of view? What about disk usage? We're talking about PostgreSQL today, not some database at some point in history.

Answer 1: They are identical. From the PostgreSQL documentation (http://www.postgresql.org/docs/8.3/static/datatype-character.html):

Tip: There are no performance differences between these three types, apart from increased storage size when using the blank-padded type, and a few extra cycles to check the length when storing into
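A quick sketch (table and values are hypothetical) showing that the declared limit does not change the on-disk representation; only the stored string length matters:

    CREATE TABLE demo (a varchar(100), b varchar(500), c text);
    INSERT INTO demo VALUES ('hello', 'hello', 'hello');
    SELECT pg_column_size(a), pg_column_size(b), pg_column_size(c) FROM demo;  -- all three are equal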

How to find and download RPM dependency packages and install RPM packages offline with yum

Posted by ∥☆過路亽.° on 2020-12-22 07:59:44
Behind every successful person there is a brave, lonely decision. Giving up is not hard, but persisting is cool.

Linux version: CentOS Linux release 7.3.1611 (Core)

1. Requirement

Recently I needed an offline installation of postgresql + postgis at work. There are two ways to install: building from source, or installing RPM packages. Building from source takes a long time, we lack a build environment, and the resulting install locations are unclear, so I chose RPM packages. But our end goal is an offline RPM install, we don't yet know which RPM dependencies postgresql + postgis needs, and grabbing RPMs off the web easily causes version conflicts. What to do? There are always more solutions than problems — read on.

2. Online installation

Installing by downloading an external repo source is what I'll call online installation here. We first install postgresql + postgis successfully the online way, and then work out how to collect the dependent RPM packages. The commands:

    # Install the repo RPM that postgresql depends on
    rpm -ivh https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-centos96-9.6-3.noarch.rpm
    # Install the dependency package for postgis
    rpm -ivh https://mirrors.aliyun.com/epel/epel-release-latest

Tuning PostgreSQL Autovacuum

Posted by 馋奶兔 on 2020-12-21 17:13:48
Author: Laurenz Albe is a senior consultant and support engineer at CYBERTEC. He has been working on and contributing to PostgreSQL since 2006. Translator: Lei Yanliang (类延良), of Highgo Software Co., Ltd. (瀚高基础软件股份有限公司), PostgreSQL enthusiast, 10g & 11g OCM, OGG certified expert.

In many PostgreSQL databases you never need to think or worry about tuning autovacuum. It runs automatically in the background and cleans up without getting in your way. But sometimes the default configuration is not good enough, and you have to tune autovacuum to make it work properly. This article describes some typical problem scenarios and how to handle them.

What autovacuum does

There are many autovacuum configuration parameters, which makes tuning complicated. The main reason is that autovacuum has many different tasks. In a sense, autovacuum has to solve every problem caused by PostgreSQL's multiversion concurrency control (MVCC) implementation:

- clean up the "dead tuples" left behind by UPDATE and DELETE operations
- update the free space map, which tracks free space in table blocks
- update the visibility map, which index-only scans require
- "freeze" table rows so that the transaction ID counter can safely wrap around

Depending on which of these functions is causing the problem, you need a different approach to tuning autovacuum.

Tuning autovacuum to clean up dead tuples
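As an illustration of the kind of tuning the article goes on to discuss (the table name and values are assumptions, not the author's recommendations), autovacuum can be adjusted per table via storage parameters:

    -- hypothetical example: let autovacuum trigger earlier and throttle less on a busy table
    ALTER TABLE busy_table SET (
        autovacuum_vacuum_scale_factor = 0.01,  -- vacuum once ~1% of the rows are dead
        autovacuum_vacuum_cost_delay   = 1      -- sleep time (ms) when the cost limit is reached
    );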

[Original] Deploying highly available Harbor on Kubernetes

Posted by 江枫思渺然 on 2020-12-21 07:52:20
## Preface

This article deploys a highly available Harbor following the official Harbor documentation. The main approach is listed below; adapt it to your own situation:

- Deploy a highly available PostgreSQL cluster. (This article uses Stolon to manage it; see "Deploying highly available PostgreSQL with Stolon on Kubernetes".)
- Deploy a highly available Redis cluster. (This article uses Helm for the Redis deployment; see "Deploying highly available Redis on Kubernetes", which includes ready-to-use Redis manifests.)
- Deploy a highly available Harbor cluster. (This article focuses on the Harbor high-availability deployment and supplements "Building Harbor on Kubernetes without pitfalls, and Harbor registry replication"; please read that first.)

## 1. Preparation before deploying Harbor

This article only covers the high-availability configuration; for the rest of the deployment see "Building Harbor on Kubernetes without pitfalls, and Harbor registry replication".

### Installation methods

- Helm installation
- Install directly with the manifests I have prepared (generated via Helm)

#### 1. Helm installation

For installing Helm, see "Building Harbor on Kubernetes without pitfalls, and Harbor registry replication", which covers the Helm installation.

##### 1.1 Download harbor-helm

    git clone https://github.com/goharbor/harbor-helm.git
    cd XXX/harbor-helm

##### 1.2 Modify value

PostgreSQL: dropping a user fails with ERROR: role "postgres1" cannot be dropped because some objects depend on it

Posted by 余生长醉 on 2020-12-20 11:10:23
Author: Highgo PG Lab (瀚高PG实验室) - Xu Yunhe

A user can be dropped with the following command:

    drop user postgres1;

If you get the message below, the user still owns objects:

    postgres=# drop user postgres1;
    ERROR:  role "postgres1" cannot be dropped because some objects depend on it

In that case you need the following two commands:

    drop owned by postgres1 cascade;
    drop user postgres1;

Replace postgres1 with the user you want to drop. Before dropping, make sure you are connected to the right database, and think twice before running the delete commands.

As of PG 13 there is no drop user postgres1 cascade; command, and there probably never will be.

Source: oschina. Link: https://my.oschina.net/u/4264465/blog/4816839
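As an aside to the tip above: if the user's objects should be kept rather than dropped, a gentler sequence (the target role name postgres is an assumption for illustration) is to reassign ownership first:

    -- run in every database where postgres1 owns objects
    REASSIGN OWNED BY postgres1 TO postgres;  -- hand the objects over to another role
    DROP OWNED BY postgres1;                  -- drops what remains: privileges and default ACLs
    DROP USER postgres1;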

Getting COUNT from sqlalchemy

Posted by 喜你入骨 on 2020-12-20 08:10:50
Question: I have:

    res = db.engine.execute('select count(id) from sometable')

The returned object is sqlalchemy.engine.result.ResultProxy. How do I get the count value from res? res is not accessed by index, but I have figured this out as:

    count = None
    for i in res:
        count = res[0]
        break

There must be an easier way, right? What is it? I haven't discovered it yet. Note: the db is a Postgres db.

Answer 1: While the other answers work, SQLAlchemy provides a shortcut for scalar queries as ResultProxy.scalar():

    count = db