Can PostgreSQL 9.1 leak locks? (out of shared memory/increase max_pred_locks_per_transaction)


Question


We recently upgraded to PostgreSQL 9.1.6 (from 8.3). Our test server indicated that max_pred_locks_per_transaction should be set at least as high as 900 (which is way beyond the default setting of 64).

We're now in production, and I've had to increase this parameter many times, as our log will start filling with:

ERROR:  53200: out of shared memory
HINT:  You might need to increase max_pred_locks_per_transaction.

With max_connections set to 600 (though our pooling system never goes over 100 clients):

max_pred_locks_per_transaction: We went to 3000. Ran out in about a day. Went to 9000, ran out in about 3 days.

I now have it set to 30000, and since this number is the average allocated per allowed client connection, that works out to around 5 GB of shared memory dedicated to lock space!
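(For reference, the shared predicate-lock table is sized at roughly max_pred_locks_per_transaction multiplied by max_connections entries, which is why raising the setting with 600 allowed connections gets so expensive. A minimal sketch to check the two inputs on the running server:)

-- Minimal sketch: show the settings whose product roughly determines
-- the size of the shared predicate-lock table.
SELECT name, setting
FROM pg_settings
WHERE name IN ('max_pred_locks_per_transaction', 'max_connections');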

I do have shared_buffers set rather high (24 GB at the moment), which is over 40% of RAM. (I plan to tune this down to about 25% of RAM at the next restart.)

EDIT: This tuning turned out to be a bad idea. My database is hit with a lot of heavy queries, and having half of a large amount of RAM dedicated to shared_buffers keeps it from choking, as it can cache the larger tables completely.

On average, I see somewhere around 5-10 active queries at a time. Our query load far outstrips our update load.

Anybody care to tell me how I might track down what is going wrong here? With such a small update set, I really can't figure out why we are running out of locks so often...it really does smell like a leak to me.

Anyone know how to examine where the locks are going? (e.g. how might I read the content of pg_locks with respect to this issue)
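(A minimal sketch of what reading pg_locks for this looks like: SIRead predicate locks appear with mode 'SIReadLock', so counting them per relation and per backend shows where they are accumulating.)

-- Minimal sketch: count SIRead (predicate) locks per table and per backend.
-- pid is NULL for locks held by prepared transactions.
SELECT relation::regclass AS table_name, pid, count(*) AS siread_locks
FROM pg_locks
WHERE mode = 'SIReadLock'
GROUP BY relation, pid
ORDER BY siread_locks DESC
LIMIT 20;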


Answer 1:


This sounds like it is likely to be caused by a long-running transaction. Predicate locks for one transaction cannot be released until all overlapping read-write transactions complete. This includes prepared transactions.

Take a look at both pg_stat_activity and pg_prepared_xacts for any transactions which started (or were prepared) more than a few minutes ago.
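A minimal sketch of that check on 9.1 (where pg_stat_activity still uses the procpid and current_query column names; later releases renamed them to pid and query):

-- Minimal sketch: transactions that have been open for more than five minutes.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start < now() - interval '5 minutes'
ORDER BY xact_start;

-- Prepared transactions that were never committed or rolled back.
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
WHERE prepared < now() - interval '5 minutes'
ORDER BY prepared;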

The only other probable, non-bug explanation I can think of is that you have tables with hundreds or thousands of partitions.

If neither of these explanations makes sense, I would love to get my hands on a reproducible test case. Is there any way to create tables, populate them with queries using generate_series() and make this happen in a predictable way? With such a test case I can definitely track down the cause.
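(As a rough skeleton of what such a test case might look like, not something known to reproduce the problem: a table filled with generate_series(), read and updated from overlapping SERIALIZABLE transactions. Table and column names here are made up.)

-- Skeleton only; not known to reproduce the issue.
CREATE TABLE pred_lock_test (id int PRIMARY KEY, val int);
INSERT INTO pred_lock_test
SELECT g, g FROM generate_series(1, 100000) AS g;

-- Then, in two or more concurrent sessions:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT sum(val) FROM pred_lock_test WHERE id % 10 = 0;
UPDATE pred_lock_test SET val = val + 1 WHERE id = 42;
-- ...leave the transaction open for a while, then:
COMMIT;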




Answer 2:


According to http://www.progtown.com/topic1203868-error-out-of-shared-memory.html, it might make sense to reduce the work_mem configuration parameter.
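A minimal sketch of lowering it (the 16MB value is only illustrative):

-- Minimal sketch: lower work_mem for the current session.
SET work_mem = '16MB';
-- Or set work_mem = 16MB in postgresql.conf and reload:
SELECT pg_reload_conf();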

See also https://dba.stackexchange.com/questions/27893/increasing-work-mem-and-shared-buffers-on-postgres-9-2-significantly-slows-down for additional details.



Source: https://stackoverflow.com/questions/12946715/can-postgresql-9-1-leak-locks-out-of-shared-memory-increase-max-pred-locks-per
