postgresql-9.1

How are import statements in plpython handled?

我们两清 submitted on 2019-11-30 04:54:25
I have a PL/Python function which does some JSON magic. For this it obviously imports the json library. Is the import called on every call to the function? Are there any performance implications I have to be aware of?

The import is executed on every function call. This is the same behavior you would get if you wrote a normal Python module with the import statement inside a function body as opposed to at the module level. Yes, this will affect performance. You can work around this by caching your imports like this: CREATE FUNCTION test() RETURNS text LANGUAGE plpythonu AS $$ if 'json' in SD:
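A minimal sketch of the SD-based caching pattern the answer is describing (the function name and return value are only illustrative); SD is the per-function dictionary PL/Python preserves between calls:

    CREATE FUNCTION test() RETURNS text
    LANGUAGE plpythonu
    AS $$
    # the import only runs on the first call; later calls reuse SD['json']
    if 'json' in SD:
        json = SD['json']
    else:
        import json
        SD['json'] = json
    return json.dumps({'answer': 42})
    $$;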

to_char(number) function in postgres

你说的曾经没有我的故事 submitted on 2019-11-30 04:42:50
I want to display/convert a number to a character string (of the same length) using the to_char() function. In Oracle I can write SELECT to_char(1234) FROM DUAL, but in PostgreSQL SELECT to_char(1234) is not working.

You need to supply a format mask; in PostgreSQL there is no default: select to_char(1234, 'FM9999'); If you don't know how many digits there are, just estimate the maximum: select to_char(1234, 'FM999999999999999999'); If the number has fewer digits, this won't have any side effects. If you don't need any formatting (like decimal point, thousands separator) you can also simply cast the
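For completeness, a sketch of both approaches; the table and column names are made up:

    -- format mask: the FM prefix suppresses the leading padding space
    SELECT to_char(1234, 'FM999999999999999999');

    -- plain cast, when no formatting (decimal point, separators) is needed
    SELECT amount::text FROM orders;
    SELECT cast(amount AS text) FROM orders;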

Postgres SELECT … FOR UPDATE in functions

不打扰是莪最后的温柔 submitted on 2019-11-30 04:23:34
I have two questions about using SELECT … FOR UPDATE row-level locking in a Postgres function.

Does it matter which columns I select? Do they have any relation to what data I need to lock and then update? SELECT * FROM table WHERE x=y FOR UPDATE; vs SELECT 1 FROM table WHERE x=y FOR UPDATE;

I can't do a SELECT in a function without saving the data somewhere, so I save it to a dummy variable. This seems hacky; is it the right way to do things? Here is my function: CREATE OR REPLACE FUNCTION update_message(v_1 INTEGER, v_timestamp INTEGER, v_version INTEGER) RETURNS void AS $$ DECLARE v_timestamp
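In PL/pgSQL a dummy variable isn't actually needed just to take the lock: PERFORM runs a query and discards its result. A minimal sketch, assuming a hypothetical messages table with an id column:

    CREATE OR REPLACE FUNCTION lock_message(v_1 integer) RETURNS void AS $$
    BEGIN
        -- PERFORM discards the result set, so nothing has to be stored;
        -- FOR UPDATE locks the matching rows regardless of the column list selected
        PERFORM 1 FROM messages WHERE id = v_1 FOR UPDATE;
        -- ... update the locked row here ...
    END;
    $$ LANGUAGE plpgsql;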

now() default values are all showing the same timestamp

。_饼干妹妹 submitted on 2019-11-30 02:52:47
I have created my tables with a column (type: timestamp with time zone) and set its default value to now() (current_timestamp). I run a series of inserts in separate statements in a single function, and I noticed that all the timestamps are equal down to the millisecond. Is the function value somehow cached and shared for the entire function call or transaction?

That is expected and documented behaviour. From the manual: Since these functions return the start time of the current transaction, their values do not change during the transaction. This is considered a feature: the intent is to allow a single
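A small sketch illustrating the difference; the table is hypothetical, and clock_timestamp() is the documented alternative when the actual wall-clock time of each statement is wanted:

    CREATE TABLE ts_demo (
        txn_time  timestamptz DEFAULT now(),              -- transaction start time
        stmt_time timestamptz DEFAULT clock_timestamp()   -- actual time of each insert
    );

    INSERT INTO ts_demo DEFAULT VALUES;
    INSERT INTO ts_demo DEFAULT VALUES;
    -- inside one transaction, txn_time is identical for both rows, stmt_time differs
    SELECT * FROM ts_demo;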

cannot create extension without superuser role

流过昼夜 submitted on 2019-11-29 22:50:40
I'm trying to run unit tests in Django, and it creates a new database. The database uses the PostGIS extension, and when I create the database normally I use CREATE EXTENSION postgis. However, when I run tests, it gives me the following error:

    $ ./manage.py test
    Creating test database for alias 'default'...
    Got an error creating the test database: database "test_project" already exists
    Type 'yes' if you would like to try deleting the test database 'test_project', or 'no' to cancel: yes
    Destroying old test database 'default'...
    DatabaseError: permission denied to create extension "postgis"
    HINT:
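One common workaround from this era, run once as a superuser outside of Django: install PostGIS into a template database so that freshly created test databases inherit the extension without the test role needing superuser rights. A sketch with an illustrative database name:

    -- as a superuser, build a template that already contains PostGIS
    CREATE DATABASE template_postgis;
    UPDATE pg_database SET datistemplate = true WHERE datname = 'template_postgis';
    \c template_postgis
    CREATE EXTENSION postgis;
    -- databases created from this template inherit the extension, e.g.
    -- CREATE DATABASE test_project TEMPLATE template_postgis;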

How do I convert an integer to string as part of a PostgreSQL query?

走远了吗. submitted on 2019-11-29 22:44:21
How do I convert an integer to string as part of a PostgreSQL query? So, for example, I need: SELECT * FROM table WHERE <some integer> = 'string of numbers' where <some integer> can be anywhere from 1 to 15 digits long.

Because the number can be up to 15 digits, you'll need to cast to a 64-bit (8-byte) integer. Try this: SELECT * FROM table WHERE myint = mytext::int8 The :: cast operator is historical but convenient. Postgres also conforms to the SQL-standard syntax myint = cast(mytext as int8). If you have literal text you want to compare with an int, cast the int to text: SELECT * FROM
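A sketch of the text-side cast the truncated answer is heading toward; the table and column names are hypothetical:

    -- cast the integer column to text and compare as strings
    SELECT * FROM accounts WHERE account_no::text = '1234567890';
    -- SQL-standard spelling of the same cast
    SELECT * FROM accounts WHERE cast(account_no AS text) = '1234567890';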

Postgresql batch insert or ignore

廉价感情. submitted on 2019-11-29 22:06:04
Question: I have the responsibility of switching our code from SQLite to Postgres. One of the queries I am having trouble with is copied below. INSERT INTO group_phones(group_id, phone_name) SELECT g.id, p.name FROM phones AS p, groups as g WHERE g.id IN ($add_groups) AND p.name IN ($phones); The problem arises when there is a duplicate record. In this table the combination of both values must be unique. I have used a few plpgsql functions in other places to do update-or-insert operations, but in this
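On 9.1 there is no ON CONFLICT clause (that arrived in 9.5), so the usual "insert or ignore" shape is an anti-join against the target table. A sketch built on the query from the question, keeping the $add_groups/$phones placeholders as they are; note it is not race-free under concurrent writers:

    INSERT INTO group_phones (group_id, phone_name)
    SELECT g.id, p.name
    FROM phones AS p, groups AS g
    WHERE g.id IN ($add_groups)
      AND p.name IN ($phones)
      -- skip combinations that already exist
      AND NOT EXISTS (
          SELECT 1 FROM group_phones gp
          WHERE gp.group_id = g.id AND gp.phone_name = p.name
      );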

Drop column doesn't remove column references entirely - postgresql

对着背影说爱祢 submitted on 2019-11-29 18:50:37
I have a table that contains 1600 columns and I would like to add more fields to it, but that is not allowed because the column limit has been reached. So I decided to drop a few unwanted fields from the table, and I did. When I then tried to add fields to the table again, it raised the same error: there are already 1600 columns and you can't add more. I looked at the PostgreSQL catalog table pg_attribute, and all of those dropped fields are still listed there with the dropped flag set to true. What I have tried so far: drop constraints; copy the table data into another table; Truncate
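Background worth knowing here: the column limit counts dropped columns too, because DROP COLUMN only marks the attribute as dropped (attisdropped) rather than physically removing it. The usual way to reclaim the slots is to rewrite the table; a sketch with hypothetical names:

    -- recreate the table with only the live columns, then swap it in
    BEGIN;
    CREATE TABLE wide_table_new AS SELECT col_a, col_b, col_c FROM wide_table;
    DROP TABLE wide_table;
    ALTER TABLE wide_table_new RENAME TO wide_table;
    -- constraints, indexes, defaults and privileges must be recreated manually
    COMMIT;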

Hibernate startup very slow

半城伤御伤魂 submitted on 2019-11-29 16:43:55
Question: For some reason, the startup of my Hibernate application is unbearably slow (up to 2 minutes). I have been thinking that the c3p0 configuration is plain wrong (related question), but studying the logs shows that there is no activity just after the connection to the server is established. Also, using the built-in pooling capabilities of Hibernate shows the same result. Here is a snippet from the logs:

    20:06:51,248 DEBUG BasicResourcePool:422 - decremented pending_acquires: 0
    20:06:51,248 DEBUG

Update a table with a trigger after update

╄→尐↘猪︶ㄣ submitted on 2019-11-29 16:01:10
I have two tables: batch (batch_id, start_date, end_date, batch_strength, is_locked) and sem (user_id, is_active, no_of_days). I executed the trigger procedure given below and then updated the table with the query shown after it. CREATE OR REPLACE FUNCTION em_batch_update() RETURNS trigger AS $em_sem_batch$ BEGIN UPDATE batch set is_locked='TRUE' where (start_date + (select no_of_days from sem WHERE is_active='TRUE' and user_id='OSEM') ) <= current_date; return NEW; END; $em_sem_batch$ LANGUAGE plpgsql; CREATE TRIGGER em_sem_batch BEFORE UPDATE ON batch FOR EACH ROW EXECUTE PROCEDURE em_batch_update(); update em_batch set
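Worth noting for this kind of setup: the trigger fires on UPDATE of batch and then issues another UPDATE on batch, which re-fires the trigger and can recurse until PostgreSQL aborts the statement. In a BEFORE UPDATE row trigger the row being updated can instead be modified through NEW. A minimal sketch reusing the names from the question, assuming the goal is to lock the row currently being updated:

    CREATE OR REPLACE FUNCTION em_batch_update() RETURNS trigger AS $em_sem_batch$
    BEGIN
        -- modify the row in flight instead of updating the table again
        IF (NEW.start_date + (SELECT no_of_days FROM sem
                              WHERE is_active = 'TRUE' AND user_id = 'OSEM')) <= current_date THEN
            NEW.is_locked := 'TRUE';
        END IF;
        RETURN NEW;
    END;
    $em_sem_batch$ LANGUAGE plpgsql;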