postgresql-9.1

Endless loop in trigger function

六眼飞鱼酱① submitted on 2019-12-02 22:41:22
Question: This is a trigger function called on insert, update, or delete on a table. It is guaranteed that the calling table has all the columns involved and that a corresponding deletes table also exists.

CREATE OR REPLACE FUNCTION sample_trigger_func() RETURNS TRIGGER AS $$
DECLARE
    operation_code char;
    table_name varchar(50);
    delete_table_name varchar(50);
    old_id integer;
BEGIN
    table_name = TG_TABLE_NAME;
    delete_table_name = TG_TABLE_NAME || '_deletes';
    SELECT SUBSTR(TG_OP, 1, 1)::CHAR INTO operation_code;
    IF TG…
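The excerpt cuts the function off, but a frequent cause of such loops is the audit trigger also being attached to the _deletes table it writes into, so each audit insert fires the trigger again. A minimal sketch of one guard, not the poster's actual code, assuming the <table>_deletes naming convention above:

CREATE OR REPLACE FUNCTION sample_trigger_func() RETURNS TRIGGER AS $$
BEGIN
    IF TG_TABLE_NAME LIKE '%_deletes' THEN
        -- Already on an audit table: pass the row through instead of recursing.
        IF TG_OP = 'DELETE' THEN RETURN OLD; ELSE RETURN NEW; END IF;
    END IF;
    IF TG_OP = 'DELETE' THEN
        -- Copy the deleted row into the matching <table>_deletes table.
        EXECUTE format('INSERT INTO %I SELECT ($1).*', TG_TABLE_NAME || '_deletes')
            USING OLD;
        RETURN OLD;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;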

How to update a new module in OpenERP 7 on Ubuntu 12.0? [closed]

▼魔方 西西 submitted on 2019-12-02 22:40:48
Question: I have tried all the ways I know of for updating a new module in OpenERP 7 on Ubuntu 12.0. Is there any other way to update it? Can anyone help me?

Answer 1: Put your module under the addons/ directory, restart your server, then go to the OpenERP menu Setting -> Modules -> Update Modules List and…

Inserting Large Object into Postgresql returns 53200 Out of Memory error

微笑、不失礼 submitted on 2019-12-02 21:26:00
Question: PostgreSQL 9.1, Npgsql 2.0.12. I have binary data I want to store in a PostgreSQL database. Most files load fine; however, a large binary file (664 MB) is causing problems. When I try to load the file into PostgreSQL using Large Object support through Npgsql, the server returns an 'out of memory' error. I'm currently running this on a workstation with 4 GB RAM, 2 GB of which are free with PostgreSQL idle. This is the code I am using, adapted from the PG Foundry Npgsql User's…
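The excerpt stops before the client code, but one way to sidestep client-side memory pressure entirely, offered here as a sketch rather than the poster's fix, is to import the file server-side with lo_import. The table, column, and path names below are assumptions; the file must be readable by the server process, and server-side lo_import requires superuser rights in 9.1:

-- lo_import runs inside the server and returns the new large object's OID.
CREATE TABLE file_store (file_name text, file_oid oid);

INSERT INTO file_store (file_name, file_oid)
VALUES ('bigfile.bin', lo_import('/var/lib/postgresql/staging/bigfile.bin'));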

How do I save one piece of data in two databases using OpenERP?

五迷三道 submitted on 2019-12-02 20:21:29
Question: When editing code in OpenERP, is it possible to save one piece of data in two databases, where the fields have the same names in both tables?

Answer 1: Yes, it is quite possible if you know the databases, but let me warn you that this is a very bad idea: it is risky, and every rule of the persistence layer (ORM) will be violated.

Answer 2: What are you actually trying to do? If you are trying to synchronize, then use the module base_synchro. The module is not complete; you have to do your own…

Nearest places from a certain point

廉价感情. submitted on 2019-12-02 18:36:23
I have the following table:

create table places(lat_lng point, place_name varchar(50));
insert into places values (POINT(-126.4, 45.32), 'Food Bar');

What should the query be to get all places close to a particular lat/long? PostGIS is installed.

Answer: If you actually wanted to use PostGIS:

create table places(
  lat_lng geography(Point,4326),
  place_name varchar(50)
);

-- Two ways to make a geography point
insert into places values (ST_MakePoint(-126.4, 45.32), 'Food Bar1');
insert into places values ('POINT(-126.4 45.32)', 'Food Bar2');

-- Spatial index
create index places_lat_lng_idx on places using gist…
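Completing the thought with a nearest-places query against the geography column above; this is a minimal sketch, with the point of interest and result count chosen arbitrarily:

-- ST_Distance on geography returns metres; ORDER BY pulls the closest first.
SELECT place_name,
       ST_Distance(lat_lng, 'POINT(-126.4 45.32)'::geography) AS metres_away
FROM places
ORDER BY metres_away
LIMIT 5;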

Cannot connect to Postgres running on VM from host machine using MD5 method

泪湿孤枕 submitted on 2019-12-02 18:31:19
I have a VM set up with Vagrant that has Postgres running on it (on port 5432), forwarded to port 8280 on the host machine. I have set the password for the default user and I can connect locally just fine. I have been trying to set up access from the host machine over port 8280, but I have been unable to get it working with 'md5' as the auth method. I have set up postgresql.conf to listen on all addresses:

# postgresql.conf
listen_addresses = '*'

and I have configured pg_hba.conf as follows:

# pg_hba.conf
# TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
host    all       all   0.0.0.0/0     md5

With all of these…
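Two server-side checks worth running alongside those edits, sketched here under the assumption that the default postgres role is in use: md5 authentication always fails for a role with no password, and pg_hba.conf edits only take effect after a reload.

-- Give the role a password; md5 can never match an empty one.
ALTER USER postgres WITH PASSWORD 'secret';

-- Pick up pg_hba.conf changes without a restart
-- (a change to listen_addresses still needs a full restart).
SELECT pg_reload_conf();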

Postgres ENUM data type or CHECK CONSTRAINT?

懵懂的女人 submitted on 2019-12-02 17:16:20
Question: I have been migrating a MySQL db to Pg (9.1), and have been emulating MySQL ENUM data types by creating a new data type in Pg and then using that as the column definition. My question: could I, and would it be better to, use a CHECK CONSTRAINT instead? MySQL's ENUM types are implemented to restrict a column's entries to a specific set of values. Could that be done with a CHECK CONSTRAINT? And, if yes, would it be better (or worse)?

Answer (punkish): Based on the comments and answers here, and some rudimentary research, I have the following summary to offer for comments from the Postgres-erati. Will really…
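For concreteness, a minimal sketch of the two approaches being weighed; the type, table, and value names are invented:

-- Option 1: a dedicated ENUM type, mirroring MySQL's ENUM.
CREATE TYPE mood AS ENUM ('happy', 'sad', 'meh');
CREATE TABLE person_enum (name text, current_mood mood);

-- Option 2: a plain text column restricted by a CHECK constraint.
CREATE TABLE person_check (
    name text,
    current_mood text CHECK (current_mood IN ('happy', 'sad', 'meh'))
);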

Strange PostgreSQL “value too long for type character varying(500)”

匆匆过客 submitted on 2019-12-02 17:03:18
I have a Postgres schema in which the description column is declared as character varying(500). The problem is that whenever I save text longer than 500 characters to the description column I get the error:

value too long for type character varying(500)

The Postgres documentation says the text type can hold an unlimited number of characters. I'm using postgresql-9.1. This table was generated by Django 1.4, and the field type in the model is TextField, if that helps explain the problem. Any ideas as to why this is happening and what I can do to fix it?

Answer: By specifying the column as VARCHAR(500) you've set an explicit 500-character limit. You…
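The truncated answer points at the usual remedy: drop the length limit by converting the column to text, which is what Django's TextField normally maps to. A sketch with an assumed Django-style table name:

-- Convert the column in place; existing rows are kept as-is.
ALTER TABLE myapp_mymodel ALTER COLUMN description TYPE text;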

Select query with offset limit is too slow

爷,独闯天下 submitted on 2019-12-02 16:51:26
I have read in online resources that a query gets slower as the offset increases, but in my case I think it is far too slow. I am using Postgres 9.3. Here is the query (id is the primary key):

select * from test_table offset 3900000 limit 100;

It returns data in around 10 seconds, which I think is much too slow. I have around 4 million records in the table, and the overall size of the database is 23 GB.

Machine configuration:
RAM: 12 GB
CPU: 2.30 GHz
Cores: 10

A few values I have changed in the postgresql.conf file are below; the others are defaults.

shared_buffers = 2048MB
temp_buffers = 512MB
work…
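The standard workaround for a large OFFSET is keyset (seek) pagination: remember the last id the previous page returned and filter on it, so the primary-key index can jump straight to the right spot. A sketch, assuming pages are walked in id order:

-- OFFSET scans and discards 3.9 million rows; this WHERE clause does not.
-- The literal 3900000 stands in for the last id seen on the previous page.
SELECT *
FROM test_table
WHERE id > 3900000
ORDER BY id
LIMIT 100;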

PostgreSQL date() with timezone

烈酒焚心 submitted on 2019-12-02 16:34:11
I'm having an issue selecting dates properly from Postgres: they are stored in UTC, but they are not converting properly with the date() function. Converting the timestamp to a date gives me the wrong date if it's past 4pm PST; 2012-06-21 should be 2012-06-20 in this case. The starts_at column's datatype is timestamp without time zone. Here are my queries:

Without converting to the PST timezone:

Select starts_at from schedules where id = 40;

      starts_at
---------------------
 2012-06-21 01:00:00

Converting gives this:

Select (starts_at at time zone 'pst') from schedules where id = 40;

   timezone
---------…
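The output is cut off, but the usual resolution is a double conversion, sketched here assuming the stored values really are UTC: the first AT TIME ZONE tags the bare timestamp as UTC, the second shifts it to Pacific time, and the ::date cast then yields the local date.

SELECT (starts_at AT TIME ZONE 'UTC' AT TIME ZONE 'America/Los_Angeles')::date
FROM schedules
WHERE id = 40;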