database-performance

Performance issues using H2 DB in embedded mode with heavy load of data in database

匆匆过客 submitted on 2019-12-03 00:31:52
I am working on a Java application that uses an H2 database in embedded mode. My application consumes 150 MB of heap memory. The problem: when I load the H2 database with 2 MB of data, database access is fast and the heap is about 160 MB. But when I load the H2 database with 30 MB of data (the H2 DB file size is 30 MB), accessing the database from my application becomes very slow; the reason is that the application heap grows hugely, to 300 MB, which degrades performance. I confirmed this using JConsole. So my understanding is that since H2 is written in Java, and since I am using the H2 database in embedded
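Because embedded H2 keeps its page cache inside the application's JVM heap, capping that cache is a common first mitigation. A minimal sketch; the 16384 KB figure is an illustrative assumption, not a value from the question:

-- Limit H2's page cache to roughly 16 MB (the value is in KB).
-- The same setting can go in the JDBC URL, e.g. jdbc:h2:./data/app;CACHE_SIZE=16384
SET CACHE_SIZE 16384;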

Database partitioning - Horizontal vs Vertical - Difference between Normalization and Row Splitting?

空扰寡人 submitted on 2019-12-02 14:30:11
I am trying to grasp the different concepts of database partitioning, and this is what I have understood so far. Horizontal partitioning/sharding: splitting a table into different tables that each contain a subset of the rows of the initial table (an example I have seen a lot is splitting a Users table by continent: one sub-table for North America, another for Europe, etc.), with each partition in a different physical location (read: 'machine'). As I understand it, horizontal partitioning and sharding are exactly the same thing(?). Vertical partitioning: from what I
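To make the contrast concrete, here is a minimal SQL sketch using hypothetical users tables (all names and columns are illustrative, not from the question):

-- Horizontal partitioning: identical columns, rows split by a key (continent)
CREATE TABLE users_north_america (id INT PRIMARY KEY, name VARCHAR(100), email VARCHAR(100));
CREATE TABLE users_europe        (id INT PRIMARY KEY, name VARCHAR(100), email VARCHAR(100));

-- Vertical partitioning (row splitting): columns split across tables
-- that share the same primary key
CREATE TABLE users_core    (id INT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE users_details (id INT PRIMARY KEY REFERENCES users_core(id), bio TEXT);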

PostgreSQL CROSS JOIN indexing for performance

蹲街弑〆低调 submitted on 2019-12-02 10:05:54
This is the second part of my question. I have the following table:

CREATE TABLE public.main_transaction
(
  id integer NOT NULL DEFAULT nextval('main_transaction_id_seq'::regclass),
  profile_id integer NOT NULL,
  request_no character varying(18),
  user_id bigint,
  .....
  CONSTRAINT main_transaction_pkey PRIMARY KEY (id),
  CONSTRAINT fk_main_transaction_user_id FOREIGN KEY (user_id)
      REFERENCES public.jhi_user (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  REFERENCES public.main_profile (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED,
  CONSTRAINT
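The join query itself is cut off, but for join performance over this table the first candidates to index are the foreign-key columns used for joining and filtering. A hedged sketch; the index names and column choices are assumptions based only on the visible schema:

-- id is already covered by main_transaction_pkey
CREATE INDEX idx_main_transaction_profile_id ON public.main_transaction (profile_id);
CREATE INDEX idx_main_transaction_user_id    ON public.main_transaction (user_id);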

WHERE-CASE clause Subquery Performance

无人久伴 submitted on 2019-12-02 05:12:19
Question: This question may be specific to SQL Server. When I write a query such as:

SELECT *
FROM IndustryData
WHERE Date = '20131231'
  AND ReportTypeID = CASE WHEN (fnQuarterDate('20131231') = '20131231') THEN 1
                          WHEN (fnQuarterDate('20131231') != '20131231') THEN 4
                     END;

is the function call fnQuarterDate (or any subquery) within the CASE inside the WHERE clause executed for EACH row of the table? Would it be better if I fetched the function's (or any subquery's) value beforehand into a variable, like:
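The excerpt is cut off, but the variable-based rewrite it alludes to would look something like this (a sketch; @ReportType is an assumed name, not from the question):

DECLARE @ReportType int;
SET @ReportType = CASE WHEN fnQuarterDate('20131231') = '20131231' THEN 1 ELSE 4 END;

SELECT *
FROM IndustryData
WHERE Date = '20131231'
  AND ReportTypeID = @ReportType;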

Can MySql nested SP be a bottleneck?

牧云@^-^@ submitted on 2019-12-02 02:11:46
We have this MySQL stored procedure, which calls a nested SP. It seems it does NOT perform well under load. Is it possible that this SP becomes slow under load because it calls a nested SP and uses temporary tables to pass the data to the main SP?

DELIMITER $$
drop procedure if exists `GeoAreaFlattened_Select`;
create procedure `GeoAreaFlattened_Select`(
    _areas MEDIUMTEXT,
    _comparisonGroup varchar(21844),
    _parentArea varchar(21844),
    _areaType varchar(21844)
)
begin
    drop temporary table if exists areas;

    -- areas
    call CreateAreas(_areas, _comparisonGroup, _parentArea, _areaType);

    SELECT areas.ID, areas.Code,
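One thing worth checking in setups like this: a temporary table filled by one procedure and joined in another has no indexes unless they are created explicitly. A hedged sketch; the areas schema beyond ID and Code is not shown, so the column choice is an assumption:

-- Inside CreateAreas (or right after the CALL), index the join key so the
-- outer SELECT does not scan the temporary table for every lookup
ALTER TABLE areas ADD INDEX idx_areas_id (ID);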

Which database design gives better performance?

旧街凉风 submitted on 2019-12-01 13:32:52
Question: I want to run selects to retrieve persons, and also make some inserts, deletes, and updates. If I want to retrieve the persons who live in Brazil, what is the best approach? Two foreign keys, city and country, in the person table:

Person(id, name, profession, **id_country**, **id_city**)
cities (id, city, **id_country**)
countries (id, country)

or just one foreign key to cities in the person table and another foreign key to country in the cities table:

Person(id, name, profession, **id_city**)
cities (id, city,
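For reference, under the second (more normalized) design the "lives in Brazil" query needs one extra join; a sketch using the table names from the question:

SELECT p.*
FROM Person p
JOIN cities    c  ON c.id = p.id_city
JOIN countries co ON co.id = c.id_country
WHERE co.country = 'Brazil';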

Postgres: Surprising performance on updates using cursor

一曲冷凌霜 submitted on 2019-12-01 11:52:01
Consider the two following Python code examples, which achieve the same result but with a significant and surprising performance difference.

import psycopg2, time

conn = psycopg2.connect("dbname=mydatabase user=postgres")
cur = conn.cursor('cursor_unique_name')   # server-side (named) cursor
cur2 = conn.cursor()

startTime = time.clock()
cur.execute("SELECT * FROM test for update;")
print("Finished: SELECT * FROM test for update;: " + str(time.clock() - startTime))

for i in range(100000):
    cur.fetchone()
    cur2.execute("update test set num = num + 1 where current of cursor_unique_name;")

print("Finished: update starting commit: " +
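The excerpt cuts off before the second example, but the obvious set-based counterpart to the cursor loop above, for comparison, is a single statement (an assumption about the omitted code, not text from the question):

UPDATE test SET num = num + 1;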

Oracle Autoincrement Functionality: Triggers or Oracle JDBC CallableStatement in 11.2?

烂漫一生 submitted on 2019-12-01 11:24:08
What is the best way (in terms of insert performance) to implement autoincrement functionality in Oracle (11.2) when you need to retrieve the newly generated key using JDBC? I know there are identity columns in Oracle 12, but I'm stuck with 11.2 right now. Like many others, I have had no luck getting JDBC's getGeneratedKeys() to work with Oracle. I ended up with a trigger in my Oracle (11.2) database that acts like MySQL's autoincrement: it inserts the NEXTVAL from a table-specific sequence as the primary key whenever there is an insert into that table. This made getting
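The trigger pattern described is the classic pre-12c idiom; a minimal sketch with hypothetical names (my_table, my_table_seq):

CREATE SEQUENCE my_table_seq;

CREATE OR REPLACE TRIGGER my_table_bi
BEFORE INSERT ON my_table
FOR EACH ROW
WHEN (new.id IS NULL)
BEGIN
  SELECT my_table_seq.NEXTVAL INTO :new.id FROM dual;
END;
/

Where trigger overhead matters, the usual JDBC-friendly alternative on 11.2 is INSERT ... RETURNING id INTO ?, executed through a CallableStatement, which returns the key in the same round trip.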

MySQL Explain rows limit

别来无恙 submitted on 2019-12-01 11:02:52
Below is my query to get 20 rows with genre_id 1:

EXPLAIN SELECT * FROM (`content`) WHERE `genre_id` = '1' AND `category` = 1 LIMIT 20

I have 654 rows in total in the content table with genre_id 1, and I have an index on genre_id. In the query above I limit the result to 20 records, which works fine, but EXPLAIN shows 654 under rows. I tried adding an index on category, but got the same result; I also removed AND category = 1, but the rows count stayed the same:

id  select_type  table    type  possible_keys  key       key_len  ref    rows  Extra
1   SIMPLE       content  ref   genre_id       genre_id  4        const  654   Using where
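If the goal is a tighter access path rather than a different estimate, a composite index covering both predicates is the usual experiment; a sketch (the index name is an assumption):

CREATE INDEX idx_genre_category ON `content` (`genre_id`, `category`);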
