query-performance

Why does a slight change in the search term slow down the query so much?

百般思念 submitted on 2019-12-01 19:11:41
Question: I have the following query in PostgreSQL (9.5.1):

select e.id,
       (select count(id) from imgitem ii where ii.tabid = e.id and ii.tab = 'esp') as imgs,
       e.ano, e.mes, e.dia,
       cast(cast(e.ano as varchar(4))||'-'||right('0'||cast(e.mes as varchar(2)),2)||'-'||
            right('0'||cast(e.dia as varchar(2)),2) as varchar(10)) as data,
       pl.pltag, e.inpa, e.det, d.ano anodet,
       coalesce(p.abrev,'')||' ('||coalesce(p.prenome,'')||')' determinador,
       d.tax,
       coalesce(v.val,v.valf)||' '||vu.unit as altura,
       coalesce(v1.val
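The query above is only an excerpt, so as a sketch: the usual first step with this kind of regression is to run both the fast and the slow variant under EXPLAIN (ANALYZE, BUFFERS) and compare which plan node changes. Only the imgitem table and its tabid/tab columns come from the excerpt; the literal id value below is hypothetical.

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(id)
FROM imgitem ii
WHERE ii.tabid = 1234          -- hypothetical id value
  AND ii.tab = 'esp';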

Is it worth switching a PRIMARY KEY from NVARCHAR to INT?

坚强是说给别人听的谎言 submitted on 2019-12-01 18:22:55
On our SQL Server 2008 R2 database we have a COUNTRIES reference table that contains countries. The PRIMARY KEY is an nvarchar column:

create table COUNTRIES(
    COUNTRY_ID nvarchar(50) PRIMARY KEY,
    ... other columns
)

The primary key contains values like 'FR', 'GER', 'US', 'UK', etc. This table contains at most 20 rows. We also have a SALES table containing sales data:

create table SALES(
    ID int PRIMARY KEY,
    COUNTRY_ID nvarchar(50),
    PRODUCT_ID int,
    DATE datetime,
    UNITS decimal(18,2),
    ... other columns
)

This sales table contains a column named COUNTRY_ID, also of type nvarchar (not a primary key).
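For comparison, here is a minimal sketch of the two alternatives usually weighed in this situation. It reuses the column names from the excerpt; COUNTRY_NUM_ID, COUNTRY_CODE and the constraint layout are illustrative assumptions, not part of the original schema.

-- Option A: keep the natural key but make it narrow (country codes are short ASCII strings)
create table COUNTRIES(
    COUNTRY_ID char(3) not null primary key
    -- ... other columns
)

-- Option B: integer surrogate key, referenced from SALES
create table COUNTRIES(
    COUNTRY_NUM_ID int not null primary key,      -- hypothetical surrogate key
    COUNTRY_CODE char(3) not null unique          -- 'FR', 'GER', 'US', ...
    -- ... other columns
)

create table SALES(
    ID int primary key,
    COUNTRY_NUM_ID int not null references COUNTRIES(COUNTRY_NUM_ID),
    PRODUCT_ID int,
    DATE datetime,
    UNITS decimal(18,2)
    -- ... other columns
)

The narrow key mainly saves space in SALES and in any index on its country column; with only ~20 rows in COUNTRIES, lookup speed in the countries table itself is rarely the issue.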

Multi Thread in SQL?

馋奶兔 submitted on 2019-12-01 08:50:24
I have a SQL query like:

SELECT Column1, Column2, Column3,
       ufn_HugeTimeProcessFunction(Column1, Column2, @Param1) As Column4
FROM Table1

This ufn_HugeTimeProcessFunction function runs against a large table (in terms of number of rows) and there are several calculations behind it to return a value. Am I able to force the SQL compiler to run that function in another thread (process)?

Edited: Basically that function gets the data from 3 different databases. That's why I am planning to run it "in parallel"; moreover, it is not possible to change the indexes on the other databases. If the server computer on
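Not part of the question, but one workaround often suggested for slow scalar UDFs on SQL Server of that era is to rewrite the function as an inline table-valued function and invoke it with CROSS APPLY, since scalar UDFs are evaluated row by row and usually block a parallel plan. The body below is purely illustrative; the real logic of ufn_HugeTimeProcessFunction is not shown in the excerpt.

-- Hypothetical inline TVF standing in for the scalar UDF
CREATE FUNCTION dbo.itvf_HugeTimeProcess (@Col1 int, @Col2 int, @Param1 int)
RETURNS TABLE
AS RETURN
(
    SELECT SUM(x.SomeValue) AS Column4            -- illustrative calculation
    FROM OtherDb.dbo.SomeTable AS x               -- hypothetical source table
    WHERE x.Key1 = @Col1 AND x.Key2 = @Col2
);
GO

SELECT t.Column1, t.Column2, t.Column3, f.Column4
FROM Table1 AS t
CROSS APPLY dbo.itvf_HugeTimeProcess(t.Column1, t.Column2, @Param1) AS f;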

MySQL: SELECT query execution and result fetch time increase with the number of connections

戏子无情 submitted on 2019-12-01 06:26:38
My server application makes multiple connections to MySQL through separate threads. Each connection fires a SELECT query and fetches the result, which the application then serves back to its connected users. I am using InnoDB. To my surprise, I found it very weird that if I increase the number of connections to MySQL, query performance deteriorates and result fetch time also increases. Below is a table showing this. The data was produced when I had 3333 records in the MySQL table, and the SELECT query, based on random parameters given to it, fetches around 450 of them. Each record has around 10
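As a first diagnostic step (a sketch, not taken from the question), it can help to check whether the slowdown comes from server-side contention rather than from the query itself; these are standard MySQL status and system variables.

-- How many threads are actively executing vs. merely connected
SHOW GLOBAL STATUS LIKE 'Threads_running';
SHOW GLOBAL STATUS LIKE 'Threads_connected';

-- InnoDB settings that commonly bound concurrent SELECT throughput
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_thread_concurrency';

If Threads_running stays far below the number of open connections while fetch time keeps growing, the bottleneck is more likely on the client or network side than in query execution.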

MySQL query with JOIN not using INDEX

…衆ロ難τιáo~ submitted on 2019-12-01 05:10:57
Question: I have the following two tables in MySQL (simplified).

clicks (InnoDB)
- Contains roughly 70,000,000 records
- Has an index on the date_added column
- Has a column link_id which refers to a record in the links table

links (MyISAM)
- Contains far fewer records, roughly 65,000

I'm trying to run some analytical queries using these tables. I need to pull out some data about clicks that occurred between two specified dates, while applying some other user-selected filters using other tables and
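The query itself is cut off above, so as a sketch only: a common fix when a JOIN ignores an index on a large table is to give the optimizer one composite index that covers both the range filter and the join column, then confirm the plan with EXPLAIN. The aggregate and the l.id column below are illustrative assumptions.

-- Composite index covering the date-range filter and the join column
ALTER TABLE clicks ADD INDEX idx_clicks_date_link (date_added, link_id);

-- Check which index the join actually uses
EXPLAIN
SELECT l.id, COUNT(*) AS click_count
FROM clicks c
JOIN links l ON l.id = c.link_id
WHERE c.date_added BETWEEN '2019-01-01' AND '2019-02-01'
GROUP BY l.id;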

Why isn't my PostgreSQL array index getting used (Rails 4)?

大兔子大兔子 submitted on 2019-11-29 16:59:40
I've got a PostgreSQL array of strings as a column in a table. I created an index using the GIN method. But ANY queries won't use the index (instead, they're doing a sequential scan of the whole table with a filter). What am I missing? Here's my migration:

class CreateDocuments < ActiveRecord::Migration
  def up
    create_table :documents do |t|
      t.string :title
      t.string :tags, array: true, default: []
      t.timestamps
    end
    add_index :documents, :tags, using: 'gin'
    (1..100000).each do |i|
      tags = []
      tags << 'even' if (i % 2) == 0
      tags << 'odd' if (i % 2) == 1
      tags << 'divisible by 3' if (i % 3) == 0
      tags
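As a side note not taken from the excerpt: PostgreSQL's default GIN operator class for arrays supports the containment and overlap operators (@>, <@, &&, =), not the ANY construct, so rewriting the predicate is the usual fix. A minimal sketch against the documents table defined above:

-- Typically planned as a sequential scan: ANY over the array column
SELECT * FROM documents WHERE 'even' = ANY (tags);

-- Can use the GIN index: array containment
SELECT * FROM documents WHERE tags @> ARRAY['even']::varchar[];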

Execute multiple functions together without losing performance

被刻印的时光 ゝ submitted on 2019-11-29 09:43:15
I have this process that has to make a series of queries, using pl/pgsql:

--process:
SELECT function1();
SELECT function2();
SELECT function3();
SELECT function4();

To be able to execute everything in one call, I created a process function as such:

CREATE OR REPLACE FUNCTION process() RETURNS text AS
$BODY$
BEGIN
  PERFORM function1();
  PERFORM function2();
  PERFORM function3();
  PERFORM function4();
  RETURN 'process ended';
END;
$BODY$
LANGUAGE plpgsql;

The problem is, when I sum the time that each function takes by itself, the total is 200 seconds, while the time that the function process() takes
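One idea often floated for this symptom (a sketch under the assumption that later functions read data written by earlier ones): run each step through dynamic SQL, optionally refreshing statistics in between, so each statement is planned when it runs rather than with plans prepared at the start of process(). The intermediate table name is hypothetical.

CREATE OR REPLACE FUNCTION process() RETURNS text AS
$BODY$
BEGIN
  EXECUTE 'SELECT function1()';
  EXECUTE 'ANALYZE some_intermediate_table';   -- hypothetical table populated by function1()
  EXECUTE 'SELECT function2()';
  EXECUTE 'SELECT function3()';
  EXECUTE 'SELECT function4()';
  RETURN 'process ended';
END;
$BODY$
LANGUAGE plpgsql;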

How to FULL OUTER JOIN multiple tables in MySQL

别等时光非礼了梦想. submitted on 2019-11-29 07:59:56
I need to FULL OUTER JOIN multiple tables. I know how to FULL OUTER JOIN two tables from here. But I have several tables, and I can't apply it over them. How can I achieve it? My SQL code, below:

INSERT INTO table (customer_id, g01, g02, g03, has_card, activity)
SELECT sgd.customer_id, sgd.g01, sgd.g02, sgd.g03, sc.value, a.activity
FROM s_geo_data sgd
LEFT JOIN s_category sc ON sc.customer_id = sgd.customer_id
UNION
SELECT sgd.customer_id, sgd.g01, sgd.g02, sgd.g03, sc.value, a.activity
FROM s_geo_data sgd
RIGHT JOIN s_category sc ON sc.customer_id = sgd.customer_id
UNION
SELECT sgd.customer_id,
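A pattern that generalizes the two-table emulation to several tables (a sketch with illustrative tables t1, t2, t3, all keyed by customer_id, since MySQL has no FULL OUTER JOIN): build the union of all keys first, then LEFT JOIN each table to that key list.

SELECT k.customer_id, t1.a, t2.b, t3.c
FROM (
    SELECT customer_id FROM t1
    UNION
    SELECT customer_id FROM t2
    UNION
    SELECT customer_id FROM t3
) AS k
LEFT JOIN t1 ON t1.customer_id = k.customer_id
LEFT JOIN t2 ON t2.customer_id = k.customer_id
LEFT JOIN t3 ON t3.customer_id = k.customer_id;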