query-performance

Performance of Delta E (CIE Lab) calculation and sorting in SQL

I have a database table where each row is a color. My goal: given an input color, calculate its distance to each color in the DB table, and sort the results by that distance. Or, stated as a user story: when I choose a color, I want to see a list of the colors that are most similar to the one that I picked, with the closest matches at the top of the list. I understand that, in order to do this, the various Delta E (CIE Lab) formulae are the best choice. I wasn't able to find any native SQL implementations of the formulae, so I wrote my own SQL versions of Delta E CIE 1976 and Delta E CIE 2000.
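
For reference, here is a minimal sketch of the simpler of the two formulae, Delta E CIE 1976, which is plain Euclidean distance in Lab space. It assumes a hypothetical colors table with precomputed L*a*b* components stored in columns l, a, b, and an input color supplied through MySQL user variables:

    SET @l := 53.2, @a := 80.1, @b := 67.2;  -- example input color in Lab space

    SELECT id,
           SQRT(POW(l - @l, 2) + POW(a - @a, 2) + POW(b - @b, 2)) AS delta_e_76
    FROM colors
    ORDER BY delta_e_76
    LIMIT 20;

Delta E CIE 2000 adds lightness, chroma, and hue weighting terms on top of this, which is why a hand-rolled SQL version of it is the harder performance problem.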

Changing from varchar to mediumtext causes performance degradation

I have a table which stores product reviews for a website. The table uses varchar(1000) to store the review comment, and the average page response time is 0.5 seconds. I changed the datatype of the column that holds the data to mediumtext, and the page response time jumps to 1.5-2 seconds. Bear in mind that no additional data was added to the column and the PHP code is the same. I don't think the query time is the issue, as MySQL reports it takes 0.019 secs, which is the same whether the column is varchar or mediumtext.
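
To make the comparison concrete, here is a hedged sketch of the schema change in question (the table and column names are assumed). The key difference is capacity, not the stored data: varchar(1000) is capped at 1,000 characters and stored inline, while mediumtext allows up to 16MB, which affects how client drivers size their fetch buffers and, on many MySQL versions, forces on-disk internal temporary tables whenever a query involving the column needs one (the MEMORY engine used for in-memory temp tables does not support TEXT columns):

    -- before: inline, bounded column
    ALTER TABLE reviews MODIFY comment VARCHAR(1000);

    -- after: up to 16MB, treated as a BLOB-like type internally
    ALTER TABLE reviews MODIFY comment MEDIUMTEXT;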

Why is query with phone = N'1234' slower than phone = '1234'?

I have a field which is a varchar(20). When this query is executed, it is fast (uses an index seek): SELECT * FROM [dbo].[phone] WHERE phone = '5554474477' But this one is slow (uses an index scan): SELECT * FROM [dbo].[phone] WHERE phone = N'5554474477' I am guessing that if I change the field to an nvarchar, then it would use the index seek. Martin Smith: Because nvarchar has higher datatype precedence than varchar, SQL Server needs to perform an implicit cast of the column to nvarchar, and this prevents an index seek. Under some collations it is still able to use a seek.
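
A hedged sketch of the usual fix, assuming the dbo.phone table above with an index on the varchar(20) phone column: keep the literal (or parameter) the same type as the column so the predicate stays sargable.

    -- Fast: literal type matches the column, index seek
    SELECT * FROM [dbo].[phone] WHERE phone = '5554474477';

    -- Slow: the N'...' Unicode literal forces an implicit cast of the
    -- column to nvarchar, index scan
    SELECT * FROM [dbo].[phone] WHERE phone = N'5554474477';

    -- If the value arrives as nvarchar (e.g. from application code),
    -- cast the parameter rather than letting the engine cast the column:
    DECLARE @p nvarchar(20) = N'5554474477';
    SELECT * FROM [dbo].[phone] WHERE phone = CAST(@p AS varchar(20));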

SQLite: Should LIKE 'searchstr%' use an index?

I have a DB with the fields word_id (INTEGER PRIMARY KEY), word (TEXT), and a few others, with ~150k rows. Since this is a dictionary, I'm searching for a word with the mask 'search_string%' using LIKE. It used to work just fine, taking 15ms to find matching rows, and the table has an index on the 'word' field. Recently I modified the table (in ways that are out of scope here) and something happened: the query now takes 400ms to execute, so I take it that it fails to use the index now. A straightforward query with = instead of LIKE shows a 10ms result. Does someone have an idea what's happening?
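
For context, SQLite can only use an index for LIKE 'prefix%' when the index's collation matches LIKE's case sensitivity, which is the usual reason this optimization silently stops working. A minimal sketch of the two standard remedies, assuming a words table matching the schema above:

    -- Option 1: keep the default case-insensitive LIKE, but give the
    -- optimizer a NOCASE-collated index it is allowed to use.
    CREATE INDEX idx_word_nocase ON words(word COLLATE NOCASE);

    -- Option 2: make LIKE case-sensitive so a plain BINARY index qualifies.
    PRAGMA case_sensitive_like = ON;
    CREATE INDEX idx_word ON words(word);

    -- Verify which path the query takes:
    EXPLAIN QUERY PLAN
    SELECT word_id, word FROM words WHERE word LIKE 'search_string%';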

Does wildcard in left-most column of composite index mean remaining columns in index aren't used in index lookup (MySQL)?

Imagine you have a composite primary key of (last_name, first_name). Then you perform a search of WHERE first_name LIKE 'joh%' AND last_name LIKE 'smi%'. Does the wildcard used in the last_name condition mean that the first_name condition will not be used in further helping MySQL narrow the index lookup? In other words, by putting a wildcard on the last_name condition, will MySQL only do a partial index lookup and ignore conditions on the columns to the right of last_name?
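
A small sketch that makes the behavior visible (the table name is assumed). With a composite key on (last_name, first_name), the B-tree seek can use last_name LIKE 'smi%' as a range, but first_name values are only ordered within each exact last_name, so first_name LIKE 'joh%' is applied as a filter on the scanned index entries rather than as part of the seek (MySQL can still evaluate it inside the index via Index Condition Pushdown, so it is not useless):

    CREATE TABLE people (
      last_name  VARCHAR(50) NOT NULL,
      first_name VARCHAR(50) NOT NULL,
      PRIMARY KEY (last_name, first_name)
    );

    EXPLAIN SELECT *
    FROM people
    WHERE last_name LIKE 'smi%' AND first_name LIKE 'joh%';
    -- type: range, key: PRIMARY; the first_name condition shows up as
    -- "Using where" / "Using index condition", not as part of the seek.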

Should I sacrifice my innodb_buffer_pool_size/RAM to make space for query_cache_size?

I have a dedicated MySQL database server with 16GB of RAM. My innodb_buffer_pool_size is set to around 11GB, and I am implementing a query cache in my system with a size of 80MB. Where should I take this space from: innodb_buffer_pool_size or RAM? RolandoMySQLDBA: Back in June 2014 I answered https://dba.stackexchange.com/questions/66774/why-query-cache-type-is-disabled-by-default-start-from-mysql-5-6/66796#66796 In that post, I discussed how InnoDB micromanages changes between the InnoDB Buffer Pool and the Query Cache. NOT USING THE QUERY CACHE: The simplest answer would be to just disable the query cache.
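
A minimal sketch of that "just disable it" option on MySQL 5.6/5.7 (the query cache was removed entirely in MySQL 8.0), which leaves all of the contested space with the buffer pool:

    # my.cnf
    [mysqld]
    query_cache_type = 0
    query_cache_size = 0

    -- or at runtime:
    SET GLOBAL query_cache_type = OFF;
    SET GLOBAL query_cache_size = 0;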

How to: Change actual execution method from “row” to “batch” - Azure SQL Server

I am having some major issues. When inserting data into my database, I am using an INSTEAD OF INSERT trigger which performs a query. On my TEST database, this query takes much less than 1 second for the insert of a single row. In production, however, this query takes MUCH longer (> 30 seconds for 1 row). Comparing the execution plans for both of them shows some clear differences. Test has: "Actual Execution Method: Batch" Prod has: "Actual Execution Method: Row" Test has: "Actual number of rows: 1" Prod has: "Actual number of rows: 92.000.000"
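
For background on the plan difference: before "batch mode on rowstore" (compatibility level 150, i.e. SQL Server 2019 and recent Azure SQL Database), batch mode is generally only considered when a columnstore index exists on a table in the plan. A hedged sketch of the two common levers, with table, column, and database names assumed:

    -- Option 1: a filtered, deliberately empty nonclustered columnstore
    -- index whose only job is to make batch mode eligible:
    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_batch_unlock
    ON dbo.big_table (some_column)
    WHERE some_column = -1 AND some_column = -2;  -- contradictory filter keeps it empty

    -- Option 2: raise the compatibility level so batch mode on rowstore
    -- can kick in without any columnstore index:
    ALTER DATABASE [MyDb] SET COMPATIBILITY_LEVEL = 150;

Separately, the 92.000.000 actual rows in production suggest the trigger's query is scanning far more data there than in test, so stale statistics or a missing index are worth ruling out independently of the execution mode.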

How to improve this MySQL query using a join?

I have got a simple query and it takes more than 14 seconds:

    select e.title, e.date, v.name, v.city, v.region, v.country
    from seminar e force index for join (venueid)
    left join venues v on e.venueid = v.id
    where v.country = 'US'
      and v.city = 'New York'
      and v.region = 'NY'
      and e.date > curdate()
      and e.someid != 0

Note: count(e.id) stood in as an abbreviation for debugging purposes; in fact we get information from both tables. Explain gives this: (EXPLAIN output truncated)
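
Since the WHERE clause filters on venues columns, the LEFT JOIN is effectively an INNER JOIN (rows with no matching venue are discarded by v.country = 'US' anyway). A hedged sketch of the usual rewrite, with assumed index names, letting the optimizer start from the small filtered venues set instead of forcing the join order through seminar:

    CREATE INDEX idx_venues_loc ON venues (country, region, city);
    CREATE INDEX idx_seminar_venue_date ON seminar (venueid, date);

    SELECT e.title, e.date, v.name, v.city, v.region, v.country
    FROM venues v
    JOIN seminar e ON e.venueid = v.id
    WHERE v.country = 'US'
      AND v.region = 'NY'
      AND v.city = 'New York'
      AND e.date > CURDATE()
      AND e.someid != 0;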

Switching from FOR loops in plpgsql to set-based SQL commands

I've got a quite heavy query with a FOR loop to rewrite, and I would like to do it more simply, using more SQL instead of plpgsql constructs. The query looks like:

    FOR big_xml IN SELECT unnest(xpath('//TAG1', my_xml)) LOOP
        str_xml = unnest(xpath('/TAG2/TYPE/text()', big_xml));
        FOR single_xml IN SELECT unnest(xpath('/TAG2/single', big_xml)) LOOP
            CASE str_xml::INT
                WHEN 1 THEN INSERT INTO tab1(id, xml) VALUES (1, single_xml);
                WHEN 2 THEN INSERT INTO tab2(id, xml) VALUES (1, single_xml);
                WHEN 3 [...]
                WHEN 11 [...]
                ELSE RAISE EXCEPTION 'something';
            END CASE;
        END LOOP;
    END LOOP;
    RETURN xmlelement(NAME "out",
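
A hedged, set-based sketch of the same logic, using only the tab1/tab2 branches shown above (the branches for types 3 through 11 are elided just as in the original). Each INSERT ... SELECT unnests the XML in the FROM clause, replacing both loops, and the WHERE clause plays the role of the CASE:

    INSERT INTO tab1 (id, xml)
    SELECT 1, single_xml
    FROM unnest(xpath('//TAG1', my_xml)) AS big_xml,
         unnest(xpath('/TAG2/single', big_xml)) AS single_xml
    WHERE (xpath('/TAG2/TYPE/text()', big_xml))[1]::text::int = 1;

    INSERT INTO tab2 (id, xml)
    SELECT 1, single_xml
    FROM unnest(xpath('//TAG1', my_xml)) AS big_xml,
         unnest(xpath('/TAG2/single', big_xml)) AS single_xml
    WHERE (xpath('/TAG2/TYPE/text()', big_xml))[1]::text::int = 2;

    -- ...and so on for types 3 through 11; the RAISE EXCEPTION branch
    -- would become a separate check for TYPE values outside the range.

In PostgreSQL 9.3+ the second unnest is an implicit LATERAL reference to big_xml, which is what lets the nested loop collapse into a single FROM clause.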