query-optimization

PostgreSQL: manually change the query execution plan to force sort and sequential access instead of a full scan

Submitted by 我与影子孤独终老i on 2019-12-08 11:45:11
Question: I have a simple query like this:

    SELECT * FROM t1
    WHERE f1 > 42 AND f2 = 'foo' AND f3 = 'bar'
    ORDER BY f4 DESC
    LIMIT 10 OFFSET 100;

I have an index on field f4 (for other queries). The condition "f1 > 42 AND f2 = 'foo' AND f3 = 'bar'" is not selective: it matches about 70% of the records in table t1. The table holds about 2,000,000 records and grows every day. The query plan for this query shows a sequential scan over the entire table, followed by the sort and limit steps. Is it
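One direction worth sketching here (an assumption, not something stated in the question): a composite index whose leading columns cover the equality filters and whose trailing column matches the ORDER BY lets PostgreSQL read rows already sorted and stop after OFFSET + LIMIT rows. The enable_seqscan toggle is only a diagnostic tool for seeing what plan the planner would otherwise choose:

    -- Diagnosis only: make a sequential scan artificially expensive
    -- and compare the resulting plan. Never leave this on in production.
    BEGIN;
    SET LOCAL enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT * FROM t1
    WHERE f1 > 42 AND f2 = 'foo' AND f3 = 'bar'
    ORDER BY f4 DESC
    LIMIT 10 OFFSET 100;
    ROLLBACK;

    -- A more durable fix: equality columns first, sort column last,
    -- so the index delivers rows already in ORDER BY order.
    CREATE INDEX t1_f2_f3_f4_idx ON t1 (f2, f3, f4 DESC);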

Running total until a specific condition is true

Submitted by 我只是一个虾纸丫 on 2019-12-08 10:32:20
Question: I have a table representing the dealer's cards and their rank. I'm now trying to write a query (as fast as possible) to set the status of the game. (As said before, only the dealer's cards are shown.)

W = Win
S = Stand
L = Loss
B = Blackjack (in two cards)

About the rules: the dealer wins at 21; if it happens in two cards, it's blackjack. If the rank is between 17 and 20, it's S = stand. Over 21 is a loss. Ranks: 1 (ace) counts as 1 or 11, here counted as 11; 2-10 count as 2-10; 11-13 (knight to king) count as 10.
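A window-function running total is one way to avoid per-row logic for this. A minimal sketch, assuming a hypothetical dealer_cards(game_id, card_no, rank_value) table where rank_value already holds the counted value (ace = 11, court cards = 10):

    WITH totals AS (
      SELECT game_id, card_no,
             SUM(rank_value) OVER (PARTITION BY game_id
                                   ORDER BY card_no) AS running
      FROM dealer_cards
    ),
    stopped AS (
      -- the first card on which the dealer reaches 17 or more
      SELECT game_id, MIN(card_no) AS stop_card
      FROM totals
      WHERE running >= 17
      GROUP BY game_id
    )
    SELECT t.game_id,
           CASE
             WHEN t.running = 21 AND t.card_no = 2 THEN 'B'  -- blackjack in two cards
             WHEN t.running = 21                   THEN 'W'  -- win
             WHEN t.running > 21                   THEN 'L'  -- bust
             ELSE 'S'                                        -- stand on 17-20
           END AS status
    FROM totals t
    JOIN stopped s
      ON s.game_id = t.game_id AND s.stop_card = t.card_no;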

BaseX query optimization on join

Submitted by 放肆的年华 on 2019-12-08 08:11:55
Question: After the issue in the following Stack Overflow question was fixed, I ran into another problem when trying to perform the join below. The last query takes about 250 ms while the first two take only 16 ms. Is there a better way to perform a join between two items? Note: you can find the test data at this link.

    let $PlGeTys := /root/PlGeTys/PlGeTy[
      isOfPlCt/@href = /root/PlCts/PlCt[environment = 'AIR']/@id
    ]
    let $PlSpTys := /root/PlSpTys/PlSpTy[
      isOfPlGeTy/@href = $PlGeTys/@id
    ]
    for $PlGeTy in $PlGeTys, $PlSpTy in
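When BaseX cannot rewrite a predicate join into an index lookup, each comparison degenerates into a scan. One common workaround (a sketch under that assumption, reusing the question's element names) is to build an XQuery 3.1 map keyed on @id so the join becomes a hash lookup:

    (: Build the lookup once, then join PlSpTy to PlGeTy by key. :)
    let $geById := map:merge(
      for $g in /root/PlGeTys/PlGeTy
      return map:entry(string($g/@id), $g)
    )
    for $sp in /root/PlSpTys/PlSpTy
    let $ge := $geById(string($sp/isOfPlGeTy/@href))
    where exists($ge)
    return <pair geId="{$ge/@id}" spId="{$sp/@id}"/>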

SQLite table slow to fetch records with a LIKE query

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-08 07:44:41
Question: Scenario: the database is SQLite (records in the database need to be encrypted, hence the SQLCipher API is used on iOS). There is a table in the database named partnumber with the following schema:

    CREATE TABLE partnumber (
      objid varchar PRIMARY KEY,
      description varchar,
      make varchar,
      model varchar,
      partnumber varchar,
      SSOKey varchar,
      PMOKey varchar
    );

This table contains approximately 80K records. There are 3 text fields in the UI view in which the user can enter search terms, and searching is performed as soon as the user
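Ordinary indexes cannot accelerate a leading-wildcard LIKE ('%term%'); SQLite's full-text search modules usually can. A minimal sketch, assuming an FTS4-enabled build (SQLCipher is built on SQLite, so FTS is usually available) and a hypothetical search term:

    -- Mirror the searchable columns into an FTS table once,
    -- then keep it in sync with triggers or on writes.
    CREATE VIRTUAL TABLE partnumber_fts
      USING fts4(objid, description, make, model, partnumber);

    INSERT INTO partnumber_fts (objid, description, make, model, partnumber)
    SELECT objid, description, make, model, partnumber FROM partnumber;

    -- Prefix matching via MATCH is typically far faster than LIKE '%bolt%'.
    SELECT objid FROM partnumber_fts WHERE partnumber_fts MATCH 'bolt*';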

Join multiple tables with multiple groupings

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-08 06:35:56
Question: We have a passing control system, and every pass action is stored in an Event table in MS SQL Server. We want to join multiple tables with the Event table according to their relations, as shown in the image below. However, I am not sure whether the grouping approach I used is correct, because the query takes a lot of time. Could you please advise me on how to join these tables with multiple groupings? Here is the JOIN clause I used:

    SELECT t.CardNo, t.EventTime, t1.EmployeeName, t1.Status, t2
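When each joined table should contribute a single row per key (for example, the latest event per card), a windowed CTE is often faster than joining grouped subqueries. A sketch under that assumption, using the column names visible in the excerpt plus a hypothetical Employee table:

    WITH LastEvent AS (
      SELECT e.CardNo, e.EventTime,
             ROW_NUMBER() OVER (PARTITION BY e.CardNo
                                ORDER BY e.EventTime DESC) AS rn
      FROM Event AS e
    )
    SELECT le.CardNo, le.EventTime, emp.EmployeeName, emp.Status
    FROM LastEvent AS le
    JOIN Employee AS emp ON emp.CardNo = le.CardNo
    WHERE le.rn = 1;  -- keep only the most recent event per card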

Better query strategy to sort files by file hash frequency and file size

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-08 06:17:19
Question: I wrote this query without much thought, but as a beginner I'm almost sure it could be written better. Here it is:

    SELECT filehash, filename, filesize, group_files
    FROM files
    INNER JOIN (
      SELECT filehash AS group_id, COUNT(filehash) AS group_files
      FROM files
      GROUP BY filehash
    ) groups ON files.filehash = groups.group_id
    ORDER BY group_files DESC, filesize DESC;

Table definition:

    CREATE TABLE files (
      fileid INTEGER PRIMARY KEY AUTOINCREMENT,
      filename TEXT,
      filesize INTEGER,
      filehash TEXT
    );

Indexes
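On SQLite 3.25+ (which added window functions) the self-join can be dropped entirely; whether it actually wins depends on an index over filehash, so treat this as a sketch to benchmark rather than a guaranteed improvement:

    SELECT filehash,
           filename,
           filesize,
           COUNT(*) OVER (PARTITION BY filehash) AS group_files  -- per-hash count
    FROM files
    ORDER BY group_files DESC, filesize DESC;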

Optimizing row by row (cursor) processing in Oracle 11g

Submitted by 邮差的信 on 2019-12-08 05:36:05
Question: I have to process a large table (2.5B records) row by row in order to keep track of two variables. As one can imagine, this is quite slow. I am looking for ideas on how to tune this procedure. Thank you.

    declare
      cursor c_data is
        select /*+ index(data data_pk) */ *
        from data
        order by data_id;
      r_data    c_data%ROWTYPE;
      lst_b_prc number(15,8);
      lst_a_prc number(15,8);
    begin
      open c_data;
      loop
        fetch c_data into r_data;
        exit when c_data%NOTFOUND;
        if r_data.BATS = 'B' then
          lst_b_prc := r_data.PRC;
        end
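If the two variables are simply "last price seen per side", the whole loop can often collapse into one set-based pass with analytic functions. A sketch under that assumption (the 'A' branch is inferred from the variable name lst_a_prc, since the excerpt cuts off before it):

    SELECT data_id,
           -- carry the most recent 'B' price forward, row by row
           LAST_VALUE(CASE WHEN bats = 'B' THEN prc END IGNORE NULLS)
             OVER (ORDER BY data_id) AS lst_b_prc,
           -- same for the most recent 'A' price
           LAST_VALUE(CASE WHEN bats = 'A' THEN prc END IGNORE NULLS)
             OVER (ORDER BY data_id) AS lst_a_prc
    FROM data;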

Is this a good design for an audit table with tons of records?

Submitted by 泄露秘密 on 2019-12-08 05:08:22
Question: I have a table that tracks inventory data for each individual piece. This is a simplified version of the table (some non-key fields are excluded):

    UniqueID, ProductSKU, SerialNumber, OnHandStatus, Cost, DateTimeStamp

Every time something happens to a given piece, a new audit record is created. For example, the first time my product ABC is added to inventory, I get a record like this:

    1, ABC, 555, OnHand, $500, 01/01/2009 @ 02:05:22

If the cost of ABC serial number 555 changes, I get a new
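With an append-only audit table like this, the usual hot query is "current state of a given piece", which wants an index leading on SerialNumber. A T-SQL-flavored sketch of the described columns (the column types and the DBMS are assumptions; the excerpt names neither):

    CREATE TABLE InventoryAudit (
      UniqueID      BIGINT IDENTITY PRIMARY KEY,
      ProductSKU    VARCHAR(50)   NOT NULL,
      SerialNumber  VARCHAR(50)   NOT NULL,
      OnHandStatus  VARCHAR(20)   NOT NULL,
      Cost          DECIMAL(12,2) NOT NULL,
      DateTimeStamp DATETIME2     NOT NULL
    );

    -- Serves "latest audit row per piece" without scanning the history.
    CREATE INDEX IX_InventoryAudit_Serial
      ON InventoryAudit (SerialNumber, DateTimeStamp DESC);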

String-to-value comparison: optimizing a MySQL query

Submitted by 旧城冷巷雨未停 on 2019-12-08 03:56:31
Question: My problem is the following: I have two arrays, $first and $second, of the same length, containing strings. Every string is given a positive value in a table named Fullhandvalues:

    Field: board : string(7) PRIMARY KEY
    Field: value : int(11)

I want to count how many times $first[$i] has a better value than $second[$i], how many times they have the same value, and how many times $first[$i] has a worse value than $second[$i]. What I have done so far is to fetch all the values via $values[0]= DB:
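Rather than pulling every value into PHP and comparing there, the counting can be pushed into MySQL. A sketch, assuming the pairs are first bulk-loaded into a hypothetical temporary table; SUM over a boolean expression counts the rows where it is true:

    CREATE TEMPORARY TABLE pairs (
      first_board  VARCHAR(7) NOT NULL,
      second_board VARCHAR(7) NOT NULL
    );
    -- ... bulk-INSERT the ($first[$i], $second[$i]) pairs here ...

    SELECT SUM(v1.value > v2.value) AS first_better,
           SUM(v1.value = v2.value) AS equal_value,
           SUM(v1.value < v2.value) AS first_worse
    FROM pairs AS p
    JOIN Fullhandvalues AS v1 ON v1.board = p.first_board
    JOIN Fullhandvalues AS v2 ON v2.board = p.second_board;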

Speed up Django & Postgres with simple JSON field

Submitted by 荒凉一梦 on 2019-12-08 02:25:37
Question: I have a very, very complex model with lots of related models via FK and M2M, which in turn have lots of relations, and so on. Rendering a list of such objects is therefore a very expensive SQL operation, and I want to optimise it (select_related and prefetch_related help, but only a little). I have a maybe very stupid but very simple idea: define a save method that serializes all of the object's data to a field that stores JSON, to do something like this:

    class VeryComplexModel(models.Model):
        # some_field
        # some_field
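A minimal sketch of that idea, assuming Django 3.1+ (models.JSONField; older Django with Postgres has django.contrib.postgres.fields.JSONField) and a hypothetical build_repr() helper. One caveat: M2M rows exist only after the instance has a primary key, so the cache may also need refreshing from an m2m_changed signal:

    from django.db import models

    class VeryComplexModel(models.Model):
        # ... the real FK / M2M fields go here ...
        cached_repr = models.JSONField(default=dict, blank=True)  # denormalized render cache

        def save(self, *args, **kwargs):
            # build_repr() is a hypothetical helper that walks the relations
            # and returns a plain dict snapshot for cheap list rendering
            self.cached_repr = self.build_repr()
            super().save(*args, **kwargs)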