query-optimization

MySQL InnoDB: long query execution time

Submitted by 大憨熊 on 2019-12-13 02:14:06
Question: I'm having trouble running this SQL query. I think it's an index problem, but I don't know, because I didn't design this database and I'm just a simple programmer. The problem is that the table has 64,260 records, so the query goes crazy when executed; I have to stop MySQL and start it again because the computer freezes. Thanks. EDIT: table schema: CREATE TABLE IF NOT EXISTS `value_magnitudes` ( `id` int(11) NOT NULL AUTO_INCREMENT, `value` float DEFAULT NULL, `magnitude_id` int(11) DEFAULT NULL, `sdi
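The slow statement itself is cut off in this excerpt, so the following is only a hedged illustration: the usual first step is to run EXPLAIN on it and, if that shows a full table scan, add an index on the filtered column. The column (magnitude_id) is taken from the visible part of the schema; the filter value is a placeholder.

-- Hedged sketch: check whether the slow statement does a full scan of value_magnitudes
EXPLAIN SELECT id, value FROM value_magnitudes WHERE magnitude_id = 234;

-- If it does, an index on the filtered column usually helps, even on a 64k-row InnoDB table
ALTER TABLE value_magnitudes ADD INDEX idx_vm_magnitude (magnitude_id);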

Delete records from one table using another table?

Submitted by 强颜欢笑 on 2019-12-13 02:07:11
Question: Note to the editors: please edit the title if you have a better one :) My question is: I have two tables in my database, table1 and table2, each with columns id and text. table1 has 600,000 records and table2 has 5,000,000 records !! :) What is the best way to delete all the records in table2 that are not in table1? I mean, by the way, the fastest way, because I don't want to wait 4 hours for the process to complete. Do you have
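The excerpt cuts off before any answer, but for this shape of problem a multi-table DELETE with an anti-join is a common approach in MySQL; this sketch uses only the simplified two-column schema described above.

-- Delete every row of table2 whose id has no match in table1 (anti-join form)
DELETE t2
FROM table2 AS t2
LEFT JOIN table1 AS t1 ON t1.id = t2.id
WHERE t1.id IS NULL;

With 5,000,000 rows, an index on table1.id (typically its primary key) and deleting in batches are the usual ways to keep both the run time and the lock time down.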

How to optimize a painfully slow MySQL query that finds correlations

Submitted by 做~自己de王妃 on 2019-12-12 19:39:54
Question: I have a very slow (usually close to 60 seconds) MySQL query that tries to find correlations between how users voted in one poll and how they voted in all previous polls. Basically, we gather the user IDs of everyone who voted for one particular option in a given poll. Then we see how that subgroup voted in each previous poll, and compare those results to how EVERYONE (not just the subgroup) voted in that poll. The difference between the subgroup results and the total results is the deviation
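The actual schema and query are not included in the excerpt; the sketch below only illustrates the computation described (subgroup counts next to overall counts in a single pass), using a hypothetical votes(user_id, poll_id, option_id) table and placeholder ids.

-- Count everyone's votes and, in the same pass, the votes cast by the subgroup
-- (users who picked option 4 in poll 123 -- placeholder ids)
SELECT v.poll_id,
       v.option_id,
       COUNT(*) AS total_votes,
       SUM(v.user_id IN (SELECT user_id
                         FROM votes
                         WHERE poll_id = 123 AND option_id = 4)) AS subgroup_votes
FROM votes AS v
WHERE v.poll_id < 123   -- "previous" approximated here by a lower poll_id (assumption)
GROUP BY v.poll_id, v.option_id;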

Oracle Date index is slow. Query is 300 times faster without it

Submitted by 允我心安 on 2019-12-12 19:28:37
Question: I had an Oracle query, shown below, that took 10 minutes or longer to run: select r.range_text as duration_range, nvl(count(c.call_duration),0) as calls, nvl(SUM(call_duration),0) as total_duration from call_duration_ranges r left join big_table c on c.call_duration BETWEEN r.range_lbound AND r.range_ubound and c.aaep_src = 'MAIN_SOURCE' and c.calltimestamp_local >= to_date('01-02-2014 00:00:00','dd-MM-yyyy HH24:mi:ss') AND c.calltimestamp_local <= to_date('28-02-2014 23:59:59','dd-MM-yyyy HH24
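The excerpt ends mid-query, so only as a hedged suggestion: when a single-column date index makes things worse, a composite index that leads with the equality filter and follows with the range and measured columns is a common next thing to try; the column names below are taken from the visible query text.

-- Composite index on big_table: equality column first, range column second,
-- and the measured column included so the join condition can be evaluated from the index
CREATE INDEX ix_big_table_src_ts_dur
  ON big_table (aaep_src, calltimestamp_local, call_duration);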

Oracle 11g PL/SQL: position of CONSTANT variables in a PACKAGE

Submitted by 馋奶兔 on 2019-12-12 19:20:39
Question: I have a purely optimization-related question: where in my PACKAGE should I place CONSTANT variables when the procedure/function is called many times? Let's look at this: CREATE OR REPLACE PACKAGE WB_TEST IS PROCEDURE TEST; END WB_TEST; CREATE OR REPLACE PACKAGE BODY WB_TEST IS FUNCTION PARSER(IN_PARAM IN VARCHAR2) RETURN VARCHAR2 IS LC_MSG CONSTANT VARCHAR2(80) := 'Hello USERNAME! How are you today?'; LC_PARAM CONSTANT VARCHAR2(10) := 'USERNAME'; BEGIN RETURN REPLACE(LC_MSG, LC_PARAM, IN_PARAM);
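In the excerpt the constants are declared inside the function, so they are re-elaborated on every call. The variant the question is asking about -- declaring them at package-body level so they are initialized once per session -- would look roughly like this (a sketch; the TEST procedure body is not shown in the excerpt, so it is stubbed out).

CREATE OR REPLACE PACKAGE BODY WB_TEST IS
  -- Package-level constants: initialized once when the package is first used
  -- in a session, not on every call to PARSER
  LC_MSG   CONSTANT VARCHAR2(80) := 'Hello USERNAME! How are you today?';
  LC_PARAM CONSTANT VARCHAR2(10) := 'USERNAME';

  FUNCTION PARSER(IN_PARAM IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN REPLACE(LC_MSG, LC_PARAM, IN_PARAM);
  END PARSER;

  PROCEDURE TEST IS
  BEGIN
    NULL; -- body not shown in the excerpt
  END TEST;
END WB_TEST;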

How to optimize a block of Qt code involving a huge number of SQL queries?

Submitted by 馋奶兔 on 2019-12-12 17:43:42
Question: I am working on a Qt (C++) project that involves a huge number of SQL queries. Basically, it's a function update() that gets called ~1000 times. Each call takes around 25-30 ms on my system, resulting in a massive 30 seconds of total execution time. I believe this routine can be optimized to take less time, but I don't know how to optimize it. Here is the function: void mediaProp::update(){ static QSqlQuery q1, q2, q3; static bool firstCall = true; static QString stable;
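The body of update() is cut off, so only as a general, hedged illustration: when ~1000 small statements each cost 25-30 ms, the two usual levers are preparing each statement once and re-executing it with bound values (QSqlQuery::prepare()/bindValue()), and grouping the whole batch into one transaction (QSqlDatabase::transaction()/commit()). At the SQL level the transaction part amounts to the following; the table and column names are placeholders, and the underlying engine is not named in the excerpt.

-- Without an explicit transaction each statement commits (and syncs) on its own;
-- wrapping the batch makes it a single commit
START TRANSACTION;
UPDATE media_properties SET duration = 120 WHERE media_id = 1;
UPDATE media_properties SET duration = 95  WHERE media_id = 2;
-- ... repeated for the rest of the batch
COMMIT;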

Is parameter binding implemented correctly in the pymssql library?

Submitted by ∥☆過路亽.° on 2019-12-12 16:55:53
Question: I'm calling an extremely simple query from a Python program using the pymssql library. with self.conn.cursor() as cursor: cursor.execute('select extra_id from mytable where id = %d', id) extra_id = cursor.fetchone()[0] Note that parameter binding is used, as described in the pymssql documentation. One of the main goals of parameter binding is to give the DBMS engine the ability to cache the query plan. I connected to MS SQL with Profiler and checked which queries are actually executed. It turned out that
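The excerpt cuts off right before the Profiler findings. As a point of comparison only: a call that SQL Server can parameterize and plan-cache shows up in a trace as sp_executesql with a typed parameter, roughly like the sketch below, whereas client-side binding shows up as the literal value spliced into the statement text. The parameter value here is a placeholder.

-- What server-side parameterization of the question's query would look like in a trace
EXEC sp_executesql
     N'select extra_id from mytable where id = @id',
     N'@id int',
     @id = 42;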

Optimizing MySQL query to avoid scanning a lot of rows

Submitted by 假装没事ソ on 2019-12-12 10:54:15
Question: I am running an application that uses tables similar to the ones below. There is one table for articles and another table for tags. I want to get the latest 30 articles for a specific tag, ordered by article id. For example, for "acer", the query below will do the job, but it is not indexed correctly, because it will scan a lot of rows if there are a lot of articles related to a specific tag. How can I run a query that gets the same result without scanning a large number of rows? EXPLAIN
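The table definitions and the EXPLAIN output are cut off in this excerpt, so the sketch below assumes a typical article-tag link table (the real names are not visible). The point it illustrates is that a composite index whose leading column is the tag and whose trailing column is the article id lets MySQL read the 30 newest matching rows straight from the index and stop, instead of scanning every row for that tag.

-- Hypothetical link table; real table and column names are not shown in the excerpt
ALTER TABLE article_tags ADD INDEX idx_tag_article (tag, article_id);

SELECT article_id
FROM article_tags
WHERE tag = 'acer'
ORDER BY article_id DESC
LIMIT 30;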

Optimizing ROW_NUMBER() in SQL Server

Submitted by 穿精又带淫゛_ on 2019-12-12 10:43:08
Question: We have a number of machines which record data into a database at sporadic intervals. For each record, I'd like to obtain the time period between this recording and the previous recording. I can do this using ROW_NUMBER as follows: WITH TempTable AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS Ordering FROM dbo.DataTable ) SELECT [Current].*, Previous.Date_Time AS PreviousDateTime FROM TempTable AS [Current] INNER JOIN TempTable AS Previous ON [Current]
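The query in the excerpt is cut off mid-join. One commonly suggested alternative on SQL Server 2012 and later is LAG(), which avoids materializing and self-joining the numbered CTE; this is a sketch using the column names visible in the question.

-- Previous recording time per machine in a single pass over the table
SELECT d.*,
       LAG(d.Date_Time) OVER (PARTITION BY d.Machine_ID
                              ORDER BY d.Date_Time) AS PreviousDateTime
FROM dbo.DataTable AS d;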

Suitable indexes for sorting in ranking functions

Submitted by 时光怂恿深爱的人放手 on 2019-12-12 09:45:20
Question: I have a table which keeps parent-child relations between items. Those can change over time, and it is necessary to keep a complete history so that I can query what the relations were at any given time. The table is something like this (I removed some columns, the primary key, etc. to reduce noise): CREATE TABLE [tblRelation]( [dtCreated] [datetime] NOT NULL, [uidNode] [uniqueidentifier] NOT NULL, [uidParentNode] [uniqueidentifier] NOT NULL ) My query to get the relations at a specific time is
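The query itself is cut off in the excerpt. As a hedged sketch only: for "state of the relations as of time T" lookups, an index whose key starts with the node and is ordered by the change timestamp, carrying the parent as an included column, is the kind of index such a query can usually seek on; the index name below is made up.

-- Covering index for "latest relation row per node at or before a given time"
CREATE INDEX IX_tblRelation_Node_Created
  ON tblRelation (uidNode, dtCreated DESC)
  INCLUDE (uidParentNode);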