database-performance

Rails: How to split write/read query across master/slave database

眉间皱痕 submitted on 2019-12-17 20:57:27
Question: My website has very heavy read traffic, much heavier than its write traffic. To improve the performance of my website I have considered a master/slave database configuration. The Octopus gem seems to provide what I want, but since my app is huge I can't go through millions of lines of source code to change the query distribution (sending read queries to the slave server and write queries to the master server). MySQL Proxy seems to be a great way to resolve this issue, but since it is in alpha version

LOWER LIKE vs iLIKE

无人久伴 submitted on 2019-12-17 17:32:25
Question: How does the performance of the following two query components compare? LOWER LIKE: LOWER(description) LIKE '%abcde%' versus ILIKE: description ILIKE '%abcde%' Answer 1: The answer depends on many factors, such as Postgres version, encoding, and locale (LC_COLLATE in particular). The bare expression lower(description) LIKE '%abc%' is typically a bit faster than description ILIKE '%abc%', and either is a bit faster than the equivalent regular expression: description ~* 'abc'. This matters for
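The equivalence the answer relies on (lower() plus LIKE matching case-insensitively, which is what ILIKE does natively in Postgres) can be sketched on a small hypothetical table. SQLite via Python is used here only as a stand-in, since ILIKE itself is Postgres-only:

```python
import sqlite3

# Hypothetical table and rows, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (description TEXT)")
conn.executemany(
    "INSERT INTO items VALUES (?)",
    [("Some ABCDE widget",), ("nothing here",), ("abcde lowercase",)],
)

# lower(description) LIKE '%abcde%' matches regardless of case,
# which is the behavior Postgres's ILIKE provides natively.
rows = conn.execute(
    "SELECT description FROM items WHERE lower(description) LIKE '%abcde%'"
).fetchall()
print(len(rows))  # 2
```

Note that in Postgres, a pattern with a leading wildcard like '%abcde%' cannot use a plain B-tree index in either form; a trigram index (pg_trgm) is the usual way to avoid a sequential scan.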

How big can a MySQL database get before performance starts to degrade

妖精的绣舞 submitted on 2019-12-17 03:44:29
Question: At what point does a MySQL database start to lose performance? Does physical database size matter? Does the number of records matter? Is any performance degradation linear or exponential? I have what I believe to be a large database, with roughly 15M records taking up almost 2GB. Based on these numbers, is there any incentive for me to clean the data out, or am I safe to let it continue scaling for a few more years? Answer 1: The physical database size doesn't matter. The number of records don

SQL: converting row data into columns using a single table

空扰寡人 submitted on 2019-12-14 03:57:45
Question: I am trying to convert rows of a DB table into columns, using the PIVOT function with cursors. Here is the SQL:

DECLARE Cur CURSOR FOR SELECT DISTINCT CounterId FROM AllCounterIds
DECLARE @Temp NVARCHAR(MAX), @AllCounterIds NVARCHAR(MAX), @CounterIdQuery NVARCHAR(MAX)
SET @AllCounterIds = ''
OPEN Cur -- Getting all the movies
FETCH NEXT FROM Cur INTO @Temp
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @AllCounterIds = @AllCounterIds + '[' + @Temp + '],'
    FETCH NEXT FROM Cur INTO @Temp
END
CLOSE Cur
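The cursor loop above only concatenates a bracketed column list for a later dynamic PIVOT. A sketch of what that loop produces, using hypothetical CounterId values and SQLite via Python as a stand-in:

```python
import sqlite3

# Hypothetical AllCounterIds contents, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AllCounterIds (CounterId TEXT)")
conn.executemany("INSERT INTO AllCounterIds VALUES (?)",
                 [("101",), ("102",), ("101",), ("103",)])

# Equivalent of the WHILE loop: one bracketed entry per distinct CounterId,
# each followed by a comma, exactly as the cursor builds @AllCounterIds.
ids = [r[0] for r in conn.execute(
    "SELECT DISTINCT CounterId FROM AllCounterIds ORDER BY CounterId")]
all_counter_ids = "".join(f"[{i}]," for i in ids)
print(all_counter_ids)  # [101],[102],[103],
```

In T-SQL itself, a set-based STRING_AGG (SQL Server 2017+) or a FOR XML PATH construction typically replaces such a cursor for building this list.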

Improving DELETE and INSERT times on a large table that has an index structure

元气小坏坏 submitted on 2019-12-14 03:50:48
Question: Our application manages a table containing a per-user set of rows that is the result of a computationally intensive query. Storing this result in a table seems like a good way of speeding up further calculations. The structure of that table is basically the following:

CREATE TABLE per_user_result_set (
    user_login VARCHAR2(N),
    result_set_item_id VARCHAR2(M),
    CONSTRAINT result_set_pk PRIMARY KEY (user_login, result_set_item_id)
);

A typical user of our application will have this result set

Does the MySQL IN clause execute the subquery multiple times?

我怕爱的太早我们不能终老 submitted on 2019-12-13 13:39:12
Question: Given this SQL query in MySQL: SELECT * FROM tableA WHERE tableA.id IN (SELECT id FROM tableB); Does MySQL execute the subquery SELECT id FROM tableB multiple times, once for each row in tableA? Is there a way to make the SQL go faster without using variables or stored procedures? Why is this often slower than using LEFT JOIN? Answer 1: Your assumption is false; the subquery will be executed only once. The reason why it's slower than a join is because IN can't take advantage of indexes; it has to scan its
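The IN form and an equivalent join can be compared on a toy dataset (SQLite via Python as a stand-in for MySQL; table contents are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableA (id INTEGER, val TEXT);
CREATE TABLE tableB (id INTEGER);
INSERT INTO tableA VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO tableB VALUES (2), (3), (4);
""")

# The subquery form from the question ...
in_rows = conn.execute(
    "SELECT * FROM tableA WHERE tableA.id IN (SELECT id FROM tableB)"
).fetchall()

# ... and an equivalent join; DISTINCT guards against row duplication
# if tableB ever contains repeated ids.
join_rows = conn.execute(
    "SELECT DISTINCT tableA.* FROM tableA JOIN tableB ON tableA.id = tableB.id"
).fetchall()

print(sorted(in_rows) == sorted(join_rows))  # True: both return rows 2 and 3
```

The two queries return the same rows; which one the optimizer executes faster depends on the engine and version, which is what the answer is getting at.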

Verifying uniqueness on a batch save with a huge number of objects - Parse Performance

妖精的绣舞 submitted on 2019-12-13 08:14:46
Question: I'm doing a Parse batch save request as follows: Parse.Object.saveAll(nameGenderArrayToSave) where nameGenderArrayToSave is an array with thousands of objects to save. I'm also interested in guaranteeing the uniqueness of my data, so I have a beforeSave hook that does it for me:

Parse.Cloud.beforeSave("NameGender", function(request, response) {
    if (!request.object.isNew()) {
        // Let existing object updates go through
        response.success();
    }
    var query = new Parse.Query("NameGender");
    // Add query filters to

How to tune a self-join on a table in MySQL like this?

随声附和 submitted on 2019-12-13 06:14:09
Question: I have this table, from which I'm trying to select each row's from-date and to-date. The query took 2 minutes to run on 4 million records. I'm not sure how much more I can squeeze out of this query.

SELECT c.fk_id, c.from_date, c.fk_pb, MIN(o.from_date) AS to_date
FROM TABLE_X c
INNER JOIN TABLE_X o
    ON c.fk_id = o.fk_id AND c.fk_pb = o.fk_pb
WHERE o.from_date > c.from_date
GROUP BY c.fk_id, c.from_date, c.fk_pb

There are already indexes on from_date, fk_pb and fk_id. The schema is like this. +-------------------------
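What the query computes (each row's to_date is the next from_date within the same (fk_id, fk_pb) group) can be checked on a tiny made-up dataset, here in SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TABLE_X (fk_id INTEGER, fk_pb INTEGER, from_date TEXT);
INSERT INTO TABLE_X VALUES
  (1, 1, '2019-01-01'),
  (1, 1, '2019-02-01'),
  (1, 1, '2019-03-01');
""")

# The self-join from the question: MIN(o.from_date) over later rows of the
# same (fk_id, fk_pb) pair yields the "next" from_date as to_date.
rows = conn.execute("""
SELECT c.fk_id, c.from_date, c.fk_pb, MIN(o.from_date) AS to_date
FROM TABLE_X c
INNER JOIN TABLE_X o ON c.fk_id = o.fk_id AND c.fk_pb = o.fk_pb
WHERE o.from_date > c.from_date
GROUP BY c.fk_id, c.from_date, c.fk_pb
""").fetchall()
print(sorted(rows))
```

On MySQL 8+ (or any engine with window functions) this is typically rewritten as LEAD(from_date) OVER (PARTITION BY fk_id, fk_pb ORDER BY from_date), avoiding the self-join entirely; a composite index on (fk_id, fk_pb, from_date) usually helps either form far more than three single-column indexes.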

JMeter: How to benchmark data deletion from database table in batches?

我怕爱的太早我们不能终老 submitted on 2019-12-13 04:05:12
Question: I am trying to compare the performance difference between DELETE batch sizes using JMeter. I have a table which I populate with a large amount of test data. Next, I have a JDBC Request that runs the following statement: delete from tbl where (entry_dt < '2019-02-01') and (rownum <= 10000); I want to keep running this until the table is empty, and record the time taken to clear the table. I will run this thread multiple times to get an average execution time, and repeat this process for
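The delete-until-empty loop can be sketched outside JMeter as well. This Python/SQLite version (hypothetical table; rowid-based batching stands in for Oracle's rownum) shows the shape of the benchmark:

```python
import sqlite3
import time

# Hypothetical table populated with test data, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (entry_dt TEXT)")
conn.executemany("INSERT INTO tbl VALUES (?)", [("2019-01-15",)] * 25000)

BATCH = 10000
start = time.perf_counter()
while True:
    # SQLite stand-in for Oracle's "rownum <= 10000" batch limit.
    cur = conn.execute("""
        DELETE FROM tbl WHERE rowid IN (
            SELECT rowid FROM tbl WHERE entry_dt < '2019-02-01' LIMIT ?
        )""", (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break
elapsed = time.perf_counter() - start
remaining = conn.execute("SELECT COUNT(*) FROM tbl").fetchone()[0]
print(remaining)  # 0
```

Each iteration deletes at most BATCH rows and commits, so per-batch timings can be recorded the same way a JMeter loop controller would record them.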

Quickest sparse matrix access when disk is involved

强颜欢笑 submitted on 2019-12-13 03:59:06
Question: Imagine you have a table users with 10 million records and a table groups with 1 million records. On average there are 50 users per group, which I would store, in an RDBMS at least, in a table called users2groups. users2groups is in fact a sparse matrix. Only 80% of the full dataset of users and groups fits into available memory. The data for the group membership (users2groups) comes on top, so that if memory is needed to cache group memberships this has to be deallocated from either the users or the