database-performance

Multiple tables or one single table?

Submitted by 南笙酒味 on 2020-01-24 10:51:48
Question: I have already seen a few forums with this question, but they do not answer one thing I want to know. Let me explain my situation first: I have a system where every action by multiple users is logged to the database (e.g. User1 logged in, User2 logged in, User1 entered User management, User2 changed password, etc.). So I would be expecting 100 to 200 entries per user per day. Right now I'm doing it in a single table, and to view a user's activity I just filter by UserID. My question is, which is more …
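For reference, a minimal sketch of the single-table layout the asker describes, with a composite index so the per-user filter stays cheap as the table grows. All table and column names here (user_log, user_id, action, created_at) are assumed for illustration and do not come from the question.

-- Hypothetical single audit-log table; names are illustrative only.
CREATE TABLE user_log (
    log_id     BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id    INT UNSIGNED    NOT NULL,
    action     VARCHAR(255)    NOT NULL,
    created_at DATETIME        NOT NULL DEFAULT CURRENT_TIMESTAMP,
    KEY idx_user_created (user_id, created_at)  -- makes "all logs for one user" an index range scan
);

-- Typical per-user lookup; served by idx_user_created even with many users sharing the table.
SELECT action, created_at
FROM user_log
WHERE user_id = 42
ORDER BY created_at DESC
LIMIT 100;

With an index like that in place, one shared table behaves much like a per-user table for reads, which is usually the deciding factor in this trade-off.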

Caching Data on a Heavy Load Web Server

Submitted by 徘徊边缘 on 2020-01-16 19:03:46
Question: I currently have a web application which, on each page request, gets user data out of a database for the currently logged-in user. This web application could have approximately 30 thousand concurrent users. My question is whether it would be best to cache this, for example in C# using System.Web.HttpRuntime.Cache.Add, or would this cripple the server's memory, storing up to 30 thousand user objects in memory? Would it be better to not cache and just get the required data from the database on each …

Make LEFT JOIN query more efficient

Submitted by 。_饼干妹妹 on 2020-01-16 18:12:48
Question: The following query with LEFT JOIN is using too much memory (~4GB), but the host only allows about 120MB for this process.

SELECT grades.grade, grades.evaluation_id, evaluations.evaluation_name, evaluations.value, evaluations.maximum
FROM grades
LEFT JOIN evaluations ON grades.evaluation_id = evaluations.evaluation_id
WHERE grades.registrar_id = ?

Create table syntax for grades:

CREATE TABLE `grades` (
  `grade_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `evaluation_id` int(10) unsigned …
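Since the excerpt cuts off before any index definitions, a hedged first step is to make sure the filter and join columns are indexed; without them, large parts of grades can end up buffered in memory. The index names below are made up; skip any index the truncated CREATE TABLE already declares.

-- Assumed missing indexes; adjust or drop if they already exist.
CREATE INDEX idx_grades_registrar  ON grades (registrar_id);
CREATE INDEX idx_grades_evaluation ON grades (evaluation_id);
-- evaluations.evaluation_id normally needs no extra index if it is the primary key.

-- The original query is unchanged; with the indexes it should look up registrar_id
-- via the index and join row by row instead of building large in-memory buffers.
SELECT g.grade, g.evaluation_id, e.evaluation_name, e.value, e.maximum
FROM grades g
LEFT JOIN evaluations e ON g.evaluation_id = e.evaluation_id
WHERE g.registrar_id = ?;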

PostgreSQL: Create index on length of all table fields

Submitted by 情到浓时终转凉″ on 2020-01-15 03:31:04
Question: I have a table called profile, and I want to order the rows by which ones are the most filled out. Each of the columns is either a JSONB column or a TEXT column. I don't need this to a great degree of certainty, so typically I've ordered as follows:

SELECT * FROM profile ORDER BY LENGTH(CONCAT(profile.*)) DESC;

However, this is slow, so I want to create an index. However, this does not work:

CREATE INDEX index_name ON profile (LENGTH(CONCAT(*))

Nor does

CREATE INDEX index_name ON profile …
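One workaround, sketched under the assumption that the columns can be listed explicitly (the column names bio and headline below are invented for the example): build the expression index over the named columns instead of the whole-row profile.* form, using only immutable pieces such as length() and coalesce() on text, since concat() is not immutable and cannot appear in an index expression. JSONB columns would additionally need a cast to text inside the expression.

-- Expression index over explicitly listed columns (hypothetical names).
CREATE INDEX profile_filled_len_idx
    ON profile ((length(coalesce(bio, '')) + length(coalesce(headline, ''))));

-- The ORDER BY must repeat the exact same expression for the planner to consider the index.
SELECT *
FROM profile
ORDER BY length(coalesce(bio, '')) + length(coalesce(headline, '')) DESC;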

PostgreSQL in-memory database for Django

Submitted by 本秂侑毒 on 2020-01-14 09:31:14
Question: For performance reasons I would like to run an optimization algorithm against an in-memory database in Django (I'm likely to execute a lot of queries). I know it's possible to use an in-memory SQLite database (How to run Django's test database only in memory?), but I would rather use PostgreSQL because our prod database is a PostgreSQL one. Does someone know how to tell Django to create the PostgreSQL database in memory? Thanks in advance. 回答1: This is premature optimization. PostgreSQL is very, very …

How reliable is the cost measurement in PostgreSQL Explain Plan?

Submitted by 被刻印的时光 ゝ on 2020-01-13 10:13:33
Question: The queries are performed on a large table with 11 million rows. I have already performed an ANALYZE on the table prior to the query executions.

Query 1:

SELECT *
FROM accounts t1
LEFT OUTER JOIN accounts t2
  ON (t1.account_no = t2.account_no AND t1.effective_date < t2.effective_date)
WHERE t2.account_no IS NULL;

Explain Analyze:

Hash Anti Join (cost=480795.57..1201111.40 rows=7369854 width=292) (actual time=29619.499..115662.111 rows=1977871 loops=1)
  Hash Cond: ((t1.account_no)::text = (t2 …
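A large part of the answer is visible in that first plan line: the planner estimated rows=7369854 but the query actually produced rows=1977871, so the estimate is off by roughly 3.7x even after ANALYZE. A hedged sketch of how to compare formulations of the same "latest row per account" intent, using the column names from the question (DISTINCT ON is PostgreSQL-specific, and the supporting index is an assumption, not something the question mentions):

-- Optional supporting index for the per-account "newest row" pattern.
CREATE INDEX IF NOT EXISTS accounts_acct_eff_idx
    ON accounts (account_no, effective_date DESC);

-- Intended to match the anti-join: keep the newest row per account_no
-- (rows tied on the maximum effective_date are handled differently).
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT ON (account_no) *
FROM accounts
ORDER BY account_no, effective_date DESC;

Running EXPLAIN (ANALYZE, BUFFERS) on both versions and comparing estimated rows against actual rows is the usual way to judge how far to trust the cost numbers for a given query shape.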

In MySQL, how do I build an index to speed up this query?

Submitted by 一笑奈何 on 2020-01-09 14:07:49
Question: In MySQL, how do I build an index to speed up this query?

SELECT c1, c2 FROM t WHERE c3='foobar';

回答1: To really give an answer it would be useful to see which indexes you already have, but... All of this assumes table t exists, that you need to add an index, and that you currently have only a single index on your primary key, or no indexes at all. A covering index for the query will give the best performance for your needs, but with any index you sacrifice some insertion speed. How much that sacrifice …
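The covering index the answer refers to would, in sketch form, put the filter column first and then include the selected columns, so MySQL can satisfy the whole query from the index without touching table rows (the index name is arbitrary):

-- Covering index: c3 for the WHERE filter, then c1 and c2 so no row lookup is needed.
CREATE INDEX idx_t_c3_c1_c2 ON t (c3, c1, c2);

-- EXPLAIN should now report "Using index" in the Extra column for this query.
EXPLAIN SELECT c1, c2 FROM t WHERE c3 = 'foobar';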

SQL query: long running / taking up CPU resources

Submitted by 时光总嘲笑我的痴心妄想 on 2020-01-06 03:27:26
Question: Hello, I have the SQL query below, which takes around 40 minutes to run on average; one of the tables it references has over 7 million records in it. I have run this through the Database Tuning Advisor and applied all recommendations, and I have also assessed it within Activity Monitor in SQL Server and no further indexes etc. have been recommended. Any suggestions would be great; thanks in advance.

WITH CTE AS (
  SELECT r.Id AS ResultId, r.JobId, r.CandidateId, r.Email,
         CAST(0 AS BIT) AS EmailSent, NULL …
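Because the query text is cut off above, no specific index can be suggested here; a generic, hedged first step in SQL Server is to measure where the time and I/O actually go before changing anything, for example:

-- Capture per-table I/O and CPU/elapsed time for one run of the query,
-- alongside the actual execution plan in SSMS, before tuning further.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the WITH CTE AS (...) query from the question here ...

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

The tables showing the highest logical reads in that output are where missing indexes or a rewrite of the CTE are most likely to pay off.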

SQLite: .Net much slower than native?

Submitted by 两盒软妹~` on 2020-01-04 02:07:08
Question: Here is my query:

SELECT *
FROM [GeoName]
WHERE ((-26.3665122100029-Lat)*(-26.3665122100029-Lat))+((27.5978928658078-Long)*(27.5978928658078-Long)) < 0.005
ORDER BY ((-26.3665122100029-Lat)*(-26.3665122100029-Lat))+((27.5978928658078-Long)*(27.5978928658078-Long))
LIMIT 20

This returns the 20 closest points. Running this in native SQLite returns a result within 78ms, but from within the .Net SQLite environment it takes nearly 1400ms. Any suggestions? I have this query within my ORM structure …
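This does not explain the native-versus-.NET gap itself (that usually comes down to wrapper overhead or how the command is prepared and read), but a common way to shrink the work on both sides is a bounding-box pre-filter so the distance expression is only evaluated for nearby rows. The sketch below assumes Lat and Long are plain numeric columns and that 0.0707 (roughly the square root of 0.005) is an acceptable box half-width:

-- An index on a coordinate column lets SQLite range-scan instead of visiting every row
-- (SQLite will typically pick one of these per query).
CREATE INDEX IF NOT EXISTS idx_geoname_lat  ON GeoName (Lat);
CREATE INDEX IF NOT EXISTS idx_geoname_long ON GeoName (Long);

SELECT *
FROM GeoName
WHERE Lat  BETWEEN -26.3665122100029 - 0.0707 AND -26.3665122100029 + 0.0707
  AND Long BETWEEN  27.5978928658078 - 0.0707 AND  27.5978928658078 + 0.0707
  AND ((-26.3665122100029 - Lat) * (-26.3665122100029 - Lat))
    + ((27.5978928658078 - Long) * (27.5978928658078 - Long)) < 0.005
ORDER BY ((-26.3665122100029 - Lat) * (-26.3665122100029 - Lat))
       + ((27.5978928658078 - Long) * (27.5978928658078 - Long))
LIMIT 20;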