query-optimization

Query optimization and API throttling

半腔热情 submitted on 2019-12-03 03:36:57
We are tracking Facebook Page and Post metrics for a number of clients, and we have some questions regarding high CPU intensity and too many calls to Post/comments - according to what is being reported by the developer insights console (Insights -> Developer -> Activity & Errors). The documentation is somewhat unclear on the limits and restrictions for the Graph API, and we'd simply like to make sure we have a correct understanding of what resources we have available. We are working on optimizing our software and queries to decrease the error rate and the number of requests. Related to this…
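The usual mitigations for Graph API throttling are batching requests and backing off exponentially when a rate-limit error comes back. As a minimal sketch (the `call_with_backoff` helper and its retry policy are illustrative, not part of any Facebook SDK):

```python
import time

def backoff_delays(base=1.0, factor=2.0, max_delay=60.0, retries=5):
    """Yield exponentially growing wait times, capped at max_delay."""
    delay = base
    for _ in range(retries):
        yield min(delay, max_delay)
        delay *= factor

def call_with_backoff(request, retries=5):
    """Retry `request` while it signals throttling (None = throttled here)."""
    for delay in backoff_delays(retries=retries):
        result = request()
        if result is not None:
            return result
        time.sleep(delay)  # wait longer after each throttled attempt
    return None
```

Combined with batched Graph API calls (one HTTP round trip for many metric reads), this tends to reduce both the request count and the error rate the insights console reports.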

How to compare two queries?

让人想犯罪 __ submitted on 2019-12-03 03:14:02
Question: How can I compare two queries X and Y and say that X is better than Y, when they both take almost the same time in small-scale scenarios? The problem is that I have two queries that are supposed to run on a very big database, so running and evaluating them there is not really an option. Therefore, we created a small database to perform some tests. Evaluating which query is better is a problem, since on our test database they run in almost the same time (about 5 minutes). Besides the time taken to return, what is…
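When wall-clock time is indistinguishable on a small test database, the execution plan is a better signal: it shows whether a query will scale (index seek) or degrade (full scan) as the data grows. A sketch using SQLite's `EXPLAIN QUERY PLAN` via Python (the `orders` table and index are invented for illustration; SQL Server's equivalent is `SET SHOWPLAN_TEXT ON`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL);
    CREATE INDEX idx_customer ON orders(customer_id);
""")

def plan(sql):
    """Return the optimizer's plan as text instead of timing the query."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " | ".join(r[-1] for r in rows)

# An indexed lookup vs. a full scan: the plan reveals the difference
# even when both queries finish instantly on a tiny test database.
indexed = plan("SELECT * FROM orders WHERE customer_id = 42")
scanned = plan("SELECT * FROM orders WHERE total > 10")
```

Comparing plans (and logical reads, on engines that report them) answers "which query is better" without needing production-scale data.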

T-SQL Where Clause Case Statement Optimization (optional parameters to StoredProc)

荒凉一梦 submitted on 2019-12-03 02:56:56
I've been battling this one for a while now. I have a stored proc that takes in three parameters used for filtering. If a specific value is passed in, I want to filter on it; if -1 is passed in, return everything. I've tried it the following two ways. First way: SELECT field1, field2 ... FROM my_view WHERE parm1 = CASE WHEN @PARM1 = -1 THEN parm1 ELSE @PARM1 END AND parm2 = CASE WHEN @PARM2 = -1 THEN parm2 ELSE @PARM2 END AND parm3 = CASE WHEN @PARM3 = -1 THEN parm3 ELSE @PARM3 END Second way: SELECT field1, field2 ... FROM my_view WHERE (@PARM1 = -1 OR parm1 = @PARM1) AND (@PARM2 = -1 OR…
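The second pattern (`@PARM = -1 OR col = @PARM`) is generally preferred over the CASE form, since the optimizer can short-circuit the constant side; on SQL Server it is often paired with `OPTION (RECOMPILE)` so each parameter combination gets its own plan. A runnable sketch of the pattern using SQLite via Python (table, columns, and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_view (field1 TEXT, parm1 INT, parm2 INT);
    INSERT INTO my_view VALUES ('a', 1, 1), ('b', 1, 2), ('c', 2, 2);
""")

SQL = """
    SELECT field1 FROM my_view
    WHERE (:p1 = -1 OR parm1 = :p1)
      AND (:p2 = -1 OR parm2 = :p2)
"""

# -1 acts as "match everything" for that parameter
all_rows = conn.execute(SQL, {"p1": -1, "p2": -1}).fetchall()
filtered = conn.execute(SQL, {"p1": 1,  "p2": 2}).fetchall()
```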

Downsides to “WITH SCHEMABINDING” in SQL Server?

烂漫一生 submitted on 2019-12-03 02:55:29
Question: I have a database with hundreds of awkwardly named tables (CG001T, GH066L, etc.), and I have a view on every one with a "friendly" name (the view "CUSTOMERS" is "SELECT * FROM GG120T", for example). I want to add "WITH SCHEMABINDING" to my views so that I can get some of the advantages associated with it, like being able to index the view, since a handful of views have computed columns that are expensive to compute on the fly. Are there downsides to SCHEMABINDING these views? I've…

Handling large databases

回眸只為那壹抹淺笑 submitted on 2019-12-03 02:51:53
Question: I have been working on a web project (ASP.NET) for around six months, and the final product is about to go live. The project uses SQL Server as the database. We have done performance testing with some large volumes of data, and the results show that performance degrades when the data grows too large, say 2 million rows (timeout issues, delayed responses, etc.). At first we were using a fully normalized database, but we have since partially denormalized it due to performance issues (to reduce joins). First of all…

Force MySQL to use two indexes on a Join

眉间皱痕 submitted on 2019-12-03 02:33:01
I am trying to force MySQL to use two indexes. I am joining a table and I want to utilize the intersection of the two indexes. The specific term is Using intersect, and here is a link to the MySQL documentation: http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html Is there any way to force this behavior? My query was using it (and it sped things up), but now, for whatever reason, it has stopped. Here is the JOIN I want to do this on. The two indexes I want the query to use are scs.CONSUMER_ID_1 and scs.CONSUMER_ID_2: JOIN survey_customer_similarity AS scs ON cr.CONSUMER_ID = scs…
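MySQL cannot be forced directly into an index-merge intersection, but two levers push the optimizer toward it: enabling the relevant `optimizer_switch` flags (available on newer MySQL versions than the 5.0 manual linked above) and restricting the candidate indexes with a `USE INDEX` hint. A sketch, where the index names `idx_consumer_1`/`idx_consumer_2` and the `cr` base table are hypothetical stand-ins for the actual schema (the real join condition is truncated in the excerpt):

```sql
-- Ensure index-merge intersection is enabled for the session:
SET SESSION optimizer_switch = 'index_merge=on,index_merge_intersection=on';

-- Restrict the optimizer's choices to the two candidate indexes,
-- which encourages (but does not guarantee) a "Using intersect" plan:
SELECT cr.*
FROM consumer_results AS cr
JOIN survey_customer_similarity AS scs
     USE INDEX (idx_consumer_1, idx_consumer_2)
  ON cr.CONSUMER_ID = scs.CONSUMER_ID_1;
```

Verify the chosen plan with `EXPLAIN`; if the Extra column no longer shows `Using intersect`, stale index statistics are a common culprit, and `ANALYZE TABLE` may restore the old plan.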

How to search millions of record in SQL table faster?

时光怂恿深爱的人放手 submitted on 2019-12-03 02:16:44
I have an SQL table with millions of domain names. But when I search for, let's say, SELECT * FROM tblDomainResults WHERE domainName LIKE '%lifeis%' it takes more than 10 minutes to get the results. I tried indexing, but that didn't help. What is the best way to store these millions of records and access this information in a short period of time? There are about 50 million records and 5 columns so far. Most likely, you tried a traditional index, which cannot be used to optimize LIKE queries unless the pattern begins with a fixed string (e.g. 'lifeis%'). What you need for your query is a full…
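The reason the index doesn't help is visible in the execution plan: a leading wildcard forces a scan, while an anchored prefix can seek into the B-tree. A sketch using SQLite via Python (the one-column table mirrors the question; `PRAGMA case_sensitive_like` is needed for SQLite's LIKE optimization to apply):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA case_sensitive_like = ON")  # lets LIKE use a BINARY index
conn.executescript("""
    CREATE TABLE tblDomainResults (domainName TEXT);
    CREATE INDEX idx_domain ON tblDomainResults(domainName);
""")

def plan(sql):
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " | ".join(r[-1] for r in rows)

# A leading wildcard forces a scan; an anchored prefix can use the index.
wildcard = plan("SELECT * FROM tblDomainResults WHERE domainName LIKE '%lifeis%'")
prefix   = plan("SELECT * FROM tblDomainResults WHERE domainName LIKE 'lifeis%'")
```

For genuine substring search at 50 million rows, a full-text index (SQL Server Full-Text Search, MySQL FULLTEXT, PostgreSQL trigram indexes) is the usual answer rather than LIKE.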

Improving OFFSET performance in PostgreSQL

和自甴很熟 submitted on 2019-12-03 00:22:15
Question: I have a table I'm doing an ORDER BY on before a LIMIT and OFFSET in order to paginate. Adding an index on the ORDER BY column makes a massive difference to performance (when used in combination with a small LIMIT). On a 500,000-row table, I saw a 10,000x improvement from adding the index, as long as there was a small LIMIT. However, the index has no impact for high OFFSETs (i.e. later pages in my pagination). This is understandable: a b-tree index makes it easy to iterate in order from the…
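The standard fix for slow high OFFSETs is keyset (a.k.a. "seek") pagination: remember the last key of the previous page and filter past it, so every page is an index seek instead of skipping N rows. A sketch using SQLite via Python (the `items` table is invented for illustration; the technique is identical in PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"item{i}") for i in range(1, 101)])

def page_after(last_id, size=10):
    """Keyset pagination: seek past the last seen key instead of OFFSET."""
    return conn.execute(
        "SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, size)).fetchall()

first = page_after(0)               # rows 1..10
second = page_after(first[-1][0])   # rows 11..20, no OFFSET scan
```

The trade-off is that you can only step page by page (no random jump to page 400), which is usually acceptable for "next page" style navigation.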

Query cache efficiency

ぃ、小莉子 submitted on 2019-12-03 00:17:17
I'm using MySQLTuner.pl to optimize my site, though I'm not entirely sure how to resolve some of these issues and am wondering if someone can help me out. I'm running 16GB of RAM with the following MySQL settings: key_buffer = 1024M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP max_connections = 1500 table_cache = 256 thread_concurrency = 4 query_cache_limit = 2M query_cache_size = 32M query_cache_type = 1 tmp_table_size = 512M max_heap_table_size = 128M join_buffer_size = 128M myisam_sort_buffer_size = 512M Here's the output of my tuner…
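The biggest red flag in that configuration is the combination of per-session buffers and `max_connections`: `join_buffer_size` can be allocated per join per connection, so 128M with 1500 connections can in theory demand far more than 16GB. An illustrative my.cnf fragment (the values are starting points to tune against your workload, not prescriptions):

```ini
# Per-session buffers are allocated per connection - keep them small.
max_connections     = 300   # 1500 sessions x large buffers can exhaust 16GB
join_buffer_size    = 1M    # allocated per join per connection; 128M is risky
tmp_table_size      = 64M
max_heap_table_size = 64M   # MySQL uses the smaller of these two limits
query_cache_size    = 32M   # large caches suffer invalidation/mutex contention
```

Note also that `tmp_table_size` and `max_heap_table_size` act together: in-memory temporary tables are capped at the smaller of the two, so setting one to 512M and the other to 128M means the 512M never applies.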

Optimizing select with transaction under SQLite 3

橙三吉。 submitted on 2019-12-02 22:24:01
I read that wrapping a lot of SELECTs in BEGIN TRANSACTION/COMMIT was an interesting optimization. But are these commands really necessary if I use "PRAGMA journal_mode = OFF" beforehand? (Which, if I remember correctly, disables the log and obviously the transaction system too.) BigMacAttack: "Use transactions – even if you're just reading the data. This may yield a few milliseconds." I'm not sure where the Katashrophos.net blog is getting this information, but wrapping SELECT statements in transactions does nothing. Transactions are always and only used when making changes to the database, and…
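For readers who want to try both sides of this debate, an explicit read transaction in SQLite is easy to set up. A minimal sketch using Python's sqlite3 module (in autocommit mode so BEGIN/COMMIT are issued manually; the table and data are invented for illustration):

```python
import sqlite3

# isolation_level=None puts the driver in autocommit mode,
# so BEGIN/COMMIT below are ours, not the driver's.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])

# Wrapping many reads in one transaction means the shared read lock is
# acquired once for the batch instead of once per statement - the effect,
# if any, matters only for on-disk databases under concurrent access.
conn.execute("BEGIN")
totals = [conn.execute("SELECT SUM(x) FROM t").fetchone()[0] for _ in range(3)]
conn.execute("COMMIT")
```

A read transaction also gives the batch a consistent snapshot: no other writer can change the data between the three SELECTs, which is a correctness argument independent of the disputed speed claim.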