optimization

Splitting a list into parts of different lengths under a special condition

Submitted by 安稳与你 on 2019-12-25 16:01:11
Question: I need an algorithm for dividing different manufacturing parts into uneven groups. The main condition is that the difference between the maximum number in a group and all of its other members should be as low as possible. For example: if we have the list [1,3,4,11,12,19,20,21] and we decide that it should be divided into 3 parts, it should be divided into [1,3,4], [11,12], [19,20,21]. In the same case, if we decide to divide it into 4, we would get [1,3,4], [11], [12], [19,20,21]. In order to clarify "difference between…
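A dynamic-programming sketch of one way to approach this, assuming the input is sorted and groups must stay contiguous (as in the question's examples); the function name and the exact cost definition (sum of "group maximum minus member") are illustrative, not taken from the question:

```python
# Sketch: split a sorted list into k contiguous groups so that the total
# "max minus member" difference across all groups is minimized.

def split_min_diff(values, k):
    n = len(values)

    def group_cost(i, j):
        # cost of grouping values[i:j]: sum of (max - x) for x in the group
        mx = values[j - 1]            # sorted input -> last element is the max
        return sum(mx - x for x in values[i:j])

    INF = float("inf")
    # dp[g][j] = minimal cost of splitting the first j values into g groups
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0
    for g in range(1, k + 1):
        for j in range(g, n + 1):
            for i in range(g - 1, j):
                cost = dp[g - 1][i] + group_cost(i, j)
                if cost < dp[g][j]:
                    dp[g][j], cut[g][j] = cost, i

    # reconstruct the groups from the recorded cut points
    groups, j = [], n
    for g in range(k, 0, -1):
        i = cut[g][j]
        groups.append(values[i:j])
        j = i
    return groups[::-1]

print(split_min_diff([1, 3, 4, 11, 12, 19, 20, 21], 3))
# -> [[1, 3, 4], [11, 12], [19, 20, 21]]
```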

Efficient partial search of a trie in python

Submitted by ﹥>﹥吖頭↗ on 2019-12-25 13:45:20
Question: This is a HackerRank exercise, and although the problem itself is solved, my solution is apparently not efficient enough, so on most test cases I'm getting timeouts. Here's the problem: We're going to make our own Contacts application! The application must perform two types of operations: add name, where name is a string denoting a contact name. This must be stored as a new contact in the application. find partial, where partial is a string denoting a partial name to search the application for…
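A common way to avoid the timeouts is to keep a pass-through counter in every trie node, so that find partial is answered in O(len(partial)) without walking a whole subtree. A minimal sketch, not the asker's code; the class and method names are illustrative:

```python
# Sketch of a trie that keeps a counter in every node recording how many
# stored names pass through it, so prefix counts need no subtree traversal.

class TrieNode:
    __slots__ = ("children", "count")

    def __init__(self):
        self.children = {}
        self.count = 0      # number of stored names passing through this node

class Contacts:
    def __init__(self):
        self.root = TrieNode()

    def add(self, name):
        node = self.root
        for ch in name:
            node = node.children.setdefault(ch, TrieNode())
            node.count += 1

    def find(self, partial):
        node = self.root
        for ch in partial:
            node = node.children.get(ch)
            if node is None:
                return 0
        return node.count

contacts = Contacts()
contacts.add("hack")
contacts.add("hackerrank")
print(contacts.find("hac"))   # 2
print(contacts.find("hak"))   # 0
```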

Updating gigantic HTML table with ajax

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-25 13:22:08
Question: I have a table with 2000+ rows and I need a single div on each row updated periodically. I hoped to update every 10 seconds, but the page becomes quite sluggish. The server has 32 GB RAM and my laptop has 8 GB RAM; both have at least two quad cores. Here's the div and the update call: <div id='div_$id' name='div_$id'></div> <script language='javascript'> new Ajax.PeriodicalUpdater('div_$id', 'upd.php',{ method: 'post', frequency: 10, decay: 1, parameters: {id:'$id'}} ); </script> I'm using the…

MySQL - self join optimization

Submitted by 寵の児 on 2019-12-25 13:07:12
Question: I have a table of phone events by HomeId. Each row has an EventId (on hook, off hook, ring, DTMF, etc.), TimeStamp, Sequence (auto-increment) and HomeId. I'm working on a query to find specific types of occurrences (i.e. inbound or outbound calls) and their duration. I had planned on doing this using a multiple self-join on this table to pick out the sequences of events that usually indicate one type of occurrence or the other. E.g. inbound calls would be a period of inactivity followed by no DTMF, then…
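The question is about expressing this as a SQL self-join, but the underlying pairing logic is easier to see in plain code first. A rough Python sketch of the idea; the event names and the "no DTMF before answering means inbound" rule are illustrative assumptions, not the asker's schema:

```python
# Sketch: walk one home's events in Sequence order and pair each off-hook
# with the next on-hook to get call occurrences and their durations.
# Event names and the inbound/outbound rule are illustrative assumptions.

from datetime import datetime

events = [  # (Sequence, TimeStamp, EventId) for a single HomeId
    (1, datetime(2019, 12, 25, 9, 0, 0), "ring"),
    (2, datetime(2019, 12, 25, 9, 0, 5), "off_hook"),
    (3, datetime(2019, 12, 25, 9, 3, 5), "on_hook"),
    (4, datetime(2019, 12, 25, 10, 0, 0), "off_hook"),
    (5, datetime(2019, 12, 25, 10, 0, 2), "dtmf"),
    (6, datetime(2019, 12, 25, 10, 1, 0), "on_hook"),
]

calls = []
call_start = None
saw_dtmf = False
for seq, ts, event in sorted(events):
    if event == "off_hook":
        call_start, saw_dtmf = ts, False
    elif event == "dtmf" and call_start is not None:
        saw_dtmf = True
    elif event == "on_hook" and call_start is not None:
        kind = "outbound" if saw_dtmf else "inbound"
        calls.append((kind, call_start, (ts - call_start).total_seconds()))
        call_start = None

print(calls)
# one inbound call of 180.0 seconds, then one outbound call of 60.0 seconds
```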

Fastest way to copy several 2-dimensional arrays into one 1-dimensional array (in C)

Submitted by 点点圈 on 2019-12-25 12:12:54
Question: I want to copy several 2-dimensional subarrays of 3-dimensional arrays (e.g. array1[n][rows][cols], ..., array4[n][rows][cols]), which are dynamically allocated (but with fixed length), into a 1-dimensional array (e.g. array[4*rows*cols]), which is statically allocated, in C. As there will be many rows and columns (e.g. 10000 rows and 500 columns), I was wondering which of the following three possibilities will be the fastest: for(i=0;i<rows;i++){ for(j=0;j<cols;j++){ array[i*cols+j]=array1[2…
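The usual observation for this kind of copy is that each 2-D slice (for example array1[2]) is one contiguous block of rows*cols elements, so it can be copied in a single block operation (memcpy in C) rather than element by element. A NumPy sketch of that same idea, with illustrative names and smaller sizes:

```python
# Sketch: each 2-D slice src[idx] is a contiguous block of rows*cols values,
# so it can be copied into the flat destination in one block assignment
# (the NumPy analogue of one memcpy per slice in C).

import numpy as np

n, rows, cols = 4, 1000, 50    # smaller than the question's 10000 x 500, for a quick run
sources = [np.random.rand(n, rows, cols) for _ in range(4)]   # stand-ins for array1..array4
flat = np.empty(4 * rows * cols)                              # the 1-D target array

block = rows * cols
for k, src in enumerate(sources):
    # copy the whole 2-D slice src[2] at once instead of looping over i, j
    flat[k * block:(k + 1) * block] = src[2].ravel()

assert np.array_equal(flat[:block], sources[0][2].ravel())
```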

Can it be executed faster with a big amount of data? [MySQL]

Submitted by 做~自己de王妃 on 2019-12-25 11:48:44
Question: Is there any way to optimize the following query: EXPLAIN EXTENDED SELECT keyword_id, ck.keyword, COUNT( article_id ) AS cnt FROM career_article_keyword LEFT JOIN career_keywords ck USING ( keyword_id ) WHERE keyword_id IN ( SELECT keyword_id FROM career_article_keyword LEFT JOIN career_keywords ck USING ( keyword_id ) WHERE article_id IN ( SELECT article_id FROM career_article_keyword WHERE keyword_id =9 ) AND keyword_id <>9 ) GROUP BY keyword_id ORDER BY cnt DESC. The main task here is, if I have…
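Whatever the best SQL shape turns out to be, the query is essentially computing "keywords that co-occur with keyword 9, counted by how many articles they share with it". A small Python sketch of that counting logic over (article_id, keyword_id) pairs; the data is made up for illustration and is not the asker's tables:

```python
# Sketch of what the query is after: for every keyword other than 9, count
# how many articles it shares with keyword 9, then sort by that count.
# The (article_id, keyword_id) pairs stand in for career_article_keyword.

from collections import Counter

pairs = [(1, 9), (1, 3), (1, 5), (2, 9), (2, 3), (3, 7)]

articles_with_9 = {a for a, k in pairs if k == 9}
co_counts = Counter(
    k for a, k in pairs
    if a in articles_with_9 and k != 9
)

for keyword_id, cnt in co_counts.most_common():
    print(keyword_id, cnt)
# 3 2
# 5 1
```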

Optimizing MySQL Queries: Is it always possible to optimize a query so that it doesn't use “ALL”?

Submitted by 旧街凉风 on 2019-12-25 11:24:11
Question: According to the MySQL documentation on Optimizing Queries with EXPLAIN: "ALL: A full table scan is done for each combination of rows from the previous tables. This is normally not good if the table is the first table not marked const, and usually very bad in all other cases. Normally, you can avoid ALL by adding indexes that allow row retrieval from the table based on constant values or column values from earlier tables." Does this mean that any query that uses ALL can be optimized so…

Lazily evaluate MySQL view

Submitted by ▼魔方 西西 on 2019-12-25 09:16:29
Question: I have some MySQL views which define a number of extra columns based on some relatively straightforward subqueries. The database is also multi-tenanted, so each row has a company ID against it. The problem I have is that my views are evaluated for every row before being filtered by the company ID, giving huge performance issues. Is there any way to lazily evaluate the view, so that the WHERE clause in the outer query applies to the subqueries in the view? Or is there something similar to views that I…

Coursera ML - Does the choice of optimization algorithm affect the accuracy of multiclass logistic regression?

Submitted by ≡放荡痞女 on 2019-12-25 08:59:41
Question: I recently completed exercise 3 of Andrew Ng's Machine Learning on Coursera using Python. When initially completing parts 1.4 to 1.4.1 of the exercise, I ran into difficulties ensuring that my trained model had an accuracy matching the expected 94.9%. Even after debugging and ensuring that my cost and gradient functions were bug-free, and that my predictor code was working correctly, I was still getting only 90.3% accuracy. I was using the conjugate gradient (CG) algorithm in scipy…
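For this kind of exercise, the reported accuracy often depends on the solver and its iteration settings rather than on a bug in the cost or gradient code. A self-contained sketch that makes the solver choice explicit by training the same regularized logistic regression with two scipy.optimize methods and comparing training accuracy; the synthetic data merely stands in for the exercise's digit images:

```python
# Sketch: fit regularized logistic regression with two scipy.optimize solvers
# and compare the resulting training accuracy, to isolate the solver's effect.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
true_w = rng.normal(size=20)
y = (X @ true_w + rng.normal(scale=0.5, size=500) > 0).astype(float)
X = np.hstack([np.ones((500, 1)), X])           # bias column
lam = 0.1                                       # regularization strength

def cost_grad(theta, X, y, lam):
    z = X @ theta
    h = 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))
    reg = theta.copy()
    reg[0] = 0.0                                # do not regularize the bias term
    m = len(y)
    cost = (-(y @ np.log(h + 1e-12)) - ((1 - y) @ np.log(1 - h + 1e-12))) / m \
           + lam * (reg @ reg) / (2 * m)
    grad = X.T @ (h - y) / m + lam * reg / m
    return cost, grad

for method in ("CG", "L-BFGS-B"):
    res = minimize(cost_grad, np.zeros(X.shape[1]), args=(X, y, lam),
                   jac=True, method=method, options={"maxiter": 50})
    pred = (X @ res.x >= 0).astype(float)       # sigmoid >= 0.5 iff z >= 0
    print(method, "training accuracy:", (pred == y).mean())
```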