optimization

Python multiprocessing run time per process increases with number of processes

Submitted by 笑着哭i on 2020-01-14 06:53:06
Question: I have a pool of workers that all perform the same task, and I send each one a distinct clone of the same data object. I then measure the run time separately for each process inside the worker function. With one process, the run time is 4 seconds. With 3 processes, the run time for each process goes up to 6 seconds. With more complex tasks, this increase is even more pronounced. There are no other CPU-hogging processes running on my system, and the workers don't use shared memory (as far as I…
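A minimal way to reproduce this kind of measurement is sketched below; the work function and data sizes are stand-ins, not the asker's actual task, and only the body of each worker is timed:

```python
import time
import numpy as np
from multiprocessing import Pool

def worker(data):
    """Time only the computation, not pool startup or pickling overhead."""
    start = time.perf_counter()
    result = 0.0
    for _ in range(20):
        # stand-in CPU-bound task; the real workload would go here
        result += float(np.sum(np.sqrt(data)))
    return time.perf_counter() - start

if __name__ == "__main__":
    data = np.random.rand(2_000_000)
    for n_procs in (1, 3):
        with Pool(n_procs) as pool:
            # each process receives its own clone of the same data object
            times = pool.map(worker, [data.copy() for _ in range(n_procs)])
        print(n_procs, "processes -> per-process run times:", times)
```

Comparing the printed per-process times for 1 versus 3 workers is exactly the measurement the question describes.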

Run through large generator iterable on GPU

Submitted by 人走茶凉 on 2020-01-14 06:51:09
Question: I recently received help optimizing my code to use generators, to save memory while checking many permutations. To put it in perspective, I believe the generator iterates over a list with 2! * 2! * 4! * 2! * 2! * 8! * 4! * 10! elements. Unfortunately, while I no longer run out of memory generating the permutations, my code now takes more than 24 hours to run. Is it possible to parallelize this on a GPU? Generating the iterator with all the above…
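A GPU cannot consume a Python generator directly, so one CPU-side option (a sketch only, with a hypothetical is_valid check and an arbitrary chunk size, not the asker's real filter) is to slice the permutation stream into chunks and fan them out to worker processes:

```python
import itertools
from multiprocessing import Pool

def is_valid(perm):
    """Hypothetical stand-in for the per-permutation check."""
    return perm[0] < perm[-1]

def check_chunk(chunk):
    # each worker scans its own slice of the permutation stream
    return [p for p in chunk if is_valid(p)]

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from a lazy iterable."""
    it = iter(iterable)
    while True:
        block = list(itertools.islice(it, size))
        if not block:
            return
        yield block

if __name__ == "__main__":
    perms = itertools.permutations(range(8))   # far smaller than the asker's space
    hits = []
    with Pool() as pool:
        for result in pool.imap_unordered(check_chunk, chunked(perms, 10_000)):
            hits.extend(result)
    print(len(hits), "permutations passed the check")
```

The permutation stream itself stays lazy; only one chunk per worker is materialized at a time.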

Getting intermediate info. from PuLP

Submitted by 岁酱吖の on 2020-01-14 06:14:07
Question: I want to get intermediate results while PuLP is still searching for an optimal, feasible solution. As you know, solving mixed-integer linear programming (MILP) cases can take a long time. I'm trying to get intermediate results out of the PuLP optimization package while it is running. I know this is possible in Gurobi, which is a commercial optimization package, but I'm not sure what code to use in PuLP to get that information. Any advice would be appreciated. Answer 1: PuLP doesn't…
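A partial workaround, sketched below on a toy model, is to enable the CBC solver log and set a time limit so the solver prints its incumbent solutions and bounds as it goes and returns whatever it has found when time runs out. Note that msg and timeLimit are the keyword names in recent PuLP releases (older ones used maxSeconds), so check the installed version:

```python
import pulp

# toy MILP just to have something to solve; the real model would go here
prob = pulp.LpProblem("demo", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0, upBound=10, cat="Integer")
y = pulp.LpVariable("y", lowBound=0, upBound=10, cat="Integer")
prob += 3 * x + 2 * y            # objective
prob += 2 * x + y <= 15          # constraint

# msg=True streams CBC's own progress log (incumbents, bounds, gap) to stdout;
# the time limit makes solve() return the best incumbent found so far.
solver = pulp.PULP_CBC_CMD(msg=True, timeLimit=60)
prob.solve(solver)
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```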

MATLAB genetic algorithm optimization returns integer values higher than boundaries and violates inequality constraints. Why?

Submitted by 天大地大妈咪最大 on 2020-01-14 06:13:27
Question: I'm using MATLAB R2016a's genetic algorithm optimization toolbox to optimize 80 integer values. I have these constraints: x(80) > x(79) > x(78) > x(77) > x(76) ... x(5) > x(4) > x(3) > x(2) > x(1). The range for every integer variable is 1 to 500. I used this code in MATLAB:
f = @(x)Cost_function(x, my_data);
num_of_var = 80;
for mx = 1:num_of_var-1
    A(mx,:) = [zeros(1,mx-1), 1, -1, zeros(1,num_of_var-mx-1)];
end
b = repmat(-3, [num_of_var-1,1]);
lb = ones([num_of_var-1,1]);
up = repmat(500…

Alignment of data members and member functions for performance

Submitted by 依然范特西╮ on 2020-01-14 04:54:26
Question: Is it true that aligning the data members of a struct/class no longer yields the benefits it used to, especially on Nehalem, because of hardware improvements? If so, is it still the case that alignment always gives better performance, just with much smaller improvements than on past CPUs? Does alignment of member variables extend to member functions? I believe I once read (possibly in the Wikibooks "C++ Performance" book) that there are rules for "packing" member functions into various…

Speed up for loop with numpy

Submitted by 蹲街弑〆低调 on 2020-01-14 03:46:06
Question: How can the following for-loop be sped up with numpy? I guess some fancy indexing trick can be used here, but I have no idea which one (can einsum be used here?).
a = 0
for i in range(len(b)):
    a += numpy.mean(C[d, e, f + b[i]]) * g[i]
Edit: C is a numpy 3D array of shape comparable to (20, 1600, 500). d, e, f are indices of points that are "interesting" (d, e and f have the same length, around 900). b and g have the same length (around 50). The mean is taken over all the points in C with the indices d, e…
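For reference, a broadcast-indexing version of the loop above is sketched below, with synthetic C, d, e, f, b, g of the stated shapes and under the assumption that every f + b[i] stays inside the last axis of C:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.random((20, 1600, 500))
d = rng.integers(0, 20, 900)
e = rng.integers(0, 1600, 900)
f = rng.integers(0, 450, 900)      # keeps f + b inside the last axis
b = rng.integers(0, 50, 50)
g = rng.random(50)

# original loop
a_loop = 0.0
for i in range(len(b)):
    a_loop += np.mean(C[d, e, f + b[i]]) * g[i]

# vectorized: broadcast the (900, 1) point indices against the (1, 50) offsets,
# average over the 900 points, then weight by g
samples = C[d[:, None], e[:, None], f[:, None] + b[None, :]]   # shape (900, 50)
a_vec = samples.mean(axis=0) @ g

assert np.isclose(a_loop, a_vec)
```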

Image Optimization

Submitted by 倾然丶 夕夏残阳落幕 on 2020-01-14 03:32:44
Question: I want to know whether converting an image (GIF or JPEG) to PNG8 using YSlow's Smush.it will improve the site's performance. Will that work in IE6? Answer 1: It depends on the image. PNG is suited to images with blocks of color, whereas JPEG is good for photo-type images. Smush.it will shave off any extraneous bytes, reducing the file size, but if you have many small images in separate files, you should consider spriting them in order to reduce the number of connections required to load…
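For completeness, here is a hedged sketch of the PNG8 conversion itself using Pillow rather than Smush.it; the file names are placeholders:

```python
import os
from PIL import Image  # pip install Pillow

src = "input.gif"          # placeholder path
dst = "output_png8.png"

# quantize to a 256-color palette ("PNG8") and let Pillow shrink the output
img = Image.open(src).convert("RGB")
img.quantize(colors=256).save(dst, optimize=True)

print(os.path.getsize(src), "->", os.path.getsize(dst), "bytes")
```

As far as IE6 goes, a fully opaque or binary-transparent PNG8 generally displays fine there; full alpha transparency does not.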

using dynamic criteria in SQL query [duplicate]

Submitted by 血红的双手。 on 2020-01-14 03:12:12
Question: This question already has answers here: How to search with multiple criteria from a database with SQL? (6 answers). Closed 5 months ago. If I have a query that runs a search with optional parameters, my usual approach is to pass in NULL for any unused criteria and have a WHERE clause that looks like WHERE (@Param IS NULL OR Field=@Param) AND (@Param2 IS NULL OR Field2=@Param2). If I have a condition that is a little more complex to evaluate, say a LIKE clause, is SQL Server going to…
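One common alternative to the (@Param IS NULL OR ...) pattern is to build only the clauses that were actually supplied, still using bound parameters. The sketch below uses Python and SQLite purely for illustration; it is not necessarily what the linked duplicate recommends, and the table and columns are made up:

```python
import sqlite3

def search(conn, name=None, city=None):
    """Build a WHERE clause only from the criteria that were supplied."""
    clauses, params = [], []
    if name is not None:
        clauses.append("name LIKE ?")
        params.append(f"%{name}%")
    if city is not None:
        clauses.append("city = ?")
        params.append(city)
    sql = "SELECT * FROM customers"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, city TEXT)")
conn.execute("INSERT INTO customers VALUES ('Ada', 'London'), ('Bob', 'Paris')")
print(search(conn, city="Paris"))
```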

PHP cli Memory usage optimization

Submitted by 寵の児 on 2020-01-14 03:07:35
Question: I am trying to write a custom url_rewriter for Squid. I also want to use other url_rewriter programs such as squidGuard, so I need a wrapper that can drive both (or any other program). When I try to do the loop in PHP (that is how Squid communicates with external programs: over STDIN/STDOUT, it gives you a URL and you have to send the new one, or the old one, back), the memory usage is devastating even when the loop does nothing. I've switched to wrapping it with another bash script that is only a few lines long, and it loops…
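For comparison, a rewriter loop of the kind described takes only a few lines. The sketch below is Python rather than PHP and assumes Squid's classic one-line-per-request helper protocol (URL plus metadata on stdin, the rewritten or unchanged URL echoed back on stdout, flushed after every reply); newer Squid releases expect a slightly different reply format, so treat this as an outline only, and the blocking rule is purely hypothetical:

```python
import sys

# Squid writes one request per line on stdin and expects one reply line on stdout.
# Flushing after every reply is essential, otherwise Squid blocks waiting for us.
for line in sys.stdin:
    fields = line.split()
    if not fields:
        continue
    url = fields[0]
    # hypothetical rewrite rule; real logic (or a call out to squidGuard) goes here
    if url.startswith("http://blocked.example/"):
        url = "http://localhost/blocked.html"
    sys.stdout.write(url + "\n")
    sys.stdout.flush()
```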