optimization

R optimization - Passing objective and gradient function arguments as a list

拜拜、爱过 submitted on 2020-01-14 13:38:30

Question: I have a function that evaluates the gradient and the objective value simultaneously. I want to optimize it with respect to an objective function. How do I pass the objective and gradient as a list to optimx? The example below illustrates the problem. Suppose I want to find the smallest non-negative root of the polynomial x^4 - 3*x^2 + 2*x + 3. Its gradient is 4*x^3 - 6*x + 2. I use the method nlminb in optimx, as shown below:

    optimx(par = 100, method = "nlminb",
           fn = function(x) x^4 - 3*x^2 + 2*x + 3,
           gr = function(x) 4*x^3 - 6*x + 2)
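For comparison, here is a minimal sketch of the same "one function returns value and gradient" pattern in Python with SciPy. This is an added illustration, not part of the original R thread; scipy.optimize.minimize with jac=True is assumed as the closest analogue of what the asker wants from optimx:

    # Illustrative Python/SciPy analogue; the thread itself is about R's optimx.
    # With jac=True, minimize() expects the callable to return (value, gradient).
    import numpy as np
    from scipy.optimize import minimize

    def f_and_grad(x):
        # objective x^4 - 3x^2 + 2x + 3 and its derivative 4x^3 - 6x + 2,
        # computed in a single call and returned as a pair
        val = x[0]**4 - 3*x[0]**2 + 2*x[0] + 3
        grad = np.array([4*x[0]**3 - 6*x[0] + 2])
        return val, grad

    res = minimize(f_and_grad, x0=np.array([100.0]), jac=True, method="L-BFGS-B")
    print(res.x)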

Counting elements “less than x” in an array

自闭症网瘾萝莉.ら submitted on 2020-01-14 13:31:33

Question: Let's say you want to find the first occurrence of a value in a sorted array. For small arrays (where things like binary search don't pay off), you can achieve this by simply counting the number of values less than that value: the result is the index you are after. In x86 you can use adc (add with carry) for an efficient branch-free implementation of that approach (with the start pointer in rdi, the length in rsi, and the value to search for in edx):

    xor eax, eax
    lea rdi, [rdi + rsi*4]
    ;
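As a language-neutral sketch of the same counting idea (an added illustration, not the asker's code): in a sorted array, the number of elements smaller than the target equals the index of its first occurrence, and a vectorized comparison computes that count without branches at the Python level:

    # Find the index of the first occurrence of `target` in a sorted array
    # by counting the elements that compare less than it.
    import numpy as np

    def first_occurrence(sorted_arr, target):
        # each comparison contributes 0 or 1; summing them mirrors the
        # adc-based accumulation in spirit
        return int(np.count_nonzero(sorted_arr < target))

    a = np.array([1, 1, 2, 2, 2, 3, 5])
    print(first_occurrence(a, 2))  # -> 2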

Better way to find GameObjects without GameObject.Find

老子叫甜甜 submitted on 2020-01-14 13:26:51

Question: Using Unity, I'm working on a game where all GameObjects with a certain tag vanish/reappear fairly regularly (every 10 seconds on average). I use GameObject.FindGameObjectsWithTag() to create a GameObject[] that I enumerate every time the objects need to be made visible/invisible. I cannot call it once, in Start, as new GameObjects are created while playing. I thought that it would be worse to access and change the GameObject[] every time something got created/destroyed. Is
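A common alternative to scene-wide searches is the self-registration pattern: each object adds itself to a shared collection when it becomes active and removes itself when it is disabled, so no tag scan is ever needed. The sketch below is hypothetical and written in Python for brevity; in Unity this logic would live in a C# MonoBehaviour's OnEnable/OnDisable hooks, and the class and method names here are illustrative only:

    # Hypothetical sketch of the self-registration pattern (not Unity C#).
    class Creature:
        registry = set()  # shared collection replacing FindGameObjectsWithTag

        def on_enable(self):       # called when the object appears
            Creature.registry.add(self)

        def on_disable(self):      # called when the object vanishes
            Creature.registry.discard(self)

    # Toggling visibility then iterates the registry directly:
    #   for c in Creature.registry: c.set_visible(False)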

Iterate over lists with a particular sum

限于喜欢 submitted on 2020-01-14 12:09:57

Question: I would like to iterate over all lists of length n whose elements sum to 2. How can you do this efficiently? Here is a very inefficient method for n = 10; ultimately I would like to do this for n > 25.

    import itertools

    n = 10
    for L in itertools.product([-1, 1], repeat=n):
        if sum(L) == 2:
            print(L)  # do something with L

Answer 1: You can only have a sum of 2 if you have two more +1s than -1s, so for n == 24:

    a_solution = [-1] * 11 + [1] * 13

Now you can just use itertools.permutations to get every permutation of this
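One caveat worth noting (an addition, not from the truncated answer): itertools.permutations of a multiset yields many duplicate lists; choosing the positions of the +1 entries with itertools.combinations produces each distinct list exactly once.

    # Enumerate each distinct ±1 list of length n summing to 2 exactly once,
    # by choosing which positions hold +1.
    import itertools

    def lists_summing_to_two(n):
        # with k entries of +1, the sum is 2k - n == 2, so n must be even
        assert n % 2 == 0
        k = (n + 2) // 2
        for ones in itertools.combinations(range(n), k):
            L = [-1] * n
            for i in ones:
                L[i] = 1
            yield L

    for L in lists_summing_to_two(10):
        pass  # do something with L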

Processing: How can I improve the framerate in my program?

寵の児 submitted on 2020-01-14 10:34:09

Question: So I've been working in Processing for a few weeks now, and though I'm not experienced in programming, I have moved on to more complex projects. I'm programming an evolution simulator that spawns creatures with random properties. Eventually I'll add reproduction, but as of now the creatures just sort of float around the screen and follow the mouse somewhat. The sketch interacts with sound from the line in, but I commented those parts out so that it can be viewed on the canvas; it shouldn't really

Mysteries of C++ optimization

倖福魔咒の submitted on 2020-01-14 10:13:49

Question: Take the two following snippets:

    int main() {
        unsigned long int start = utime();  // utime() is presumably the asker's timing helper
        __int128_t n = 128;
        for (__int128_t i = 1; i < 1000000000; i++)
            n = (n * i);
        unsigned long int end = utime();
        cout << (unsigned long int) n << endl;
        cout << end - start << endl;
    }

and

    int main() {
        unsigned long int start = utime();
        __int128_t n = 128;
        for (__int128_t i = 1; i < 1000000000; i++)
            n = (n * i) >> 2;
        unsigned long int end = utime();
        cout << (unsigned long int) n << endl;
        cout << end - start << endl;
    }

I am benchmarking 128 bit

Are numpy arrays and Python lists optimized to grow dynamically?

五迷三道 submitted on 2020-01-14 09:43:44

Question: Over time I have done many things that require using the list's .append() function, and also the numpy.append() function for numpy arrays. I noticed that both grow really slowly when the arrays are big. I need an array that grows dynamically to a size of about 1 million elements. I could implement this myself, just like std::vector is implemented in C++, by adding a buffer length (reserve length) that is not accessible from the outside. But do I have to reinvent the wheel? I imagine it
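As a hedged illustration of the difference (an addition, not from the thread): list.append is amortized O(1) because CPython over-allocates the underlying buffer, while numpy.append copies the whole array on every call, so the usual idiom is to build in a list and convert once at the end.

    # Sketch contrasting the two growth strategies; timings are illustrative.
    import time
    import numpy as np

    N = 10_000

    t0 = time.perf_counter()
    out = []
    for i in range(N):
        out.append(i)          # amortized O(1): the list over-allocates
    arr = np.array(out)        # one final copy into a numpy array
    t1 = time.perf_counter()

    grow = np.empty(0, dtype=int)
    for i in range(N):
        grow = np.append(grow, i)   # O(n) copy on every single call
    t2 = time.perf_counter()

    print(f"list then convert: {t1 - t0:.3f}s, np.append loop: {t2 - t1:.3f}s")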

Why is the most natural query (i.e. using INNER JOIN instead of LEFT JOIN) very slow?

我是研究僧i submitted on 2020-01-14 07:56:27

Question: This query takes too long.

    explain analyze
    select c.company_rec_id, c.the_company_code, c.company
    from tlist t
    -- it is questionable why this query becomes fast when using left join;
    -- the most natural query is inner join...
    join mlist m using (mlist_rec_id)
    join parcel_application ord_app using (parcel_application_rec_id)
    join parcel ord using (parcel_rec_id)
    join company c on c.company_rec_id = ord.client_rec_id -- ...questionable
    where ( 'cadmium' = ''
        or exists ( select * from mlist_detail md

Python multiprocessing run time per process increases with number of processes

ⅰ亾dé卋堺 submitted on 2020-01-14 06:53:08

Question: I have a pool of workers that perform the same identical task, and I send each one a distinct clone of the same data object. Then I measure the run time separately for each process inside the worker function. With one process, the run time is 4 seconds. With 3 processes, the run time for each process goes up to 6 seconds. With more complex tasks, this increase is even more pronounced. There are no other CPU-hogging processes running on my system, and the workers don't use shared memory (as far as I
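A minimal sketch of the kind of measurement the question describes (an assumed reconstruction; the excerpt does not show the actual task or data object, so the workload below is a stand-in):

    # Identical CPU-bound work per worker, with run time measured inside
    # the worker function itself, as in the question.
    import multiprocessing as mp
    import time

    def worker(data):
        start = time.perf_counter()
        total = 0
        for x in data:             # stand-in for the real task
            total += x * x
        return time.perf_counter() - start

    if __name__ == "__main__":
        data = list(range(2_000_000))
        for n_procs in (1, 3):
            with mp.Pool(n_procs) as pool:
                # each worker gets its own clone of the same data object
                times = pool.map(worker, [list(data) for _ in range(n_procs)])
            print(n_procs, [f"{t:.2f}s" for t in times])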