optimization

How can I conditionally instantiate an object?

Submitted by 安稳与你 on 2020-07-09 04:33:17
Question: I'm trying to do some conditional work like so: Type object; if (cond) { doSomeStuff(); object = getObject(); doMoreStuff(); } else { doSomeOtherStuff(); object = getDifferentObject(); doEvenMoreStuff(); } use(object); The only way I can think of to solve this is to duplicate the use code (which is actually inline code in my application) and declare object in each branch of the if block. If I wanted to avoid duplicate code I'd have to wrap it in some use function, as I have above. In a real…
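One common way to avoid both the duplicated use code and the per-branch declarations is to move the branching into a factory function and call the shared code once on its result. A minimal Python sketch of that shape — the names get_object/get_different_object are hypothetical stand-ins for the asker's getObject()/getDifferentObject(), and the side-effect calls are elided into comments:

```python
def get_object():
    """Hypothetical stand-in for the asker's getObject()."""
    return "A"

def get_different_object():
    """Hypothetical stand-in for the asker's getDifferentObject()."""
    return "B"

def make_object(cond):
    """Each branch builds the object; the caller uses it exactly once."""
    if cond:
        # doSomeStuff(); ... doMoreStuff()
        return get_object()
    else:
        # doSomeOtherStuff(); ... doEvenMoreStuff()
        return get_different_object()

obj = make_object(True)
# use(obj) now appears once, outside both branches
```

The same factoring works in statically typed languages (a function returning Type, or a ternary/conditional expression when the branches are simple enough).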

How can I use broadcasting with NumPy to speed up this correlation calculation?

Submitted by 喜欢而已 on 2020-07-08 21:29:28
Question: I'm trying to take advantage of NumPy broadcasting and backend array computations to significantly speed up this function. Unfortunately, it doesn't scale well, so I'm hoping to greatly improve its performance. Right now the code isn't properly utilizing broadcasting for the computations. I'm using WGCNA's bicor function as a gold standard, as it is the fastest implementation I know of at the moment. The Python version outputs the same results as the R function. # =================…
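The broadcasting pattern at issue can be illustrated with a plain Pearson correlation matrix (a simpler relative of bicor, which is median-based): standardize every column with broadcast operations, then obtain all pairwise correlations from a single matrix product instead of a Python loop. This is a sketch of the vectorization idea, not the asker's bicor port:

```python
import numpy as np

def corr_matrix(X):
    """All pairwise column correlations of X via broadcasting + one matmul."""
    Z = X - X.mean(axis=0)               # broadcast: subtract each column mean
    Z = Z / np.sqrt((Z ** 2).sum(axis=0))  # broadcast: divide by column norms
    return Z.T @ Z                        # (ncols, ncols) correlation matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
C = corr_matrix(X)                        # matches np.corrcoef(X, rowvar=False)
```

For bicor the same skeleton applies, with the mean/std replaced by the median and MAD-based weights; the key speedup is still pushing the pairwise loop into one matrix product.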

R nlminb: What does false convergence actually mean?

Submitted by 爷，独闯天下 on 2020-07-05 02:39:44
Question: I'm using the function nlminb to maximise a function and got convergence (convergence = 0) with the message "false convergence". I checked the documentation but found no answer. I also tried to find the PORT documentation for the function nlminb, without success. Can anyone point me to the PORT documentation for nlminb, or explain what false convergence means? I also tried other optimization functions, but although nlminb is a bit obscure, it seems to converge faster than any other function to…
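nlminb's codes and messages come from the underlying PORT library, so there is no direct Python equivalent; but the diagnostic habit the question points at — reading the solver's own verdict rather than trusting the returned point — carries over to any optimizer. An analogous sketch using scipy.optimize.minimize (a swapped-in illustration, not nlminb itself), on a toy objective:

```python
from scipy.optimize import minimize

# Toy smooth objective standing in for the asker's function.
res = minimize(lambda x: (x[0] - 2.0) ** 2, x0=[0.0], method="BFGS")

# Inspect the solver's convergence report, not just res.x:
# res.success (bool), res.status (solver-specific code), res.message (text).
print(res.success, res.status, res.message)
```

In nlminb's case the analogous fields are the convergence code and message in the returned list; "false convergence" is the PORT library's own diagnostic string, which is why its precise meaning has to come from the PORT documentation.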

SSE optimized emulation of 64-bit integers

Submitted by 我的未来我决定 on 2020-07-04 13:10:05
Question: For a hobby project I'm working on, I need to emulate certain 64-bit integer operations on an x86 CPU, and it needs to be fast. Currently, I'm doing this via MMX instructions, but that's really a pain to work with, because I have to flush the FP register state all the time (and because most MMX instructions deal with signed integers, and I need unsigned behavior). So I'm wondering if the SSE/optimization gurus here on SO can come up with a better implementation using SSE. The operations I…
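Whatever instruction set ends up being used, the heart of emulating a 64-bit unsigned add on 32-bit lanes is propagating the carry between the halves, and the carry can be detected from unsigned wraparound of the low word. A scalar Python sketch of that logic (illustration of the arithmetic only, not SSE code):

```python
MASK32 = 0xFFFFFFFF

def add64(lo_a, hi_a, lo_b, hi_b):
    """Unsigned 64-bit add from 32-bit halves, propagating the carry."""
    lo = (lo_a + lo_b) & MASK32
    carry = 1 if lo < lo_a else 0       # wraparound of the low word => carry
    hi = (hi_a + hi_b + carry) & MASK32
    return lo, hi

lo, hi = add64(0xFFFFFFFF, 0x0, 0x1, 0x0)
# → (0, 1): the low word wraps and the carry lands in the high word
```

In SSE terms the same comparison-based carry detection is the usual workaround for the lack of an unsigned compare: bias both operands by 0x80000000 before the signed compare, then add the resulting mask into the high lane.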

Graphing Scipy optimize.minimize convergence results each iteration?

Submitted by 笑着哭i on 2020-07-03 17:34:50
Question: I would like to perform some tests on my optimisation routine using scipy.optimize.minimize, in particular graphing the convergence (or rather the objective function) at each iteration, over multiple tests. Suppose I have the following linearly constrained quadratic optimisation problem: minimise: x_i Q_ij x_j + a|x_i| subject to: sum(x_i) = 1 I can code this as: def _fun(x, Q, a): c = np.einsum('i,ij,j->', x, Q, x) p = np.sum(a * np.abs(x)) return c + p def _constr(x): return np.sum(x) - 1 And…
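The standard hook for this is minimize's callback argument, which is invoked once per iteration with the current iterate; appending the objective value there yields a per-iteration series ready to plot. A sketch using the _fun/_constr from the question, with a hypothetical small Q and the SLSQP method (which supports equality constraints):

```python
import numpy as np
from scipy.optimize import minimize

def _fun(x, Q, a):
    c = np.einsum('i,ij,j->', x, Q, x)
    p = np.sum(a * np.abs(x))
    return c + p

def _constr(x):
    return np.sum(x) - 1

Q = np.eye(3)        # hypothetical stand-in for the asker's Q_ij
a = 0.1
history = []         # objective value recorded after each iteration

res = minimize(
    _fun, x0=np.array([0.9, 0.05, 0.05]), args=(Q, a),
    method="SLSQP",
    constraints={"type": "eq", "fun": _constr},
    callback=lambda xk: history.append(_fun(xk, Q, a)),
)
# plt.plot(history) would now draw the convergence curve for this run;
# repeating over multiple tests gives one curve per run.
```

Note the callback receives only the iterate xk (for most methods), so the objective has to be re-evaluated inside it; for expensive objectives, a small wrapper class that caches the last function value avoids the extra evaluation.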

Java getChars method in the Integer class: why is it using bitwise operations instead of arithmetic?

Submitted by 我与影子孤独终老i on 2020-07-03 07:19:11
Question: I was examining the Integer class's source code (JDK 8) to understand how an int gets converted to a String. It seems to use a package-private method called getChars (line 433) to convert an int to a char array. While the code is not that difficult to understand, there are multiple lines where bitwise shift operations are used instead of simple arithmetic multiplication/division, such as the following: // really: r = i - (q * 100); r = i - ((q << 6) +…
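The JDK comment gives the trick away: 100 = 64 + 32 + 4, so q * 100 can be rewritten as three shifts and two adds, which on older hardware could be cheaper than an integer multiply. A quick Python check of the identity (illustration only; the JDK code is of course Java):

```python
def times100_shifts(q):
    """q * 100 via shifts, mirroring the JDK decomposition 100 = 64 + 32 + 4."""
    return (q << 6) + (q << 5) + (q << 2)

# In getChars, r = i - (q * 100) is written as
#              r = i - ((q << 6) + (q << 5) + (q << 2))
print(times100_shifts(7))  # → 700
```

Modern JIT compilers typically perform this strength reduction themselves, which is why the hand-written form mostly reflects the era the code was written in.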

How to write a generator for a Keras model's predict_generator

Submitted by 耗尽温柔 on 2020-06-29 06:49:24
Question: I have a trained Keras model, and I am trying to run predictions with CPU only. I want this to be as quick as possible, so I thought I would use predict_generator with multiple workers. All of the data for my prediction tensor is loaded into memory beforehand. Just for reference, array is a list of tensors, with the first tensor having shape [nsamples, x, y, nchannels]. I made a thread-safe generator following the instructions here (I followed this when using fit_generator as well). class…
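The usual pattern for a thread-safe generator — needed when predict_generator pulls batches from several worker threads — is to guard next() with a lock. A minimal sketch under the question's assumptions (array is a list of NumPy tensors whose first element has shape [nsamples, x, y, nchannels]); the model call at the end is hypothetical:

```python
import threading
import numpy as np

class ThreadSafeIter:
    """Wrap an iterator so multiple worker threads can pull batches safely."""
    def __init__(self, it):
        self.it = it
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        with self.lock:          # only one thread advances the iterator at a time
            return next(self.it)

def batch_generator(array, batch_size):
    """Yield consecutive slices of every tensor, batched along axis 0."""
    n = array[0].shape[0]
    for start in range(0, n, batch_size):
        yield [t[start:start + batch_size] for t in array]

data = [np.zeros((10, 4, 4, 3))]   # one tensor: [nsamples, x, y, nchannels]
gen = ThreadSafeIter(batch_generator(data, batch_size=4))
# model.predict_generator(gen, steps=3, workers=2)  # hypothetical usage
```

For prediction specifically, batch order matters when reassembling results, so a finite, ordered generator like this (or a keras.utils.Sequence, which Keras can shard deterministically across workers) is safer than an infinite one.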