Optimization

Optimizing subset sum implementation

淺唱寂寞╮ submitted on 2019-12-29 01:30:33

Question: I'm working on a solution to a variant of the subset sum problem, using the code below. The problem entails generating subsets of 11 ints from a larger set (superset) and checking whether each subset's sum matches a specific value (endsum).

    #include <stdio.h>
    #include <stdlib.h>
    #include <assert.h>

    int endsum = 0, supersetsize = 0, done = 0;
    int superset[] = {1,30,10,7,11,27,3,5,6,50,45,32,25,67,13,37,19,52,18,9};
    int combo = 0;

    int searchForPlayerInArray(int arr[], int player) {
        for (int i = 0; i < 11; i++) {
            if (arr…
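As a reference point for what the search computes: with a 20-element superset there are only C(20, 11) = 167,960 subsets of size 11, so exhaustive enumeration is still cheap at this scale. A brute-force sketch in Python (not the asker's C code; the function name is mine):

```python
from itertools import combinations

def has_subset_with_sum(values, k, target):
    """Brute force: does any k-element subset of `values` sum to `target`?"""
    return any(sum(c) == target for c in combinations(values, k))
```

For larger supersets, this is the baseline a meet-in-the-middle or dynamic-programming approach would be measured against.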

In R, how do I find the optimal variable to maximize or minimize correlation between several datasets

旧时模样 submitted on 2019-12-28 19:03:25

Question: I am able to do this easily in Excel, but my dataset has gotten too large. In Excel, I would use Solver:

    Columns A, B, C, D = random numbers
    Column E = a random number (which I want to maximize the correlation to)
    Column F = A*x + B*y + C*z + D*j, where x, y, z, j are coefficients produced by Solver

In a separate cell, I would have CORREL(E, F). In Solver, I would set the objective of maximizing CORREL(E, F), by changing variables x, y, … and setting certain constraints: 1. A, B, C, D have to be between 0 and 1; 2. A+B+C…
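The direct analog of Solver is a bounded numerical optimizer. A sketch with `scipy.optimize.minimize` (the data, seed, and choice of L-BFGS-B are mine, not from the question; the truncated sum constraint could be added with `method="SLSQP"` and a `constraints=` argument):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.random((200, 4))   # columns A, B, C, D
e = rng.random(200)        # column E, the correlation target

def neg_corr(w):
    f = X @ w              # column F = A*x + B*y + C*z + D*j
    return -np.corrcoef(e, f)[0, 1]

# Maximize CORREL(E, F) by minimizing its negative, with each coefficient in [0, 1].
res = minimize(neg_corr, x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4, method="L-BFGS-B")
best_corr = -res.fun
```

In R the equivalent workhorse would be `optim()` with `method = "L-BFGS-B"`.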

Is there a faster way to convert an arbitrary large integer to a big endian sequence of bytes?

醉酒当歌 submitted on 2019-12-28 17:57:13

Question: I have this Python code to do this:

    from struct import pack as _pack

    def packl(lnum, pad=1):
        if lnum < 0:
            raise RangeError("Cannot use packl to convert a negative integer "
                             "to a string.")
        count = 0
        l = []
        while lnum > 0:
            l.append(lnum & 0xffffffffffffffffL)
            count += 1
            lnum >>= 64
        if count <= 0:
            return '\0' * pad
        elif pad >= 8:
            lens = 8 * count % pad
            pad = ((lens != 0) and (pad - lens)) or 0
            l.append('>' + 'x' * pad + 'Q' * count)
            l.reverse()
            return _pack(*l)
        else:
            l.append('>' + 'Q' *…
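On Python 3 this whole function collapses to the built-in `int.to_bytes`. A sketch that reproduces the zero case and (as I read the truncated code) the pad-to-a-multiple behaviour, so the padding rule is partly an assumption:

```python
def packl(n, pad=1):
    """Big-endian bytes for a non-negative int, length rounded up to a multiple of `pad`."""
    if n < 0:
        raise ValueError("cannot convert a negative integer")
    if n == 0:
        return b"\x00" * pad
    length = (n.bit_length() + 7) // 8   # minimal byte count
    length += (-length) % pad            # round up to a multiple of pad
    return n.to_bytes(length, "big")
```

`int.to_bytes` is implemented in C, so it is typically much faster than assembling 64-bit chunks through `struct.pack`.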

Does C# inline properties?

不问归期 submitted on 2019-12-28 14:44:16

Question: Does C# inline access to properties? I'm aware of the 32-byte (instruction?) limit on the JIT for inlining, but will it inline properties, or just pure method calls?

Answer 1: It's up to the JIT (the C# compiler doesn't do any inlining as far as I'm aware), but I believe the JIT will inline trivial properties in most cases. Note that it won't inline members of types deriving from MarshalByRefObject, which includes System.Windows.Forms.Control (via System.ComponentModel.Component). I've also seen…

Equivalent to rowMeans() for min()

喜欢而已 submitted on 2019-12-28 12:30:10

Question: I have seen this question asked multiple times on the R mailing list, but still could not find a satisfactory answer. Suppose I have a matrix m:

    m <- matrix(rnorm(10000000), ncol=10)

I can get the mean of each row by:

    system.time(rowMeans(m))
       user  system elapsed
      0.100   0.000   0.097

But obtaining the minimum value of each row by

    system.time(apply(m, 1, min))
       user  system elapsed
     16.157   0.400  17.029

takes more than 100 times as long. Is there a way to speed this up?

Answer 1: You could use pmin, but you…
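The gap the question measures is "one vectorized call vs. a per-row interpreted loop". The same idea sketched in NumPy (an analog of the R answer, not the answer itself; the array size is scaled down):

```python
import numpy as np

# 1000 rows of 10 standard-normal values (smaller than the question's 1e7 cells).
m = np.random.default_rng(1).normal(size=(1000, 10))

row_mins = m.min(axis=1)                       # vectorized row-wise minimum, one C-level pass
loop_mins = np.array([min(r) for r in m])      # per-row loop: same result, far slower at scale
```

As in R, the vectorized reduction runs the inner loop in compiled code, which is where the ~100x factor comes from.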

Repeated integer division by a runtime constant value

本秂侑毒 submitted on 2019-12-28 12:03:54

Question: At some point in my program I compute an integer divisor d. From that point onward, d is going to be constant. Later in the code I will divide by that d several times, performing an integer division, since the value of d is not a compile-time-known constant. Given that integer division is relatively slow compared to other kinds of integer arithmetic, I would like to optimize it. Is there some alternative format that I could store d in, so that the division process would perform…
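One standard answer (used by optimizing compilers for compile-time constants, and packaged for runtime constants by libraries such as libdivide) is to precompute a "magic number" so that every later division becomes a multiply and a shift. A Python sketch of the round-up variant (function names are mine; the actual speedup only materializes in a compiled language, where the multiply is a single instruction):

```python
def make_divider(d, bits=32):
    """Precompute magic/shift so that divide(n) == n // d for all 0 <= n < 2**bits, d >= 1."""
    assert d >= 1
    shift = bits + (d - 1).bit_length()   # bits + ceil(log2 d) fractional bits
    magic = ((1 << shift) + d - 1) // d   # ceil(2**shift / d)
    def divide(n):
        # One multiplication and one shift instead of a hardware division.
        return (n * magic) >> shift
    return divide
```

The shift width is chosen so the rounding error of the fixed-point reciprocal stays below 1/d over the whole input range, which makes the floor come out exactly right.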

Fastest way to sort 32bit signed integer arrays in JavaScript?

匆匆过客 submitted on 2019-12-28 11:43:58

Question:

    _radixSort_0 = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
                    … ]   // a long literal array of zeroes; the excerpt is truncated here
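The excerpt above is the count-table initialization of a JavaScript radix sort. To illustrate the underlying technique (not the asker's code), here is a byte-wise LSD radix sort for 32-bit signed integers in Python; flipping the sign bit maps signed order onto unsigned byte order, which is the standard trick for the signed case:

```python
def radix_sort_i32(values):
    """LSD radix sort for 32-bit signed ints: four stable byte passes."""
    # Flip the sign bit: maps [-2**31, 2**31) onto [0, 2**32) order-preservingly.
    keys = [(v ^ 0x80000000) & 0xFFFFFFFF for v in values]
    for shift in (0, 8, 16, 24):          # least significant byte first
        buckets = [[] for _ in range(256)]
        for k in keys:
            buckets[(k >> shift) & 0xFF].append(k)
        keys = [k for bucket in buckets for k in bucket]
    # Undo the flip and re-interpret as signed.
    out = []
    for k in keys:
        u = k ^ 0x80000000
        out.append(u - (1 << 32) if u >= (1 << 31) else u)
    return out
```

In JavaScript the same structure applies, but the buckets are replaced by a flat `Uint32Array` of counts plus prefix sums, which is what the `_radixSort_0` table above is for.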

C++ cache aware programming

元气小坏坏 submitted on 2019-12-28 07:40:19

Question: Is there a way in C++ to determine the CPU's cache size? I have an algorithm that processes a lot of data and I'd like to break this data down into chunks such that they fit into the cache. Is this possible? Can you give me any other hints on programming with cache size in mind (especially in regard to multithreaded/multicore data processing)? Thanks!

Answer 1: According to "What Every Programmer Should Know About Memory" by Ulrich Drepper, you can do the following on Linux: Once we have a formula…
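Answer 1 is cut off above; on modern Linux, cache geometry is also exposed directly in sysfs, so a program can read it instead of measuring. A best-effort sketch (the sysfs path and the fallback value of 64 are my assumptions, not from the answer; index0 is typically the L1 data cache):

```python
def l1_cache_line_size(default=64):
    """Read a cache line size from Linux sysfs; fall back to `default` elsewhere."""
    path = "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size"
    try:
        with open(path) as f:
            return int(f.read())
    except (OSError, ValueError):
        return default   # non-Linux, or sysfs unavailable
```

In C++ the usual portable options are `sysconf(_SC_LEVEL1_DCACHE_LINESIZE)` on glibc or the CPUID instruction on x86.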