mathematical-optimization

3 dimensional bin packing algorithms

↘锁芯ラ submitted on 2019-11-26 22:32:43
Question: I'm faced with a 3-dimensional bin packing problem and am currently conducting some preliminary research into which algorithms/heuristics currently yield the best results. Since the problem is NP-hard, I do not expect to find the optimal solution in every case, but I was wondering: 1) What are the best exact solvers? Branch and bound? What problem instance sizes can I expect to solve with reasonable computing resources? 2) What are the best heuristic solvers? 3) What off-the-shelf …
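As a baseline for the heuristic side of the question, first-fit decreasing is the usual starting point. Below is a minimal Python sketch, assuming the problem is relaxed to item volumes only; a real 3D packer must track free space geometrically, and all names here are illustrative:

    from dataclasses import dataclass, field

    @dataclass
    class Bin:
        capacity: float              # usable volume of one bin
        used: float = 0.0
        items: list = field(default_factory=list)

    def first_fit_decreasing(volumes, bin_capacity):
        """Place items (by volume) into the first bin with enough room left.

        This relaxes the 3D problem to volumes only, so it shows the FFD
        skeleton, not a geometric packer."""
        bins = []
        for v in sorted(volumes, reverse=True):   # largest items first
            for b in bins:
                if b.used + v <= b.capacity:      # first open bin that fits
                    b.used += v
                    b.items.append(v)
                    break
            else:                                 # no open bin fits: open one
                bins.append(Bin(bin_capacity, v, [v]))
        return bins

    print(len(first_fit_decreasing([0.6, 0.5, 0.4, 0.3, 0.2], bin_capacity=1.0)))  # -> 2

The same skeleton generalizes to geometric placement once the volume test is replaced by a real fit check (shelf, guillotine, or extreme-point schemes).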

How to interpret “loss” and “accuracy” for a machine learning model

一世执手 submitted on 2019-11-26 22:28:13
Question: When I train my neural network with Theano or TensorFlow, they report a variable called "loss" per epoch. How should I interpret this variable? Is higher loss better or worse, and what does it mean for the final performance (accuracy) of my neural network? Answer 1: The lower the loss, the better the model (unless the model has over-fitted to the training data). The loss is calculated on training and validation data, and its interpretation is how well the model is doing on these two sets. Unlike …
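A small illustration of why loss and accuracy measure different things: two classifiers with identical accuracy but different cross-entropy loss. This is an illustrative numpy sketch, not code from the thread:

    import numpy as np

    # Two models with the same accuracy but different loss.
    y_true = np.array([1, 0, 1, 1])            # binary labels
    p_a = np.array([0.9, 0.1, 0.8, 0.7])       # confident, correct probabilities
    p_b = np.array([0.6, 0.4, 0.55, 0.51])     # barely correct probabilities

    def cross_entropy(y, p):
        # Mean binary cross-entropy: lower is better.
        return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    def accuracy(y, p):
        # Fraction of predictions on the right side of 0.5.
        return np.mean((p >= 0.5) == (y == 1))

    for name, p in [("A", p_a), ("B", p_b)]:
        print(name, "loss:", round(cross_entropy(y_true, p), 3),
              "accuracy:", accuracy(y_true, p))
    # Both models score 1.0 accuracy, but model A has the lower (better) loss.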

Why should weights of Neural Networks be initialized to random numbers?

梦想与她 submitted on 2019-11-26 21:18:11
Question: I am trying to build a neural network from scratch. Across the AI literature there is a consensus that weights should be initialized to random numbers so that the network converges faster. But why are a neural network's initial weights initialized to random numbers? I had read somewhere that this is done to "break the symmetry", and that this makes the neural network learn faster. How does breaking the symmetry make it learn faster? Wouldn't initializing the weights to 0 be a better idea? That way the weights would be able to find their values (whether positive or negative) faster? Is there some …
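To see what "breaking the symmetry" means concretely, here is a toy numpy sketch (not from the question) of one gradient step on a network with two hidden units: with all-zero weights the hidden units stay indistinguishable, while random weights let them diverge:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))        # 4 samples, 3 features
    y = rng.normal(size=(4, 1))

    def one_step(W1, W2, lr=0.1):
        h = np.tanh(x @ W1)            # hidden layer, shape (4, 2)
        err = h @ W2 - y               # linear output minus target
        dW2 = h.T @ err
        dW1 = x.T @ ((err @ W2.T) * (1 - h**2))   # backprop through tanh
        return W1 - lr * dW1, W2 - lr * dW2

    # Zero init: the gradient for W1 is zero, so both hidden columns stay
    # identical (here, identically zero) and the units never differentiate.
    W1, W2 = one_step(np.zeros((3, 2)), np.zeros((2, 1)))
    print(np.allclose(W1[:, 0], W1[:, 1]))   # True: symmetry never broken

    # Random init: the columns differ, so the units can learn distinct features.
    W1, W2 = one_step(rng.normal(size=(3, 2)), rng.normal(size=(2, 1)))
    print(np.allclose(W1[:, 0], W1[:, 1]))   # False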

R optimization with equality and inequality constraints

ぐ巨炮叔叔 submitted on 2019-11-26 21:16:41
Question: I am trying to find the local minimum of a function whose parameters have a fixed sum. For example, Fx = 10 - 5x1 + 2x2 - x3, subject to the conditions x1 + x2 + x3 = 15 and (x1, x2, x3) >= 0. That is, the sum of x1, x2, and x3 has a known value, and they are all greater than or equal to zero. In R, it would look something like this: Fx = function(x) {10 - 5*x[1] + 2*x[2] - x[3]} opt = optim(c(1,1,1), Fx, method = "L-BFGS-B", lower=c(0,0,0), upper=c(15,15,15)) I also tried to use inequalities with …
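The fixed-sum condition is an equality constraint, which L-BFGS-B cannot express: it only handles box bounds. Here is a sketch of the same problem in Python with scipy.optimize.minimize and SLSQP, offered as an analogue rather than an R answer:

    import numpy as np
    from scipy.optimize import minimize

    def fx(x):
        return 10 - 5 * x[0] + 2 * x[1] - x[2]

    constraints = [{"type": "eq", "fun": lambda x: x.sum() - 15}]  # x1+x2+x3 = 15
    bounds = [(0, None)] * 3                                       # x >= 0

    res = minimize(fx, x0=np.array([5.0, 5.0, 5.0]),
                   method="SLSQP", bounds=bounds, constraints=constraints)
    print(res.x, res.fun)   # expected: roughly [15, 0, 0] and -65

Since the objective is linear, the minimum sits at a vertex of the feasible simplex, which is why all the mass lands on x1.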

Solving an integer linear program: why are solvers claiming a solvable instance is infeasible?

陌路散爱 submitted on 2019-11-26 17:49:31
Question: I'm trying to solve integer programming problems. I've tried both SCIP and LPSolve. For example, given the final values of a and b, I want to solve for valA in the following C# code: Int32 a = 0, b = 0; a = a*-6 + b + 0x74FA - valA; b = b/3 + a + 0x81BE - valA; a = a*-6 + b + 0x74FA - valA; b = b/3 + a + 0x81BE - valA; // a == -86561, b == -32299. I implemented this as the following integer program in lp format (the truncating division causes a few complications): min: ; +valA >= 0; +valA < 92; …
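One standard way around the truncating division, sketched here with PuLP (an assumed modeling library, not one of the solvers named in the question), is to introduce quotient and remainder variables. This covers the nonnegative case; C#'s truncation toward zero for negative values differs from floor division and needs a separate case split, which is exactly the complication mentioned above:

    from pulp import LpProblem, LpVariable, LpMinimize, LpStatus, value

    # For a nonnegative integer b, the truncating division q = b / 3
    # linearizes as b = 3*q + r with 0 <= r <= 2.
    prob = LpProblem("trunc_div", LpMinimize)
    b = LpVariable("b", lowBound=0, cat="Integer")
    q = LpVariable("q", lowBound=0, cat="Integer")   # q == b / 3 (truncated)
    r = LpVariable("r", lowBound=0, upBound=2, cat="Integer")

    prob += q                 # dummy objective; we only care about feasibility
    prob += b == 14           # pin b to a test value
    prob += b == 3 * q + r    # the division constraint

    prob.solve()
    print(LpStatus[prob.status], value(q), value(r))   # Optimal 4.0 2.0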

mathematical optimization library for Java — free or open source recommendations? [closed]

落爺英雄遲暮 submitted on 2019-11-26 13:18:52
Question: [Closed 6 years ago as not a good fit for the Q&A format.] Does anyone know of such a library that performs mathematical optimization (linear programming, convex optimization, or more general …

Best open source Mixed Integer Optimization Solver [closed]

拟墨画扇 submitted on 2019-11-26 12:36:54
Question: I am using CPLEX to solve huge optimization models (more than 100k variables). Now I'd like to see whether I can find an open-source alternative. I solve mixed integer problems (MILP), and CPLEX works great, but it is very expensive if we want to scale, so I really need to find an alternative or start writing our own ad-hoc optimization library (which would be painful). Any suggestion/insight would be much appreciated. Answer 1: I personally found GLPK better (i.e. faster) than LP_SOLVE. It supports …
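To compare candidates such as GLPK before committing, a modeling layer helps. Here is a sketch using PuLP (my assumption; it is not mentioned in the thread) that builds one tiny MILP and solves it with two different backends; the GLPK run requires the glpsol binary on the PATH, while CBC ships with PuLP:

    from pulp import LpProblem, LpVariable, LpMaximize, GLPK_CMD, PULP_CBC_CMD, value

    prob = LpProblem("knapsack", LpMaximize)
    x = LpVariable("x", lowBound=0, cat="Integer")
    y = LpVariable("y", lowBound=0, cat="Integer")
    prob += 3 * x + 2 * y            # objective
    prob += 2 * x + y <= 10          # capacity constraint

    # Solve the same model with each solver and compare results.
    for solver in (GLPK_CMD(msg=False), PULP_CBC_CMD(msg=False)):
        prob.solve(solver)
        print(type(solver).__name__, value(x), value(y), value(prob.objective))

Because the model object is solver-independent, swapping in a trial CPLEX license or another backend later is a one-line change.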

Which is the better way to calculate nCr

泄露秘密 submitted on 2019-11-26 12:07:34
Question: Approach 1: C(n,r) = n!/((n-r)! r!) Approach 2: In the book Combinatorial Algorithms by Wilf, I have found this: C(n,r) can be written as C(n-1,r) + C(n-1,r-1), e.g. C(7,4) = C(6,4) + C(6,3) = C(5,4) + C(5,3) + C(5,3) + C(5,2) = … = C(4,4) + C(4,1) + 3*C(3,3) + 3*C(3,1) + 6*C(2,1) + 6*C(2,2). As you can see, the final solution doesn't need any multiplication: in every term C(n,r), either n==r or r==1. Here is the sample code I have implemented: int foo(int n,int r) { …
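Both approaches are easy to compare directly. Here is a Python sketch (illustrative, not the asker's C code): the multiplicative form interleaves exact integer divides to keep intermediates small, while Pascal's rule uses only additions but is exponential without memoization:

    from functools import lru_cache

    def ncr_multiplicative(n, r):
        # C(n, r) = (n-r+1)/1 * (n-r+2)/2 * ...; dividing at each step keeps
        # the running result an exact integer (it is C(n-r+i, i) after step i).
        r = min(r, n - r)
        result = 1
        for i in range(1, r + 1):
            result = result * (n - r + i) // i
        return result

    @lru_cache(maxsize=None)
    def ncr_pascal(n, r):
        # Pascal's rule: C(n, r) = C(n-1, r) + C(n-1, r-1); additions only,
        # but exponential without the memoization lru_cache provides.
        if r == 0 or r == n:
            return 1
        return ncr_pascal(n - 1, r) + ncr_pascal(n - 1, r - 1)

    print(ncr_multiplicative(7, 4), ncr_pascal(7, 4))   # 35 35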

How to display progress of scipy.optimize function?

旧城冷巷雨未停 submitted on 2019-11-26 11:01:31
Question: I use scipy.optimize to minimize a function of 12 arguments. I started the optimization a while ago and am still waiting for results. Is there a way to force scipy.optimize to display its progress (e.g., how much is already done, what the current best point is)? Answer 1: As mg007 suggested, some of the scipy.optimize routines allow for a callback function (unfortunately leastsq does not permit this at the moment). Below is an example using the "fmin_bfgs" routine, where I use a callback function to …
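Here is a sketch of the callback pattern the answer describes, using the modern minimize interface; the Rosenbrock function stands in for the asker's 12-argument objective:

    import numpy as np
    from scipy.optimize import minimize, rosen

    iteration = 0

    def report(xk):
        # Called by the optimizer once per iteration with the current point.
        global iteration
        iteration += 1
        print(f"iter {iteration:3d}  f(x) = {rosen(xk):.6f}  x = {np.round(xk, 3)}")

    x0 = np.zeros(5)
    res = minimize(rosen, x0, method="BFGS", callback=report)
    print("done:", res.x)

The callback receives only the current parameter vector, so anything else worth logging (objective value, elapsed time) is recomputed or tracked inside it.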