mathematical-optimization

Gurobi reports unbounded model despite mathematical impossibility

廉价感情. Submitted on 2019-12-11 04:15:44
Question: I'm using Julia's wonderful JuMP package to solve a linear program with Gurobi 6.0.4 as the solver. The objective function is a sum of decision variables that are explicitly defined as nonnegative, and the problem asks for it to be minimized. For some reason, Gurobi thinks the model is unbounded. Here is the definition of the variables and the objective:

@defVar(model, delta2[i=irange, j=pair[i]] >= 0)
@setObjective(model, Min, sum{delta2[i,j], i=irange, j=pair[i]})

Strange observation #1: although this is …
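
A sum of nonnegative variables is bounded below by zero, so a genuine UNBOUNDED status here is mathematically impossible; in practice Gurobi's presolve often returns the ambiguous INF_OR_UNBD code, which can be surfaced as "unbounded". A minimal diagnostic sketch in Python with gurobipy (the toy model is a stand-in for the JuMP one; DualReductions is the Gurobi parameter that forces presolve to distinguish infeasible from unbounded):

from gurobipy import GRB, Model, quicksum

# Stand-in for the JuMP model: minimize a sum of nonnegative variables.
m = Model("diagnose")
delta2 = [m.addVar(lb=0.0, name="delta2_%d" % i) for i in range(5)]
m.setObjective(quicksum(delta2), GRB.MINIMIZE)
m.optimize()

if m.status == GRB.INF_OR_UNBD:
    # Presolve could not tell infeasible from unbounded; disable dual
    # reductions and re-solve to get a definitive status.
    m.setParam("DualReductions", 0)
    m.optimize()
print("final status:", m.status)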

Finding max value of a weighted subset sum of a power set

纵然是瞬间 Submitted on 2019-12-11 02:58:42
Question: I've got a sparse power set for an input (i.e. some combos have been pre-excluded). Each entry in the power set has a certain score. I want to find the combination that covers all points and maximizes the overall score. For example, let's say the input is generated as follows:

function powerset(ary) {
  var ps = [[]];
  for (var i = 0; i < ary.length; i++) {
    for (var j = 0, len = ps.length; j < len; j++) {
      ps.push(ps[j].concat(ary[i]));
    }
  }
  return ps;
}

function generateScores() {
  var sets = …
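
For small inputs this can be solved by brute force: recursively pick disjoint subsets until every point is covered, keeping the highest-scoring combination. A Python sketch of that idea, assuming "covers all points" means an exact (disjoint) cover; the points and scores below are made up:

# Hypothetical data: each scored subset covers some points.
points = {1, 2, 3, 4}
scored_subsets = [({1, 2}, 5.0), ({3}, 2.0), ({4}, 1.5), ({3, 4}, 4.0), ({1, 2, 3, 4}, 7.0)]

def best_cover(remaining, subsets):
    # Return (best_score, chosen_subsets) exactly covering `remaining`,
    # or (None, None) if no cover exists.
    if not remaining:
        return 0.0, []
    best = (None, None)
    for k, (s, score) in enumerate(subsets):
        if s <= remaining:  # subset fits inside the uncovered points
            sub, chosen = best_cover(remaining - s, subsets[k + 1:])
            if sub is not None and (best[0] is None or score + sub > best[0]):
                best = (score + sub, [s] + chosen)
    return best

print(best_cover(points, scored_subsets))  # (9.0, [{1, 2}, {3, 4}])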

Can I pass the objective and derivative functions to scipy.optimize.minimize as one function?

生来就可爱ヽ(ⅴ<●) Submitted on 2019-12-11 02:19:09
Question: I'm trying to use scipy.optimize.minimize to minimize a complicated function. I noticed in hindsight that the minimize function takes the objective and derivative functions as separate arguments. Unfortunately, I've already defined a function that returns the objective function value and the first-derivative values together, because the two are computed simultaneously in a for loop. I don't think there is a good way to separate my function into two without the program essentially running the …
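
scipy.optimize.minimize supports this directly: with jac=True, the objective callable is expected to return the pair (value, gradient), so the combined function can be passed unchanged. A minimal sketch with a made-up quadratic objective:

import numpy as np
from scipy.optimize import minimize

def f_and_grad(x):
    # Value and gradient computed together, as in the question.
    f = np.sum((x - 3.0) ** 2)
    grad = 2.0 * (x - 3.0)
    return f, grad

# jac=True tells minimize that f_and_grad returns (f(x), f'(x)).
res = minimize(f_and_grad, x0=np.zeros(4), jac=True, method="L-BFGS-B")
print(res.x)  # approximately [3, 3, 3, 3]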

How can I use scipy optimization to find the minimum chi-squared for 3 parameters and a list of data points?

穿精又带淫゛_ Submitted on 2019-12-11 00:34:44
Question: I have a histogram of sorted random numbers and a Gaussian overlay. The histogram represents observed values per bin (I'm applying this base case to a much larger dataset) and the Gaussian is an attempt to fit the data. Clearly, this Gaussian does not represent the best fit to the histogram. The code below is the formula for a Gaussian:

normc, mu, sigma = 30.845, 50.5, 7  # normalization constant, avg, stdev
gauss = lambda x: normc * exp( (-1) * (x - mu)**2 / ( 2 * (sigma**2) ) )

I calculated the …
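
One standard route is to express chi-squared as a function of the three parameters and minimize it numerically. A sketch with scipy.optimize.minimize, where the bin centers and observed counts are hypothetical placeholders for the real histogram:

import numpy as np
from scipy.optimize import minimize

# Hypothetical histogram data (bin centers and observed counts per bin).
bin_centers = np.linspace(30, 70, 20)
observed = 30.845 * np.exp(-(bin_centers - 50.5) ** 2 / (2 * 7.0 ** 2))

def chi2(params):
    normc, mu, sigma = params
    expected = normc * np.exp(-(bin_centers - mu) ** 2 / (2 * sigma ** 2))
    return np.sum((observed - expected) ** 2 / np.maximum(expected, 1e-9))

res = minimize(chi2, x0=[30.0, 50.0, 5.0], method="Nelder-Mead")
print(res.x)  # fitted (normc, mu, sigma)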

R DEoptim() function: How to select parameters to be optimised?

最后都变了- Submitted on 2019-12-11 00:30:51
Question: I wish to estimate parameters for the following example. How can I make DEoptim optimise only, say, 2 of the 3 parameters? Is there a direct method to do this?

rm(list=ls())
t <- seq(0.1, 20, length=100)
Hobs <- 20 + 8*exp(-0.05*t)
Hsim <- function(p, t) {p[1] + p[2]*exp(-p[3]*t)}
upper <- c(30, 10, 1)
lower <- -upper
resFun <- function(p, t, Hobs) {
  r <- Hobs - Hsim(p, t)
  return(t(r) %*% r)
}
DEoptim(resFun, lower, upper, Hobs = Hobs, t = t, DEoptim.control(NP = 80, itermax = 200, F = 1.2, CR …
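
The usual workaround is to hold the unwanted parameters fixed inside a wrapper function, so the optimiser only sees the free ones. The question is about R's DEoptim; here is the same idea as a Python sketch with scipy.optimize.differential_evolution, where fixing p[3] at 0.05 is an assumption made for illustration:

import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.1, 20, 100)
Hobs = 20 + 8 * np.exp(-0.05 * t)

def residual(free, fixed_p3=0.05):
    # Only p1 and p2 are optimised; p3 is held at a fixed value.
    p1, p2 = free
    r = Hobs - (p1 + p2 * np.exp(-fixed_p3 * t))
    return np.dot(r, r)

res = differential_evolution(residual, bounds=[(-30, 30), (-10, 10)])
print(res.x)  # approximately [20, 8]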

Normalization for optimization in python

好久不见. Submitted on 2019-12-10 23:54:29
Question: During optimization, it is often helpful to normalize the input parameters so that they are on the same order of magnitude, which can greatly improve convergence. For example, if we want to minimize f(x) and a reasonable approximation is x0 = [1e3, 1e-4], it might be helpful to normalize x0[0] and x0[1] to about the same order of magnitude (often O(1)). My question: I have been using scipy.optimize, specifically the L-BFGS-B algorithm. I was wondering, do I need to normalize that …
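
A common pattern is to let the optimiser work in a scaled space z with x = scale * z, so every component it sees is O(1), and map back afterwards. A minimal sketch; the quadratic objective and the scale vector are illustrative assumptions:

import numpy as np
from scipy.optimize import minimize

scale = np.array([1e3, 1e-4])  # rough magnitudes of the two parameters

def f(x):
    # Stand-in objective whose parameters differ by seven orders of magnitude.
    return (x[0] / 1e3 - 2.0) ** 2 + (x[1] * 1e4 - 3.0) ** 2

# Optimise over z = x / scale, so L-BFGS-B sees O(1) variables.
res = minimize(lambda z: f(z * scale), x0=np.array([1.0, 1.0]), method="L-BFGS-B")
print(res.x * scale)  # solution mapped back to the original units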

Do you know a C# implementation of Gauss Newton and Levenberg Marquardt methods? [closed]

允我心安 Submitted on 2019-12-10 21:16:49
Question [closed 7 years ago as not a good fit for the Q&A format]: I am looking for a C# implementation of both the Gauss-Newton and Levenberg-Marquardt algorithms. Is there any "trusted" C# library out …
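
The question asks for C#, but the Levenberg-Marquardt algorithm itself can be illustrated with SciPy, whose least_squares wraps MINPACK's LM implementation via method='lm'. A sketch fitting an assumed exponential model to synthetic data:

import numpy as np
from scipy.optimize import least_squares

# Synthetic data from an assumed model y = a * exp(-b * x) + c.
x = np.linspace(0, 10, 50)
y = 2.5 * np.exp(-0.7 * x) + 1.0

def residuals(p):
    a, b, c = p
    return a * np.exp(-b * x) + c - y

# method='lm' selects Levenberg-Marquardt (unconstrained problems only).
res = least_squares(residuals, x0=[1.0, 1.0, 0.0], method="lm")
print(res.x)  # approximately [2.5, 0.7, 1.0]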

finding the optimized location for a list of coordinates x and y

旧巷老猫 Submitted on 2019-12-10 20:53:58
Question: I am new to programming, and particularly to Python, but I am trying to learn it and I have found it very fascinating so far. I have a list of 30 fixed coordinates x and y:

x = np.array([[13,10,12,13,11,12,11,13,12,13,14,15,15,16,18,2,3,4,6,9,1,3,6,7,8,10,12,11,10,30]])
y = np.array([[12,11,10,9,8,7,6,6,7,8,11,12,13,15,14,18,12,11,10,13,15,16,18,17,16,15,14,13,12,3]])

I want to find an optimized (centralized) location that can connect up to a maximum of 10 of the fixed coordinates by finding the …
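
One way to read the objective: find a point (cx, cy) that minimizes the total distance to its 10 nearest fixed coordinates. A sketch with scipy; treating "connect" as Euclidean distance and summing the 10 smallest distances are assumptions about the intent:

import numpy as np
from scipy.optimize import minimize

x = np.array([13,10,12,13,11,12,11,13,12,13,14,15,15,16,18,2,3,4,6,9,1,3,6,7,8,10,12,11,10,30])
y = np.array([12,11,10,9,8,7,6,6,7,8,11,12,13,15,14,18,12,11,10,13,15,16,18,17,16,15,14,13,12,3])

def cost(c):
    # Sum of distances from candidate point c to its 10 nearest fixed points.
    d = np.hypot(x - c[0], y - c[1])
    return np.sort(d)[:10].sum()

res = minimize(cost, x0=[x.mean(), y.mean()], method="Nelder-Mead")
print(res.x)  # candidate central location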

ES calculation produces unreliable result (inverse risk) for column: 1

对着背影说爱祢 Submitted on 2019-12-10 19:27:29
Question: I keep getting the error message "ES calculation produces unreliable result (inverse risk) for column: 1" when using DEoptim. Maybe I am overlooking something, so I need some help figuring this out. I have searched across the web but can't seem to find the answer. I have an xts object called RETS containing 127 rows and 4 columns of log returns:

library("quantmod")
library("PerformanceAnalytics")
library("DEoptim")
e <- new.env()
getSymbols("SPY;QCOR;CLNT;SRNE", from="2007-06-30", to= …
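
That message appears to come from PerformanceAnalytics' ES() when the Expected Shortfall estimate for a column has the wrong sign (an apparent gain rather than a loss), which the function flags as unreliable; short or extreme return series are a common trigger. An analogous Python sketch that minimizes a historical ES over portfolio weights with differential evolution; the synthetic returns stand in for the RETS object:

import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.02, size=(127, 4))  # synthetic log returns, 4 assets

def hist_es(w, alpha=0.05):
    # Historical Expected Shortfall of the portfolio (positive = loss).
    w = np.asarray(w) / (np.sum(w) + 1e-12)  # normalise weights to sum to 1
    port = rets @ w
    var = np.quantile(port, alpha)           # 5% value-at-risk cutoff
    return -port[port <= var].mean()         # mean loss beyond the cutoff

res = differential_evolution(hist_es, bounds=[(0.0, 1.0)] * 4)
print(res.x / res.x.sum(), hist_es(res.x))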

parallel/multithread differential evolution in python

久未见 Submitted on 2019-12-10 18:53:01
Question: I'm trying to model a biochemical process, and I have structured my question as an optimization problem that I solve using differential_evolution from scipy. So far, so good; I'm pretty happy with the implementation of a simplified model with 15-19 parameters. I expanded the model, and now, with 32 parameters, it is taking way too long. Not totally unexpected, but still an issue, hence the question. I've seen an almost identical question for R (Parallel differential evolution) and a github issue …
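
Recent SciPy releases support this natively: differential_evolution accepts a workers argument (workers=-1 uses all cores), with population updating switched to 'deferred' so candidates can be evaluated in parallel. A minimal sketch, with a cheap sphere function standing in for the expensive 32-parameter biochemical model:

import numpy as np
from scipy.optimize import differential_evolution

def objective(p):
    # Placeholder for the expensive 32-parameter model evaluation.
    return np.sum((p - 0.5) ** 2)

if __name__ == "__main__":  # guard needed for multiprocessing on some platforms
    res = differential_evolution(objective, bounds=[(0, 1)] * 32,
                                 updating="deferred",  # needed with workers != 1
                                 workers=-1)           # use all available cores
    print(res.fun, res.x[:4])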