mathematical-optimization

How to find the local minima of a smooth multidimensional array in NumPy efficiently?

回眸只為那壹抹淺笑 submitted on 2019-11-28 05:33:04
Say I have a NumPy array containing evaluations of a continuously differentiable function, and I want to find the local minima. There is no noise, so every point whose value is lower than the values of all its neighbors meets my criterion for a local minimum. I have the following list comprehension, which works for a two-dimensional array, ignoring potential minima on the boundaries:

    import numpy as N

    def local_minima(array2d):
        local_minima = [
            index
            for index in N.ndindex(array2d.shape)
            if index[0] > 0
            if index[1] > 0
            if index[0] < array2d.shape[0] - 1
            if index[1] < array2d.shape[1] - 1
            if
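The interior-point test above can be vectorized without an explicit index loop by comparing the interior against eight shifted views of the array. A minimal sketch (the array `a` and its contents are made up for illustration):

```python
import numpy as np

def local_minima(a):
    """Indices (i, j) of strict interior local minima (8-neighborhood)."""
    interior = a[1:-1, 1:-1]
    # One shifted view per neighbor direction; each has the interior's shape.
    neighbors = [a[1 + di : a.shape[0] - 1 + di, 1 + dj : a.shape[1] - 1 + dj]
                 for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    # A cell is a strict local minimum if it is below all eight neighbors.
    mask = np.all([interior < n for n in neighbors], axis=0)
    return [(i + 1, j + 1) for i, j in zip(*np.nonzero(mask))]

a = np.array([[9, 9, 9, 9, 9],
              [9, 1, 9, 9, 9],
              [9, 9, 9, 2, 9],
              [9, 9, 9, 9, 9],
              [9, 9, 9, 9, 9.0]])
print(local_minima(a))   # [(1, 1), (2, 3)]
```

Like the original list comprehension, this deliberately ignores minima on the boundary rows and columns.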

How to find the global minimum in Python optimization with bounds?

不打扰是莪最后的温柔 submitted on 2019-11-28 05:31:20
I have a Python function with 64 variables, and I tried to optimise it using the L-BFGS-B method via the minimize function. However, this method has quite a strong dependence on the initial guess and failed to find the global minimum. But I liked its ability to set bounds for the variables. Is there a way/function to find the global minimum while keeping boundaries on the variables?

Jacob Stevenson: This can be done with scipy.optimize.basinhopping. Basinhopping is a function designed to find the global minimum of an objective function. It does repeated minimizations using the function scipy
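A minimal sketch of the basinhopping-with-bounds pattern, using a hypothetical two-variable objective as a stand-in for the 64-variable one (the pattern is identical at any dimension): the bounds are enforced by the local L-BFGS-B minimizer that runs at every hop.

```python
import numpy as np
from scipy.optimize import basinhopping

# Hypothetical objective with several local minima inside the unit box.
def f(x):
    return np.sum((x - 0.7) ** 2) + np.sin(5 * np.sum(x))

x0 = np.zeros(2)
bounds = [(0.0, 1.0)] * len(x0)

# The bounds are passed to the *local* minimizer used at each hop.
minimizer_kwargs = {"method": "L-BFGS-B", "bounds": bounds}
result = basinhopping(f, x0, minimizer_kwargs=minimizer_kwargs, niter=50)
print(result.x, result.fun)
```

Random hops may leave the box, but each accepted point comes from a bounded local minimization, so `result.x` always respects the bounds; an `accept_test` callback (as in the quoted answer) can additionally reject out-of-bounds hops.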

How can I find local maxima in an image in MATLAB?

泄露秘密 submitted on 2019-11-28 04:34:01
I have an image in MATLAB:

    y = rgb2gray(imread('some_image_file.jpg'));

and I want to do some processing on it:

    pic = some_processing(y);

and find the local maxima of the output. That is, all the points in pic that are greater than all of their neighbors. I can't seem to find a MATLAB function to do that nicely. The best I can come up with is:

    [dim_y, dim_x] = size(pic);
    enlarged_pic = [zeros(1, dim_x+2);
                    zeros(dim_y, 1), pic, zeros(dim_y, 1);
                    zeros(1, dim_x+2)];
    % now build a 3D array
    % each plane will be the enlarged picture
    % moved up, down, left or right,
    % to all the diagonals, or not at all
    [en_dim_y
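The zero-padding-and-shifting attempt above is the classic dilation trick: a pixel is a regional maximum exactly when it equals the maximum over its neighborhood (in MATLAB, `imregionalmax` or an `imdilate` comparison). For readers without the Image Processing Toolbox, the same idea sketched in Python with a small made-up image:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def regional_maxima(pic):
    # Dilation-style test: a pixel is a local maximum if it equals the
    # max over its 3x3 neighborhood (mirrors the MATLAB imdilate trick).
    # -inf padding means border pixels compare only against real neighbors.
    return pic == maximum_filter(pic, size=3, mode="constant", cval=-np.inf)

pic = np.array([[1, 2, 1],
                [2, 5, 2],
                [1, 2, 1.0]])
print(np.argwhere(regional_maxima(pic)))   # only the center, (1, 1)
```

Note this non-strict test also flags plateau pixels that tie with their neighborhood maximum; noiseless data without plateaus gives the same result as the strict comparison.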

Using min/max *within* an Integer Linear Program

為{幸葍}努か submitted on 2019-11-28 04:32:17
I'm trying to set up a linear program in which the objective function adds extra weight to the max of the decision variables multiplied by their respective coefficients. With this in mind, is there a way to use min or max operators within the objective function of a linear program? Example:

    Minimize (c1*x1) + (c2*x2) + (c3*x3) + (c4 * max(c1*x1, c2*x2, c3*x3))

    subject to some arbitrary integer constraints:
    x1 >= ...
    x1 + 2*x2 <= ...
    x3 >= ...
    x1 + x3 == ...

Note that (c4 * max(c1*x1, c2*x2, c3*x3)) is the "extra weight" term that I'm concerned about. We let c4 denote the "extra
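A max inside a minimized objective can be linearized with the standard trick: introduce an auxiliary variable M with constraints M >= ci*xi for each i, and put c4*M in the objective. Because the problem minimizes and c4 > 0, M settles at the true maximum. A sketch with scipy.optimize.linprog on the continuous relaxation, using made-up coefficients and one illustrative constraint (all numbers are hypothetical):

```python
from scipy.optimize import linprog

# Hypothetical data: c = (2, 3, 4), extra weight c4 = 5,
# plus one illustrative constraint x1 + x2 + x3 >= 10.
c1, c2, c3, c4 = 2.0, 3.0, 4.0, 5.0

# Variables: [x1, x2, x3, M]; minimize c1*x1 + c2*x2 + c3*x3 + c4*M.
obj = [c1, c2, c3, c4]

# M >= ci*xi  <=>  ci*xi - M <= 0  (valid since we minimize and c4 > 0).
A_ub = [[c1, 0, 0, -1],
        [0, c2, 0, -1],
        [0, 0, c3, -1],
        [-1, -1, -1, 0]]   # encodes x1 + x2 + x3 >= 10
b_ub = [0, 0, 0, -10]

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x, res.fun)
```

The same linearization carries over unchanged to the integer program in the question; a MILP solver (e.g. scipy.optimize.milp or CPLEX) adds the integrality restrictions.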

How to interpret “loss” and “accuracy” for a machine learning model

空扰寡人 submitted on 2019-11-28 02:36:37
When I train my neural network with Theano or TensorFlow, they report a variable called "loss" per epoch. How should I interpret this variable? Is higher loss better or worse, and what does it mean for the final performance (accuracy) of my neural network?

Amir: The lower the loss, the better the model (unless the model has over-fitted to the training data). The loss is calculated on the training and validation sets, and its interpretation is how well the model is doing on these two sets. Unlike accuracy, loss is not a percentage. It is a summation of the errors made for each example in training or
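The distinction is easy to see on a toy binary-classification example (all numbers invented): two classifiers can have identical accuracy while one has much higher loss, because cross-entropy loss penalizes low-confidence and wrong predictions continuously rather than counting correct labels.

```python
import numpy as np

y_true = np.array([1, 0, 1, 1])

p_confident = np.array([0.9, 0.1, 0.8, 0.7])    # confidently correct
p_hesitant  = np.array([0.6, 0.4, 0.55, 0.51])  # barely correct

def cross_entropy(y, p):
    # Mean negative log-likelihood: an accumulation of per-example
    # errors, not a percentage.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def accuracy(y, p):
    # Fraction of predictions on the correct side of the 0.5 threshold.
    return np.mean((p >= 0.5) == y)

# Both classifiers score 100% accuracy, but the hesitant one has higher loss.
print(accuracy(y_true, p_confident), cross_entropy(y_true, p_confident))
print(accuracy(y_true, p_hesitant),  cross_entropy(y_true, p_hesitant))
```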

knapsack optimization with dynamic variables

有些话、适合烂在心里 submitted on 2019-11-28 02:25:06
I am trying to solve an optimization problem that is very similar to the knapsack problem but cannot be solved using dynamic programming. The problem I want to solve is very similar to this problem:

Alex Fleischer: Indeed, you may solve this with CPLEX. Let me show you that in OPL. The model (.mod):

    {string} categories=...;
    {string} groups[categories]=...;
    {string} allGroups=union (c in categories) groups[c];
    {string} products[allGroups]=...;
    {string} allProducts=union (g in allGroups) products[g];
    float prices[allProducts]=...;
    int Uc[categories]=...;
    float Ug[allGroups]=...;
    float

R optimization with equality and inequality constraints

泄露秘密 submitted on 2019-11-27 23:00:16
I am trying to find the local minimum of a function, where the parameters have a fixed sum. For example,

    Fx = 10 - (5*x1 + 2*x2 + x3)

and the conditions are as follows:

    x1 + x2 + x3 = 15
    (x1, x2, x3) >= 0

where the sum of x1, x2, and x3 has a known value, and they are all greater than or equal to zero. In R, it would look something like this:

    Fx = function(x) {10 - (5*x[1] + 2*x[2] + x[3])}
    opt = optim(c(1,1,1), Fx, method = "L-BFGS-B",
                lower = c(0,0,0), upper = c(15,15,15))

I also tried to use inequalities with constrOptim to force the sum to be fixed. I still think this may be a plausible workaround, but I was
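Box bounds alone cannot express the equality constraint; one common route (sketched here in Python with scipy rather than R, as an illustration of the same problem) is an SLSQP solve with an explicit equality constraint. In R itself, eliminating a variable via x3 = 15 - x1 - x2, or a package such as Rsolnp, plays the same role.

```python
import numpy as np
from scipy.optimize import minimize

# Same problem as the R snippet: minimize 10 - (5*x1 + 2*x2 + x3)
# subject to x1 + x2 + x3 == 15 and 0 <= xi <= 15.
def Fx(x):
    return 10 - (5 * x[0] + 2 * x[1] + x[2])

cons = {"type": "eq", "fun": lambda x: np.sum(x) - 15}
res = minimize(Fx, x0=[5.0, 5.0, 5.0], method="SLSQP",
               bounds=[(0, 15)] * 3, constraints=[cons])
print(res.x, res.fun)   # all weight goes to x1: x ≈ (15, 0, 0), Fx = -65
```

Because the objective and constraint are linear, the optimum sits at a vertex of the feasible simplex, which SLSQP finds directly here.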

Example to understand scipy basin hopping optimization function

試著忘記壹切 submitted on 2019-11-27 20:38:26
I came across the basin hopping algorithm in scipy and created a simple problem to understand how to use it, but it doesn't seem to be working correctly for that problem. Maybe I'm doing something completely wrong. Here is the code:

    import scipy.optimize as spo
    import numpy as np

    minimizer_kwargs = {"method": "BFGS"}
    f1 = lambda x: (x - 4)

    def mybounds(**kwargs):
        x = kwargs["x_new"]
        tmax = bool(np.all(x <= 1.0))
        tmin = bool(np.all(x >= 0.0))
        print(x)
        print(tmin and tmax)
        return tmax and tmin

    def

3 dimensional bin packing algorithms

萝らか妹 submitted on 2019-11-27 17:20:59
I'm faced with a 3-dimensional bin packing problem and am currently conducting some preliminary research as to which algorithms/heuristics are currently yielding the best results. Since the problem is NP-hard, I do not expect to find the optimal solution in every case, but I was wondering:

1) What are the best exact solvers? Branch and bound? What problem instance sizes can I expect to solve with reasonable computing resources?
2) What are the best heuristic solvers?
3) What off-the-shelf solutions exist to conduct some experiments with?

As far as off-the-shelf solutions go, check out MAXLOADPRO

What is an NP-complete in computer science?

谁都会走 submitted on 2019-11-27 16:33:15
What is an NP-complete problem? Why is it such an important topic in computer science?

Sam Hoice: NP stands for Non-deterministic Polynomial time. This means that the problem can be solved in polynomial time using a non-deterministic Turing machine (like a regular Turing machine, but also including a non-deterministic "choice" function). Basically, a solution has to be testable in polynomial time. If that's the case, and a known NP-complete problem can be solved using the given problem with modified input (i.e., that NP-complete problem can be reduced to the given problem), then the given problem is NP-complete as well. The main thing to
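The "testable in polynomial time" point can be made concrete with SAT, the canonical NP-complete problem: checking a proposed satisfying assignment takes time linear in the formula size, even though finding one is the hard part. A minimal illustrative sketch (the formula and assignments are made up):

```python
# CNF formula as a list of clauses; each literal is a nonzero int,
# negative meaning negated (DIMACS-style convention).
# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
formula = [[1, -2], [2, 3], [-1, -3]]

def verify(assignment, clauses):
    """Polynomial-time certificate check: every clause has a true literal."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

certificate = {1: True, 2: True, 3: False}   # a claimed solution
print(verify(certificate, formula))          # True: the certificate checks out
```

The verifier runs in O(formula size) per certificate; it is the *search* over the 2^n possible assignments that no known polynomial-time algorithm avoids.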