mathematical-optimization

How do I speed up profiled NumPy code - vectorizing, Numba?

Submitted on 2019-12-10 17:46:20
Question: I am running a large Python program to optimize portfolio weights for (Markowitz) portfolio optimization in finance. When I profile the code, 90% of the run time is spent calculating the portfolio return, which is done millions of times. What can I do to speed up my code? I have tried: vectorizing the calculation of returns: made the code slower, from 1.5 ms to 3 ms; used the function autojit from Numba to speed up the code: no change. See example below - any suggestions? import numpy as np
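A minimal sketch (not the poster's code) of the usual way to make this fast: evaluate all candidate portfolios in a single matrix product instead of calling a per-portfolio function millions of times. The shapes and data below are illustrative assumptions.

```python
import numpy as np

# Hypothetical shapes: R holds asset returns (n_periods x n_assets),
# W holds many candidate weight vectors (n_candidates x n_assets).
rng = np.random.default_rng(0)
R = rng.normal(0.0005, 0.01, size=(1000, 50))   # daily returns for 50 assets
W = rng.dirichlet(np.ones(50), size=10_000)     # 10,000 candidate portfolios

# Per-period portfolio returns for every candidate in one matrix product.
port_returns = W @ R.T                          # shape (10_000, 1000)
mean_returns = port_returns.mean(axis=1)        # expected return per candidate
```

Vectorizing a single call often adds overhead; the win usually comes from batching many evaluations into one BLAS-backed operation like this.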

NLopt SLSQP discards good solution in favour of older, worse solution

Submitted on 2019-12-10 17:29:29
Question: I'm solving a standard optimisation problem from finance - portfolio optimisation. The vast majority of the time, NLopt returns a sensible solution. However, on rare occasions, the SLSQP algorithm appears to iterate to the correct solution, and then for no obvious reason it chooses to return a solution from about one third of the way through the iterative process that is very obviously suboptimal. Interestingly, changing the initial parameter vector by a very small amount can fix the
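A common workaround (not taken from the question, and assuming the NLopt Python bindings) is to record the best point seen inside the objective callback, so you can fall back to it if the solver returns something worse. The quadratic objective here is a placeholder; for a constrained problem you would also want to record constraint violation before accepting a point as "best".

```python
import nlopt
import numpy as np

best = {"x": None, "f": np.inf}

def objective(x, grad):
    f = float(np.sum((x - 0.2) ** 2))      # placeholder objective
    if grad.size > 0:
        grad[:] = 2 * (x - 0.2)
    if f < best["f"]:                       # remember the best point seen so far
        best["f"], best["x"] = f, x.copy()
    return f

opt = nlopt.opt(nlopt.LD_SLSQP, 5)
opt.set_min_objective(objective)
opt.set_lower_bounds(np.zeros(5))
opt.set_upper_bounds(np.ones(5))
opt.set_xtol_rel(1e-8)
x_returned = opt.optimize(np.full(5, 0.5))

# Fall back to the best recorded point if the returned one is worse.
x_final = best["x"] if best["f"] < opt.last_optimum_value() else x_returned
```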

How to augment lpsolve R optimization solution to run on a hadoop cluster?

Submitted on 2019-12-10 15:06:08
Question: I am using the R lpsolve package to optimize my transportation model. My code runs fine, but it takes a long time to run because I have a huge number of nodes and paths. I am planning to run my code on a Hadoop cluster. Please guide me regarding the changes that I need to make to my code. I think that running the optimization over a Hadoop cluster might be impossible, as we might end up with local minima instead of the global minimum. I searched the internet for terms like "lpsolve hadoop" but didn't get anything

Use mod function in a constraint using Python Pulp

Submitted on 2019-12-10 12:16:39
Question: I am writing an LpProblem and I need to create a constraint where the sum of some variables is a multiple of 100... 100, 200, 300... I am trying the following expressions using mod(), round() and int(), but none of them works because they don't support LpAffineExpression. probl += lpSum([vars[h] for h in varSKU if h[2] == b]) % 100 == 0 probl += lpSum([vars[h] for h in varSKU if h[2] == b]) / 100 == int(lpSum([vars[h] for h in varSKU if h[2] == b]) / 100) probl += lpSum([vars[h] for h in varSKU if h[2] == b]
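The standard linear reformulation (a sketch, not from the post) is to introduce an auxiliary integer variable k and require the sum to equal 100 * k, which expresses "a multiple of 100" without mod(). The variables below are hypothetical stand-ins for the poster's vars[h] terms.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpInteger, lpSum

prob = LpProblem("multiple_of_100", LpMinimize)

# Hypothetical variables standing in for the poster's vars[h] terms.
x = [LpVariable(f"x{i}", lowBound=0) for i in range(5)]

# Integer multiplier: sum(x) must equal 100 * k for some integer k >= 0,
# which is exactly "the sum is a multiple of 100" without using mod().
k = LpVariable("k", lowBound=0, cat=LpInteger)
prob += lpSum(x) == 100 * k
```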

Trying to understand code that computes the gradient wrt to the input for LogSoftMax in Torch

Submitted on 2019-12-10 12:02:18
Question: The code comes from: https://github.com/torch/nn/blob/master/lib/THNN/generic/LogSoftMax.c I don't see how this code computes the gradient w.r.t. the input for the LogSoftMax module. What I'm confused about is what the two for loops are doing. for (t = 0; t < nframe; t++) { sum = 0; gradInput_data = gradInput_data0 + dim*t; output_data = output_data0 + dim*t; gradOutput_data = gradOutput_data0 + dim*t; for (d = 0; d < dim; d++) sum += gradOutput_data[d]; for (d = 0; d < dim; d++) gradInput
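For reference, the two inner loops implement the closed-form log-softmax backward pass: with output = log-softmax(x), gradInput[d] = gradOutput[d] - exp(output[d]) * sum_d gradOutput[d]. A NumPy sketch (mine, not part of the question) mirroring the per-frame loops:

```python
import numpy as np

def logsoftmax_backward(grad_output, output):
    """Mirror of the two C loops, applied per frame (row).

    output is log-softmax(x), so exp(output) is softmax(x).
    The C code first sums grad_output over the class dimension (first loop),
    then sets grad_input[d] = grad_output[d] - exp(output[d]) * sum (second loop).
    """
    s = grad_output.sum(axis=1, keepdims=True)   # first inner loop
    return grad_output - np.exp(output) * s      # second inner loop

# Quick check against the definition of log-softmax.
x = np.random.randn(4, 7)
out = x - np.log(np.exp(x).sum(axis=1, keepdims=True))   # log-softmax(x)
g = np.random.randn(4, 7)
grad_input = logsoftmax_backward(g, out)
```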

How can I adjust parameters for image processing algorithm in an efficient way?

Submitted on 2019-12-10 11:40:22
Question: Before starting to implement a solution to my problem, I just want to be sure that I will not "reinvent the wheel" and that I can reuse work someone has done before. So my problem is: I have made an image matcher using the OpenCV library. This matcher receives a set of image files and tries to find similar images in a database. At the end it returns statistical results according to the ROC curve definition (True Positive, True Negative, False Positive and False Negative number of matches). These
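A minimal sketch of the simplest baseline, a grid search over matcher parameters ranked by an ROC-derived score. Everything here is a hypothetical placeholder: run_matcher stands in for the poster's matcher, and the two tuned parameters and their candidate values are illustrative.

```python
from itertools import product

def run_matcher(threshold, ratio):
    # Placeholder: replace with the actual OpenCV matcher run over the
    # evaluation set; it is assumed to return (tp, tn, fp, fn) counts.
    return 80, 90, 10, 20

def score(tp, tn, fp, fn):
    # One scalar to rank parameter sets: true-positive rate minus
    # false-positive rate (Youden's J); any ROC-based score works here.
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr - fpr

grid = product([0.6, 0.7, 0.8], [0.5, 0.75, 1.0])   # candidate parameter values
best_params = max(grid, key=lambda p: score(*run_matcher(*p)))
```

If the grid gets too expensive, the same scoring function can be plugged into a smarter search (random search, Bayesian optimization) without changing the matcher code.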

How do I specify multiple variable constraints using Integer Programming in PuLP?

Submitted on 2019-12-10 11:39:09
Question: I am trying to solve the bin packing problem using the integer programming formulation in Python PuLP. The model for the problem is as follows: I have written the following Python code using the PuLP library from pulp import * #knapsack problem def knapsolve(bins, binweight, items, weight): prob = LpProblem('BinPacking', LpMinimize) y = [LpVariable("y{0}".format(i+1), cat="Binary") for i in range(bins)] xs = [LpVariable("x{0}{1}".format(i+1, j+1), cat="Binary") for i in range(items) for j in
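For context, a self-contained sketch of the standard bin-packing formulation the excerpt starts to build: y[j] = 1 if bin j is used, x[i][j] = 1 if item i goes into bin j, each item is placed exactly once, and each bin's capacity is enforced only when the bin is open. The function name mirrors the excerpt, but the constraint code below is my sketch, not the poster's.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

def knapsolve(bins, binweight, items, weight):
    # y[j] = 1 if bin j is used, x[i][j] = 1 if item i is placed in bin j.
    prob = LpProblem("BinPacking", LpMinimize)
    y = [LpVariable(f"y{j}", cat=LpBinary) for j in range(bins)]
    x = [[LpVariable(f"x{i}_{j}", cat=LpBinary) for j in range(bins)]
         for i in range(items)]

    prob += lpSum(y)                                    # use as few bins as possible
    for i in range(items):                              # every item in exactly one bin
        prob += lpSum(x[i][j] for j in range(bins)) == 1
    for j in range(bins):                               # capacity only if bin j is open
        prob += lpSum(weight[i] * x[i][j] for i in range(items)) <= binweight[j] * y[j]

    prob.solve()
    return prob

# Example: 3 bins of capacity 10, four items of weight 4, 4, 6, 5.
knapsolve(bins=3, binweight=[10, 10, 10], items=4, weight=[4, 4, 6, 5])
```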

Constrained optimization for nonlinear multivariable function in Java

Submitted on 2019-12-10 02:45:09
Question: I am looking for an open source implementation of a method doing constrained optimization for a nonlinear multivariable function in Java. Answer 1: There are several open source Java implementations that can do this, such as: OptaPlanner (Apache license, 100% Java, lots of examples and documentation) jacop choco ... Answer 2: IPOPT is the most robust solver I know of. It has a Java interface, although I have no idea how good that is; I only use the C++ API. Answer 3: I recently ported Michael Powell's COBYLA2

How to perform discrete optimization of functions over matrices?

Submitted on 2019-12-10 02:26:54
Question: I would like to optimize over all 30 by 30 matrices with entries that are 0 or 1. My objective function is the determinant. One way to do this would be some sort of stochastic gradient descent or simulated annealing. I looked at scipy.optimize but it doesn't seem to support this sort of optimization as far as I can tell. scipy.optimize.basinhopping looked very tempting but it seems to require continuous variables. Are there any tools in Python for this sort of general discrete optimization?
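A minimal simulated-annealing sketch for this kind of discrete search, flipping one entry at a time and maximizing |det|. The cooling schedule and temperatures are illustrative assumptions; in practice they should be scaled to the typical size of the determinant changes you observe, otherwise the search behaves like plain hill climbing.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_max_det(n=30, steps=50_000, t0=1.0, t1=1e-3):
    """Simulated annealing over n x n 0/1 matrices, maximizing |det|.

    Each move flips one entry; worse moves are accepted with a
    temperature-dependent probability so the search can leave local optima.
    """
    M = rng.integers(0, 2, size=(n, n)).astype(float)
    best, best_val = M.copy(), abs(np.linalg.det(M))
    cur_val = best_val
    for step in range(steps):
        t = t0 * (t1 / t0) ** (step / steps)        # geometric cooling schedule
        i, j = rng.integers(0, n, size=2)
        M[i, j] = 1 - M[i, j]                       # flip one entry
        val = abs(np.linalg.det(M))
        if val >= cur_val or rng.random() < np.exp((val - cur_val) / max(t, 1e-12)):
            cur_val = val
            if val > best_val:
                best, best_val = M.copy(), val
        else:
            M[i, j] = 1 - M[i, j]                   # reject: undo the flip
    return best, best_val
```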

Scipy.optimize.minimize method='SLSQP' ignores constraint

Submitted on 2019-12-10 02:26:49
Question: I'm using SciPy for optimization, and the SLSQP method seems to ignore my constraints. Specifically, I want x[3] and x[4] to be in the range [0, 1]. I'm getting the message 'Inequality constraints incompatible'. Here are the results of the execution, followed by example code (which uses a dummy function): status: 4 success: False njev: 2 nfev: 24 fun: 0.11923608071680103 x: array([-10993.4278558 , -19570.77080806, -23495.15914299, -26531.4862831 , 4679.97660534]) message: 'Inequality constraints
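For simple box limits like x[3], x[4] in [0, 1], the usual fix is to pass them via the bounds argument rather than as inequality constraints, which SLSQP tends to handle more reliably. A minimal sketch with a dummy objective (not the poster's function):

```python
import numpy as np
from scipy.optimize import minimize

def dummy(x):
    # Stand-in objective; the real one comes from the poster's problem.
    return np.sum((x - 0.5) ** 2)

x0 = np.zeros(5)
# No bounds on the first three variables, a [0, 1] box on x[3] and x[4].
bounds = [(None, None)] * 3 + [(0.0, 1.0)] * 2
res = minimize(dummy, x0, method="SLSQP", bounds=bounds)
print(res.x)
```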