mathematical-optimization

Getting standard error associated with parameter estimates from scipy.optimize.curve_fit

余生颓废 submitted on 2021-02-07 03:00:41
Question: I am using scipy.optimize.curve_fit to fit a curve to some data I have. The curves, for the most part, seem to fit very well. For some reason, pcov = inf when I print it off. What I really need is to calculate the error associated with the parameters I'm fitting, and I am not sure how exactly to do this even if it does give me the covariance matrix. The model being fit is:

    def intensity(x,R_out,R_in,K_in,K_out,a,b,c):
        K_in,K_out = abs(0.0),abs(K_out)
        if x<=R_in:
            return 2*R_out*(K_out*np.sqrt
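
A common way to turn pcov into parameter standard errors is to take the square root of its diagonal; this only works when curve_fit can actually estimate the covariance (an infinite pcov usually means the Jacobian at the solution is singular or there are too few data points for the number of parameters). A minimal sketch with a toy exponential model, purely illustrative and not the intensity() model above:

    import numpy as np
    from scipy.optimize import curve_fit

    # Toy model standing in for intensity(); the error recipe is the same
    def model(x, a, b):
        return a * np.exp(-b * x)

    xdata = np.linspace(0, 4, 50)
    ydata = model(xdata, 2.5, 1.3) + 0.05 * np.random.default_rng(0).normal(size=xdata.size)

    popt, pcov = curve_fit(model, xdata, ydata, p0=[1.0, 1.0])
    perr = np.sqrt(np.diag(pcov))   # 1-sigma standard errors of the fitted parameters
    print(popt, perr)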

Different behaviour between MATLAB fmincon and scipy optimize minimize

a 夏天 submitted on 2021-01-29 22:32:42
Question: I'm translating some code from MATLAB to Python. This code simulates the behaviour of a model, and I want to estimate parameters from it. The problem is that the results obtained with Python and with MATLAB are very different. I thought it was related to the difference between MATLAB's fmincon and Python's scipy.optimize.minimize function, but according to this tutorial that I found on YouTube (https://www.youtube.com/watch?v=SwogAa1719M) the results are almost the same, so the problem must
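
Part of such a mismatch is often just defaults: fmincon's default algorithm is interior-point, while scipy.optimize.minimize chooses BFGS, L-BFGS-B or SLSQP depending on whether bounds and constraints are supplied. A minimal sketch of pinning the method and tolerances explicitly; cost, x0, the bounds and the inequality constraint below are placeholders, not the model from the question:

    import numpy as np
    from scipy.optimize import minimize

    # Placeholder objective standing in for the model-fitting cost
    def cost(theta):
        return np.sum((theta - np.array([1.0, 2.0])) ** 2)

    x0 = np.array([0.0, 0.0])
    bounds = [(-5.0, 5.0), (-5.0, 5.0)]
    cons = ({'type': 'ineq', 'fun': lambda th: 4.0 - th.sum()},)  # theta[0] + theta[1] <= 4

    # SLSQP is the scipy method closest in spirit to fmincon's 'sqp' option;
    # fixing tolerances makes MATLAB and Python runs easier to compare
    res = minimize(cost, x0, method='SLSQP', bounds=bounds, constraints=cons,
                   options={'ftol': 1e-12, 'maxiter': 500})
    print(res.x, res.fun)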

L1-Norm minimization

匆匆过客 submitted on 2021-01-29 07:46:16
Question: I am trying to minimize the following function using linear programming. I am unable to include the image of my objective function. Click this Objective Function to view what I am trying to optimize. My question is: is there any library or function in Python which can do this optimization for me, or should I write the code myself?

Answer 1:

    import cvxpy as cp
    import numpy as np
    N = 10
    M = 100
    U = np.random.random((M,N))
    m = np.random.random(M)
    t = cp.Variable(M)
    x = cp.Variable(N)
    prob = cp.Problem(cp
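
The answer's snippet is cut off above. A complete version of the standard LP reformulation of L1-norm minimization (bound each residual of U x - m with an auxiliary variable t and minimize the sum of t) might look like the following sketch, assuming that is indeed the objective behind the missing image:

    import cvxpy as cp
    import numpy as np

    N, M = 10, 100
    U = np.random.random((M, N))
    m = np.random.random(M)

    t = cp.Variable(M)   # auxiliary variables bounding |U @ x - m| element-wise
    x = cp.Variable(N)

    # minimize sum(t)  subject to  -t <= U @ x - m <= t
    prob = cp.Problem(cp.Minimize(cp.sum(t)),
                      [U @ x - m <= t, U @ x - m >= -t])
    prob.solve()
    print(prob.value)
    print(x.value)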

Minimizing Least Squares with Algebraic Constraints and Bounds

懵懂的女人 submitted on 2021-01-28 03:12:33
Question: I'm attempting to minimize a sum of least squares based on some vector summations. Briefly, I'm creating an equation that takes ideal vectors, weights them with a determined coefficient, and then sums the weighted vectors. The sum of least squares comes in once this sum is compared to the actual vector measurements found for some observation. To give an example:

    # Observation A has the following measurements:
    A = [0, 4.1, 5.6, 8.9, 4.3]
    # How similar is A to ideal groups identified by the
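
One common way to attack this kind of problem is scipy.optimize.minimize with method='SLSQP', which accepts both per-coefficient bounds and algebraic constraints. A sketch under assumed data; the ideals matrix and the sum-to-one equality constraint below are illustrative, not taken from the question:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical ideal vectors (one row per group) and one observation
    ideals = np.array([[0.0, 4.0, 6.0, 9.0, 4.0],
                       [1.0, 1.0, 1.0, 1.0, 1.0]])
    A = np.array([0.0, 4.1, 5.6, 8.9, 4.3])

    def residual(w):
        # Sum of squared differences between the weighted sum of ideals and A
        return np.sum((ideals.T @ w - A) ** 2)

    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)   # weights sum to 1
    bounds = [(0.0, 1.0)] * ideals.shape[0]                      # each weight in [0, 1]

    res = minimize(residual, x0=np.full(ideals.shape[0], 0.5),
                   method='SLSQP', bounds=bounds, constraints=cons)
    print(res.x, res.fun)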

Cplex gives two different results?

主宰稳场 submitted on 2021-01-27 19:20:57
Question: I use the Python API of CPLEX to solve a linear programming problem. When using CPLEX directly, I got the result below: But then I saved my LP problem as a .lp file and used CPLEX to solve it again, and the result was a little bit different from the first one: Can anyone give an explanation? Below is my function:

    def SubProblem(myobj,myrow,mysense,myrhs,mylb):
        c = cplex.Cplex()
        c.objective.set_sense(c.objective.sense.minimize)
        c.variables.add(obj = myobj,lb = mylb)
        c.linear_constraints.add(lin_expr = myrow, senses =
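
Small differences between solving the in-memory model and re-solving a saved .lp file can come from the precision with which coefficients are written to the file and from solver tolerances, so writing the model out from the same Cplex object is a useful way to compare the two runs. A hedged completion of the truncated function above; the senses/rhs wiring is assumed from the argument names, not confirmed by the question:

    import cplex

    def SubProblem(myobj, myrow, mysense, myrhs, mylb):
        c = cplex.Cplex()
        c.objective.set_sense(c.objective.sense.minimize)
        c.variables.add(obj=myobj, lb=mylb)
        c.linear_constraints.add(lin_expr=myrow, senses=mysense, rhs=myrhs)
        c.solve()
        # Writing the model from this object lets you diff it against the
        # separately saved .lp file
        c.write("subproblem.lp")
        return c.solution.get_objective_value(), c.solution.get_values()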

Speed up search for the smallest x such that f(x) = target

与世无争的帅哥 submitted on 2021-01-07 02:45:17
Question: Problem: Given n, find the smallest positive x such that f(x) = n. f(x) is the sum of the digit sums of the factorials of the digits of x. For example, f(15) = digit_sum(1!) + digit_sum(5!) = digit_sum(1) + digit_sum(120) = (1) + (1 + 2 + 0) = 4. Breadth-first search can find the answer. Are there faster ways?

Breadth-First Search

    def bfs(target, d_map):
        # Track which values of f(x) we have visited
        visited = set([0])
        # f(x) of the current level of the search tree
        todo = [0]
        # Digits of x
        for
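
For reference, f itself is cheap to evaluate once the digit-to-digit_sum(digit!) map is precomputed; the sketch below assumes d_map is the same mapping the bfs() above expects:

    from math import factorial

    def digit_sum(n):
        return sum(int(ch) for ch in str(n))

    # digit_sum(d!) for each digit d = 0..9
    d_map = {d: digit_sum(factorial(d)) for d in range(10)}

    def f(x):
        # Sum of digit_sum(digit!) over the digits of x
        return sum(d_map[int(ch)] for ch in str(x))

    print(f(15))  # 1 + 3 = 4, matching the worked example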

Python Pulp using with Matrices

你离开我真会死。 submitted on 2020-12-29 02:53:11
Question: I am still very new to Python, after years and years of MATLAB. I am trying to use PuLP to set up an integer linear program. Given an array of numbers {P[i] : i = 1...N}, I want to maximize sum(x_i P_i) subject to the constraints A x <= b and A_eq x = b_eq, and with (vector-based) bounds LB <= x <= UB. In PuLP, however, I don't see how to do vector declarations properly. I was using:

    RANGE = range(numpy.size(P))
    x = pulp.LpVariable.dicts("x", LB_ind, UB_ind, "Integer")

where I can only enter
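
Per-index bounds don't fit the LpVariable.dicts call shown above; a common pattern is to build the variables in a dict comprehension so each one gets its own lowBound/upBound, and then add the matrix constraints row by row with lpSum. A sketch with made-up data; P, A, b, A_eq, b_eq, LB and UB below are illustrative only:

    import numpy as np
    import pulp

    P = np.array([3, 1, 4, 1, 5])
    N = P.size
    A = np.array([[1, 1, 1, 1, 1]])
    b = np.array([10])
    A_eq = np.array([[1, 0, 0, 0, 1]])
    b_eq = np.array([4])
    LB = np.zeros(N)
    UB = np.full(N, 6)

    prob = pulp.LpProblem("matrix_ilp", pulp.LpMaximize)

    # One integer variable per index, each with its own bounds
    x = {i: pulp.LpVariable(f"x_{i}", lowBound=float(LB[i]), upBound=float(UB[i]), cat="Integer")
         for i in range(N)}

    # Objective: sum_i P_i * x_i
    prob += pulp.lpSum(float(P[i]) * x[i] for i in range(N))

    # A x <= b and A_eq x = b_eq, one row at a time
    for r in range(A.shape[0]):
        prob += pulp.lpSum(float(A[r, i]) * x[i] for i in range(N)) <= float(b[r])
    for r in range(A_eq.shape[0]):
        prob += pulp.lpSum(float(A_eq[r, i]) * x[i] for i in range(N)) == float(b_eq[r])

    prob.solve()
    print([x[i].value() for i in range(N)])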
