convex-optimization

CVXPY throws SolverError

旧巷老猫 submitted on 2019-12-12 18:27:36
Question: When using CVXPY, I frequently get a "SolverError". The docs just say this is caused by numerical issues, but give no further information about how to avoid them. The following code snippet is an example: the problem is trivial, but the 'CVXOPT' solver just throws "SolverError". It is true that if we change the solver to another one, like 'ECOS', the problem is solved as expected. But the point is, 'CVXOPT' should in principle solve this trivial problem, and it really baffles me why…

Minimizing quadratic function subject to norm inequality constraint

杀马特。学长 韩版系。学妹 submitted on 2019-12-12 07:24:00
Question: I am trying to solve the following inequality-constrained problem. Given time-series data for N stocks, I am trying to construct a portfolio weight vector that minimizes the variance of the returns. The objective is:

min_w  w^T Σ w
s.t.   e_n^T w = 1
       ||w||_2 ≤ C

where w is the vector of weights, Σ is the covariance matrix, e_n is a vector of ones, and C is a constant. The second constraint (||w||_2 ≤ C) is an inequality constraint on the 2-norm of the weights. I…

How to tell if Newton's Method Fails

我怕爱的太早我们不能终老 submitted on 2019-12-11 12:47:00
Question: I am creating a basic Newton's-method algorithm for an unconstrained optimization problem, and the results from the algorithm are not what I expected. It is a simple objective function, so it is clear that the algorithm should converge to (1, 1). This is confirmed by a gradient-descent algorithm I created previously, here:

def grad_descent(x, t, count, magnitude):
    xvalues.append(x)
    gradvalues.append(np.array([dfx1(x), dfx2(x)]))
    fvalues.append(f(x))
    temp = x - t*dfx(x)
    x = temp
    magnitude = mag(dfx(x))
…
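Since the original objective is not shown, here is a sketch of Newton's method on an assumed stand-in function with minimum at (1, 1), reporting the usual failure modes explicitly (singular Hessian, non-descent step, iteration cap) so a bad run is detected rather than silently returning garbage:

```python
import numpy as np

# Assumed stand-in objective with minimum at (1, 1); the question's f is not shown.
def f(x):
    return (x[0] - 1)**2 + 2 * (x[1] - 1)**2

def grad(x):
    return np.array([2 * (x[0] - 1), 4 * (x[1] - 1)])

def hess(x):
    return np.array([[2.0, 0.0], [0.0, 4.0]])

def newton(x0, tol=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # gradient small enough: converged
            return x, k, "converged"
        H = hess(x)
        try:
            step = np.linalg.solve(H, -g)    # raises if H is singular
        except np.linalg.LinAlgError:
            return x, k, "singular Hessian"
        if g @ step >= 0:                    # H not positive definite here
            return x, k, "non-descent step"
        x = x + step
    return x, max_iter, "max iterations reached"

x, iters, status = newton([5.0, -3.0])
print(x, iters, status)
```

On a quadratic like this, Newton's method lands on the minimizer in a single step; if your implementation wanders instead, the returned status string tells you which check tripped.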

Getting more details from the optim function in R

喜夏-厌秋 submitted on 2019-12-07 11:45:19
Question: I'm not very familiar with the optim function, and I wanted to get the following information from its results: a) how many iterations were needed to reach the result, and b) a plot of the sequence of partial solutions, that is, the solution obtained at the end of each iteration. My code so far looks like this:

f1 <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  x1^2 + 3*x2^2
}
res <- optim(c(1,1), f1, method="CG")

How can I improve it to get further information? Thanks in advance. Answer 1: You could modify…

Alternatives to FMINCON

*爱你&永不变心* submitted on 2019-12-06 12:21:35
Question: Are there any faster and more efficient solvers than fmincon? I'm using fmincon for a specific problem, and I run out of memory for a modest-sized vector variable. I don't have any supercomputers or cloud-computing options at my disposal, either. I know that any alternative solution will still run out of memory, but I'm just trying to see where the problem is. P.S. I don't want a solution that would change the way I'm approaching the actual problem. I know convex optimization is the way to go…

Getting more details from the optim function in R

老子叫甜甜 submitted on 2019-12-05 19:14:21
I'm not very familiar with the optim function, and I wanted to get the following information from its results: a) how many iterations were needed to reach the result, and b) a plot of the sequence of partial solutions, that is, the solution obtained at the end of each iteration. My code so far looks like this:

f1 <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  x1^2 + 3*x2^2
}
res <- optim(c(1,1), f1, method="CG")

How can I improve it to get further information? Thanks in advance. You could modify your function to store the values that are passed into it in a global list:

i <- 0
vals <- list()
f1 <-…
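The same bookkeeping idea carries over to Python, sketched here with scipy's minimize (method="CG", mirroring optim's CG) and a callback that records one iterate per iteration. This is an analogous illustration, not the R answer itself:

```python
import numpy as np
from scipy.optimize import minimize

trace = []                       # one stored iterate per iteration

def f1(x):
    return x[0]**2 + 3 * x[1]**2

res = minimize(f1, x0=[1.0, 1.0], method="CG",
               callback=lambda xk: trace.append(np.array(xk)))

print(res.nit, len(trace))       # iteration count and number of stored points
print(res.x)                     # final iterate, near the optimum (0, 0)
```

The callback approach records only accepted iterates; wrapping `f1` itself (as the R answer does) also captures the line-search trial points in between.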

Alternatives to FMINCON

眉间皱痕 submitted on 2019-12-04 15:58:49
Are there any faster and more efficient solvers than fmincon? I'm using fmincon for a specific problem, and I run out of memory for a modest-sized vector variable. I don't have any supercomputers or cloud-computing options at my disposal, either. I know that any alternative solution will still run out of memory, but I'm just trying to see where the problem is. P.S. I don't want a solution that would change the way I'm approaching the actual problem. I know convex optimization is the way to go, and I have already done enough work to get to this point. P.P.S. I saw the other question regarding the…
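One memory-frugal option (an assumption, since the original problem is not shown) is a limited-memory quasi-Newton method: L-BFGS-B stores only a handful of gradient-correction pairs instead of a dense n-by-n Hessian approximation, so it scales to large variable vectors where a dense-Hessian method like fmincon's default runs out of memory. A sketch with a toy quadratic:

```python
import numpy as np
from scipy.optimize import minimize

n = 100_000                                # large variable vector

def f(x):
    return 0.5 * np.dot(x, x) - np.sum(x)  # quadratic with minimum at x = 1

def g(x):
    return x - 1.0                         # exact gradient

res = minimize(f, np.zeros(n), jac=g, method="L-BFGS-B",
               options={"maxcor": 10})     # keep only 10 correction pairs
print(res.success, float(res.fun))
```

Memory use here is O(n * maxcor) rather than O(n^2). MATLAB-side equivalents exist (e.g. fminunc with the 'quasi-newton' algorithm, or third-party L-BFGS implementations), but whether they fit the constraints of the original problem is not something the excerpt tells us.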

How do I implement the optimization function in tensorflow?

二次信任 submitted on 2019-12-02 17:05:09
Question: Minimize Σ_i ( ||x_i − X c_i||^2 + λ ||c_i|| ), s.t. c_ii = 0, where X is a matrix of shape d × n, C is of shape n × n, and x_i and c_i denote a column of X and C respectively. X is known here, and based on X we want to find C. Answer 1: Usually with a loss like that you need to vectorize it, instead of working with columns:

loss = X - tf.matmul(X, C)
loss = tf.reduce_sum(tf.square(loss))
reg_loss = tf.reduce_sum(tf.square(C), 0)  # L2 loss for each column
reg_loss = tf.reduce_sum(tf.sqrt(reg_loss))
total_loss = loss +…
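One plausible way to finish the snippet (an assumption, since the answer is cut off) is total_loss = loss + λ * reg_loss, with the c_ii = 0 constraint enforced by masking out C's diagonal before every use. A self-contained TF2 sketch with made-up data:

```python
import numpy as np
import tensorflow as tf

# Made-up problem data (d, n, lambda are illustrative choices, not from the question).
d, n, lam = 5, 8, 0.1
X = tf.constant(np.random.default_rng(0).normal(size=(d, n)), dtype=tf.float32)
C = tf.Variable(tf.zeros((n, n)))

mask = 1.0 - tf.eye(n)                    # zeros on the diagonal, ones elsewhere

opt = tf.keras.optimizers.SGD(learning_rate=0.01)
for _ in range(200):
    with tf.GradientTape() as tape:
        Cm = C * mask                     # enforce c_ii = 0
        recon = tf.reduce_sum(tf.square(X - tf.matmul(X, Cm)))
        # column-wise L2 norms; small epsilon keeps sqrt differentiable at 0
        reg = tf.reduce_sum(tf.sqrt(tf.reduce_sum(tf.square(Cm), axis=0) + 1e-12))
        total = recon + lam * reg
    opt.apply_gradients([(tape.gradient(total, C), C)])

print(float(total))
```

Because the gradient flows through the mask, C's diagonal never moves off zero, so the constraint holds at every step rather than being patched up afterward.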

How to calculate weight to minimize variance?

六月ゝ 毕业季﹏ submitted on 2019-12-02 05:44:58
Question: Given several vectors:

x1 = [3 4 6]
x2 = [2 8 1]
x3 = [5 5 4]
x4 = [6 2 1]

I want to find weights w1, w2, w3 for the components and take the weighted sum of each vector, y_i = w1*x_i1 + w2*x_i2 + w3*x_i3; for example, y1 = 3*w1 + 4*w2 + 6*w3. The goal is to minimize the variance of these values (y1, y2, y3, y4). Note: w1, w2, w3 should be > 0, and w1 + w2 + w3 = 1. I don't know what kind of problem this is, or how to solve it in Python or MATLAB. Answer 1: You can start by building a loss function stating…

How to calculate weight to minimize variance?

白昼怎懂夜的黑 submitted on 2019-12-02 03:40:45
Given several vectors:

x1 = [3 4 6]
x2 = [2 8 1]
x3 = [5 5 4]
x4 = [6 2 1]

I want to find weights w1, w2, w3 for the components and take the weighted sum of each vector, y_i = w1*x_i1 + w2*x_i2 + w3*x_i3; for example, y1 = 3*w1 + 4*w2 + 6*w3. The goal is to minimize the variance of these values (y1, y2, y3, y4). Note: w1, w2, w3 should be > 0, and w1 + w2 + w3 = 1. I don't know what kind of problem this is, or how to solve it in Python or MATLAB. You can start by building a loss function stating the variance and the constraints on the w's. The mean is m = (1/4)*(y1 + y2 + y3 + y4). The variance is then…
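This is a (convex, quadratic) constrained minimization. Written out with scipy as a sketch, SLSQP handles the sum-to-one equality constraint and the positivity bounds directly:

```python
import numpy as np
from scipy.optimize import minimize

# Rows are the vectors x1..x4; columns correspond to the weights w1..w3.
X = np.array([[3, 4, 6],
              [2, 8, 1],
              [5, 5, 4],
              [6, 2, 1]], dtype=float)

def variance(w):
    y = X @ w                  # y_i = w1*x_i1 + w2*x_i2 + w3*x_i3
    return np.var(y)

res = minimize(variance, x0=np.full(3, 1/3),
               method="SLSQP",
               bounds=[(1e-9, None)] * 3,                          # w_i > 0
               constraints=[{"type": "eq",
                             "fun": lambda w: w.sum() - 1}])       # sum = 1
print(np.round(res.x, 4), round(float(res.fun), 6))
```

With four vectors and only three weights the variance generally cannot be driven exactly to zero, so the solver returns the best attainable compromise under the constraints.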