quadratic-programming

Optimisation in SWI-Prolog

拈花ヽ惹草 submitted on 2019-12-05 10:55:50
Say I want to find argmax over (x, y, z) of -1/2(20x^2 + 32xy + 16y^2) + 2x + 2y, subject to: x >= 0, y >= 0, z >= 0 and -x - y + z = 0. Setting the partial derivatives to zero gives -20x - 16y + 2 = 0 and -16x - 16y + 2 = 0, so we could have x = 0, y = 1/8 and z = 1/8. How would I do this in SWI-Prolog? I see that there is library(simplex) for linear solving, but this is a quadratic problem, even though the partial derivatives themselves are linear. (I am a bit confused!) This is what I have:

```prolog
:- use_module(library(simplex)).

my_constraints(S) :-
    gen_state(S0),
    constraint([-20*x, -16*y] = 0, S0, S1),
    constraint([-16*x, -16*y] = 0, S1, S2),
    constraint([x] >=
```
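library(simplex) only handles linear objectives, so it cannot express this quadratic objective directly. As a numerical cross-check of the hand-derived optimum, here is a minimal sketch in Python with SciPy (not part of the original question; the starting point and tolerances are arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize

# Maximize -1/2*(20x^2 + 32xy + 16y^2) + 2x + 2y
# by minimizing its negation.
def neg_obj(v):
    x, y, z = v
    return 0.5 * (20 * x**2 + 32 * x * y + 16 * y**2) - 2 * x - 2 * y

constraints = [{"type": "eq", "fun": lambda v: -v[0] - v[1] + v[2]}]  # -x - y + z = 0
bounds = [(0, None)] * 3  # x, y, z >= 0

res = minimize(neg_obj, x0=[0.1, 0.1, 0.2], bounds=bounds, constraints=constraints)
print(res.x)  # close to [0, 0.125, 0.125], matching the hand calculation
```

The Hessian of the quadratic part is positive definite, so this minimum is unique.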

CVXOPT QP Solver: TypeError: 'A' must be a 'd' matrix with 1000 columns

ぃ、小莉子 submitted on 2019-12-01 17:11:20
I'm trying to use the CVXOPT qp solver to compute the Lagrange multipliers for a Support Vector Machine:

```python
def svm(X, Y, c):
    m = len(X)
    P = matrix(np.dot(Y, Y.T) * np.dot(X, X.T))
    q = matrix(np.ones(m) * -1)
    g1 = np.asarray(np.diag(np.ones(m) * -1))
    g2 = np.asarray(np.diag(np.ones(m)))
    G = matrix(np.append(g1, g2, axis=0))
    h = matrix(np.append(np.zeros(m), (np.ones(m) * c), axis=0))
    A = np.reshape((Y.T), (1, m))
    b = matrix([0])
    print (A).shape
    A = matrix(A)
    sol = solvers.qp(P, q, G, h, A, b)
    print sol
```

Here X is a 1000 x 2 matrix and Y has the same number of labels. The solver throws the following

MATLAB: Find abbreviated version of matrix that minimises sum of matrix elements

こ雲淡風輕ζ submitted on 2019-12-01 03:26:25
I have a 151-by-151 matrix A. It's a correlation matrix, so there are 1s on the main diagonal and repeated values above and below the main diagonal. Each row/column represents a person. For a given integer n, I seek to reduce the size of the matrix by kicking people out, such that I am left with an n-by-n correlation matrix that minimises the total sum of the elements. In addition to obtaining the abbreviated matrix, I also need to know the row numbers of the people who should be booted out of the original matrix (or their column numbers; they'll be the same). As a starting point
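Choosing the best n of 151 people exactly is a combinatorial (binary quadratic) problem, but a simple greedy heuristic often gives a usable starting point: repeatedly drop the person whose row currently has the largest sum. A minimal sketch in Python (a heuristic, not guaranteed optimal; the demo matrix is invented):

```python
import numpy as np

def greedy_trim(A, n):
    """Shrink correlation matrix A to n x n by repeatedly removing the
    person whose row currently has the largest sum.
    Returns (submatrix, removed_original_indices)."""
    keep = list(range(A.shape[0]))
    removed = []
    while len(keep) > n:
        sub = A[np.ix_(keep, keep)]
        worst = int(np.argmax(sub.sum(axis=1)))  # largest row sum
        removed.append(keep.pop(worst))
    return A[np.ix_(keep, keep)], removed

# Tiny demo with a 4x4 correlation-like matrix: person 1 is the most
# correlated with everyone else, so it is removed first.
A = np.array([[1.0, 0.9, 0.1, 0.2],
              [0.9, 1.0, 0.3, 0.1],
              [0.1, 0.3, 1.0, 0.0],
              [0.2, 0.1, 0.0, 1.0]])
B, removed = greedy_trim(A, 3)
print(B.shape, removed)  # (3, 3) [1]
```

The exact version can be posed as a binary QP (minimise x'Ax with sum(x) = n, x binary) for an MIQP solver.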

Linear regression with constraints on the coefficients

断了今生、忘了曾经 submitted on 2019-11-30 20:55:10
I am trying to perform linear regression for a model like this: Y = aX1 + bX2 + c, so Y ~ X1 + X2. Suppose I have the following response vector:

```r
set.seed(1)
Y <- runif(100, -1.0, 1.0)
```

And the following matrix of predictors:

```r
X1 <- runif(100, 0.4, 1.0)
X2 <- sample(rep(0:1, each = 50))
X <- cbind(X1, X2)
```

I want to use the following constraints on the coefficients: a + c >= 0 and c >= 0. So no constraint on b. I know that the glmc package can be used to apply constraints, but I was not able to determine how to apply it for my constraints. I also know that contr.sum can be used so that all coefficients
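Least squares with linear inequality constraints on the coefficients is itself a small quadratic program, so it can be solved directly without glmc. A sketch in Python with SciPy (the data is regenerated here, so the numbers will differ from the R example; beta = (a, b, c)):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 100
X1 = rng.uniform(0.4, 1.0, n)
X2 = rng.permutation(np.repeat([0, 1], 50)).astype(float)
Y = rng.uniform(-1.0, 1.0, n)

D = np.column_stack([X1, X2, np.ones(n)])  # design matrix for (a, b, c)

def sse(beta):
    r = Y - D @ beta          # residuals
    return r @ r              # sum of squared errors

cons = [
    {"type": "ineq", "fun": lambda b: b[0] + b[2]},  # a + c >= 0
    {"type": "ineq", "fun": lambda b: b[2]},         # c >= 0
]
res = minimize(sse, x0=np.zeros(3), constraints=cons)
a, b, c = res.x
print(a, b, c)
```

Both constraints are satisfied at the solution (up to solver tolerance); b is left free, as required.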

How to convert quadratic to linear program?

陌路散爱 submitted on 2019-11-28 23:44:31
I have an optimization problem whose objective function contains the product of two variables, making the model quadratic. I am currently using ZIMPL to parse the model and GLPK to solve it. As they don't support quadratic programming, I would need to convert this to an MILP. The first variable is real, in the range [0, 1]; the second one is real, in the range [0, inf). The second could without a problem be made integer. The critical part in the objective function looks like this: max ... + var1 * var2 + ... I had similar problems in the constraints, but those were easily solvable. How could I solve this
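If var2 is made integer and given an upper bound, it can be written as a weighted sum of binary variables, and the product of the continuous var1 in [0, 1] with each binary variable z can be linearized exactly with four standard constraints. A sketch verifying that linearization numerically (plain Python, standing in for the ZIMPL constraints):

```python
# Standard linearization of w = x * z, where x in [0, 1] is continuous
# and z in {0, 1} is binary (an integer var2 with upper bound U reduces
# to this via a binary expansion var2 = sum(2^k * z_k)):
#     w <= x
#     w <= z
#     w >= x + z - 1
#     w >= 0
# For binary z these bounds pin w to exactly x * z.

def linearized_bounds(x, z):
    lo = max(0.0, x + z - 1.0)   # w >= x + z - 1 and w >= 0
    hi = min(x, float(z))        # w <= x and w <= z
    return lo, hi

for z in (0, 1):
    for x in (0.0, 0.3, 1.0):
        lo, hi = linearized_bounds(x, z)
        assert abs(lo - x * z) < 1e-12 and abs(hi - x * z) < 1e-12
print("linearization is exact for binary z")
```

In the MILP, each bit z_k of var2 gets its own product variable w_k with these four constraints, and var1 * var2 becomes sum(2^k * w_k).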