MATLAB's 'fminsearch' different from Octave's 'fmincg'


Question


I am trying to get consistent answers for a simple optimization problem between two functions in MATLAB and Octave. Here is my code:

  options = optimset('MaxIter', 500 , 'Display', 'iter', 'MaxFunEvals', 1000);

  objFunc = @(t) lrCostFunction(t,X,y);

  [result1] = fminsearch(objFunc, theta, options);
  [result2] = fmincg(objFunc, theta, options);
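
(lrCostFunction itself is not shown in the question. Purely as a point of reference, the sketch below is one plausible form of a logistic-regression cost function with this call signature; the exact implementation, including the optional gradient output that fminsearch ignores but a routine like fmincg can use, is an assumption.)

  % Sketch only -- the real lrCostFunction was not posted.
  % Unregularized logistic-regression cost; the second output (gradient)
  % is ignored by fminsearch but usable by gradient-based solvers.
  function [J, grad] = lrCostFunction(theta, X, y)
    m = length(y);                    % number of training examples
    h = 1 ./ (1 + exp(-X * theta));   % sigmoid hypothesis
    J = (1/m) * sum(-y .* log(h) - (1 - y) .* log(1 - h));  % cross-entropy cost
    grad = (1/m) * (X' * (h - y));    % gradient with respect to theta
  end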

(Bear in mind that X, y, and theta are defined earlier and are correct.) The problem is the following: when I run the above code in MATLAB using fmincg (with fminsearch commented out), I get the correct answer.

However, if I comment out fmincg and run fminsearch instead, I get no convergence whatsoever. In fact, the output looks like this:

   491          893         0.692991         reflect
   492          894         0.692991         reflect
   493          895         0.692991         reflect
   494          896         0.692991         reflect
   495          897         0.692991         reflect
   496          898         0.692991         reflect
   497          899         0.692991         reflect
   498          900         0.692991         reflect
   499          901         0.692991         reflect
   500          902         0.692991         reflect



Exiting: Maximum number of iterations has been exceeded
         - increase MaxIter option.
         Current function value: 0.692991 

Increasing the number of iterations doesn't help at all. In contrast, when using fmincg, I see it converging, and it finally gives me the correct result:

Iteration     1 | Cost: 2.802128e-001
Iteration     2 | Cost: 9.454389e-002
Iteration     3 | Cost: 5.704641e-002
Iteration     4 | Cost: 4.688190e-002
Iteration     5 | Cost: 3.759021e-002
Iteration     6 | Cost: 3.522008e-002
Iteration     7 | Cost: 3.234531e-002
Iteration     8 | Cost: 3.145034e-002
Iteration     9 | Cost: 3.008919e-002
Iteration    10 | Cost: 2.994639e-002
Iteration    11 | Cost: 2.678528e-002
Iteration    12 | Cost: 2.660323e-002
Iteration    13 | Cost: 2.493301e-002

.
.
.


Iteration   493 | Cost: 1.311466e-002
Iteration   494 | Cost: 1.311466e-002
Iteration   495 | Cost: 1.311466e-002
Iteration   496 | Cost: 1.311466e-002
Iteration   497 | Cost: 1.311466e-002
Iteration   498 | Cost: 1.311466e-002
Iteration   499 | Cost: 1.311466e-002
Iteration   500 | Cost: 1.311466e-002

This gives the correct answer.

So what gives? Why is fminsearch not working in this minimization case?

Additional context:

1) Octave is the language that has fmincg, by the way; however, a quick Google search also turns up this function, so my MATLAB can call either.

2) My problem has a convex error surface, and that surface is everywhere differentiable.

3) I only have access to fminsearch and fminbnd (which I can't use since this problem is multivariate, not univariate), so that leaves fminsearch. Thanks!


Answer 1:


I assume that fmincg implements a conjugate-gradient type of optimization, while fminsearch is a derivative-free optimization method. So why do you expect them to give the same results? They are completely different algorithms.

I would expect fminsearch to find the global minimum of a convex cost function. At least, that has been my experience so far.

The first line of fminsearch's output suggests that objFunc(theta) is ~0.69, but this value is very different from the cost values in fmincg's output. So I would look for possible bugs outside fminsearch. Make sure you are giving the same cost function and initial point to both algorithms.
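
For example (a sketch using only the handles already defined in the question), evaluate the objective once at the starting point and compare it against the first value each solver reports:

  % If this matches fminsearch's first reported value (~0.69) but not the
  % cost fmincg starts from, the two calls are probably not seeing the same
  % objective function or the same initial theta.
  initialCost = objFunc(theta);
  fprintf('Cost at initial theta: %e\n', initialCost);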




Answer 2:


This is a problem I've noticed sometimes with this algorithm. It may not be the answer you are looking for, but what seems to work for me in these cases is to modify the tolerance values at which it terminates. What I see is an oscillation between two points that give equal results. I know this happens in LabVIEW, and I can only speculate that it happens in MATLAB.

Unless I see your data, I can't comment further, but that is what I suggest.

Note: by increasing the tolerance, the goal is to stop the algorithm before it reaches that state. The result becomes less precise, but usually the number of significant digits you need is rather small anyway.
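
In the question's own setup, that suggestion would look something like this (a sketch; the tolerance values are purely illustrative, just looser than the defaults):

  % Loosen the termination tolerances so the search stops before it starts
  % oscillating between two points of equal cost. How loose is a trade-off
  % against how much precision you are willing to give up.
  options = optimset('MaxIter', 500, 'MaxFunEvals', 1000, 'Display', 'iter', ...
                     'TolX', 1e-3, 'TolFun', 1e-3);
  [result1] = fminsearch(objFunc, theta, options);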



Source: https://stackoverflow.com/questions/10770934/matlabs-fminsearch-different-from-octaves-fmincg
