I have a Python function with 64 variables, and I tried to optimise it using the L-BFGS-B method in the minimize function; however, this method has quite a strong dependence on the initial guess.
Some common-sense suggestions for debugging and visualizing any optimizer on your function:
Are your objective function and your constraints reasonable? If the objective function is a sum, say f() + g(), print those separately for all the x in "fx-opt.nptxt" (below); if f() is 99 % of the sum and g() 1 %, investigate. Constraints: how many of the components x_i in xfinal are stuck at bounds, x_i <= lo_i or >= hi_i ?
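A minimal sketch of both checks; here fg_parts is a hypothetical helper that returns the two terms f(x) and g(x) separately:

    import numpy as np

    def check_objective_and_bounds( x, lo, hi, fg_parts, tol=1e-8 ):
        fpart, gpart = fg_parts( x )  # hypothetical: the two terms of f() + g()
        total = fpart + gpart
        print( "f(): %.3g = %.0f %% of the sum, g(): %.3g" % (
            fpart, 100 * fpart / total, gpart ))
        stuck = (x <= lo + tol) | (x >= hi - tol)  # components at / near bounds
        print( "%d of %d components stuck at bounds" % (stuck.sum(), x.size) )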
To see how sensitive the result is to the start point, run from several random x0 in the box:

    import numpy as np
    from scipy.optimize import fmin_l_bfgs_b

    # n, ntermhess, nsample, seed, bounds, func, factr, pgtol: problem-specific
    title = "%s  n %d  ntermhess %d  nsample %d  seed %d" % (  # all params!
        __file__, n, ntermhess, nsample, seed )
    print( title )
    ...
    np.random.seed( seed )  # for reproducible runs
    np.set_printoptions( threshold=100, edgeitems=10, linewidth=100,
        formatter = dict( float = lambda x: "%.3g" % x ))  # float arrays %.3g
    lo, hi = bounds.T  # vecs of numbers or +- np.inf
    print( "lo:", lo )
    print( "hi:", hi )

    fx = []  # accumulate all the final f, x
    for jsample in range(nsample):
            # x0 uniformly random in the box lo .. hi --
        x0 = lo + np.random.uniform( size=n ) * (hi - lo)
        x, f, d = fmin_l_bfgs_b( func, x0, approx_grad=1,
                    m=ntermhess, factr=factr, pgtol=pgtol )
        print( "f: %g  x: %s  x0: %s" % (f, x, x0) )
        fx.append( np.r_[ f, x ])

    fx = np.array(fx)  # nsample rows, 1 + dim cols
    np.savetxt( "fx-opt.nptxt", fx, fmt="%8.3g", header=title )  # to analyze / plot
    ffinal = fx[:,0]
    xfinal = fx[:,1:]
    print( "final f values, sorted:", np.sort(ffinal) )
    jbest = ffinal.argmin()
    print( "best x:", xfinal[jbest] )
If some of the ffinal values look reasonably good, try more random start points near those -- that's surely better than pure random. If the x's are curves, or anything real, plot the best few x0 and xfinal. (A rule of thumb is nsample ~ 5*d or 10*d in d dimensions. Too slow, too many? Reduce maxiter / maxeval, reduce ftol -- you don't need ftol = 1e-6 for exploration like this.)
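A sketch of such restarts, reusing the names from the loop above; nbest and jitter are arbitrary choices, and finite bounds are assumed:

    nbest, jitter = 3, 0.1  # arbitrary: how many best points, how far to scatter
    for j in ffinal.argsort()[:nbest]:
        x0 = xfinal[j] + jitter * (hi - lo) * np.random.uniform( -1, 1, size=n )
        x0 = np.clip( x0, lo, hi )  # stay inside the box
        x, f, d = fmin_l_bfgs_b( func, x0, approx_grad=1,
                    m=ntermhess, factr=factr, pgtol=pgtol )
        print( "restart near f %g -> f %g" % (ffinal[j], f) )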
If you want reproducible results,
then you must list ALL relevant parameters in the title
and in derived files and plots.
Otherwise, you'll be asking "where did this come from ??"
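For example, carrying the same title into a quick plot (matplotlib assumed):

    import matplotlib.pyplot as plt

    plt.plot( np.sort(ffinal) )  # spread of final f values across restarts
    plt.title( title, fontsize=8 )  # same params as in fx-opt.nptxt
    plt.savefig( "fx-opt.png" )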
After a run, also check the gradient estimate at the purported minimum x:

    from scipy.optimize import approx_fprime  # forward differences
    ## from scipy.optimize._numdiff import approx_derivative  # 3-point, much better

    for eps in [1e-3, 1e-6]:
        grad = approx_fprime( x, func, epsilon=eps )
        print( "approx_fprime eps %g: %s" % (eps, grad) )
This checks the gradient only after the optimizer has quit, though; if the gradient estimate was poor / bumpy along the way, you won't see that. Then you have to save all the intermediate [f, x, approx_fprime] to watch them too; easy in Python -- a sketch below.
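One way, a sketch: wrap func so every call records f, x and a gradient estimate. Each approx_fprime call costs n extra function evaluations, so this is for debugging only:

    trace = []  # rows of [f, x..., grad...]

    def traced_func( x ):
        f = func( x )
        trace.append( np.r_[ f, x, approx_fprime( x, func, epsilon=1e-6 ) ])
        return f

    x, f, d = fmin_l_bfgs_b( traced_func, x0, approx_grad=1,
                m=ntermhess, factr=factr, pgtol=pgtol )
    np.savetxt( "trace.nptxt", np.array(trace), fmt="%8.3g", header=title )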
In some problem areas it's common to back up and restart from a purported xmin.
For example, if you're lost on a country road,
first find a major road, then restart from there.
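A sketch of such a restart, polishing the best point found above with tighter tolerances (the factr and pgtol values here are arbitrary):

    x2, f2, d2 = fmin_l_bfgs_b( func, xfinal[jbest], approx_grad=1,
                    m=ntermhess, factr=10.0, pgtol=1e-8 )
    print( "restart from purported xmin: f %g -> %g" % (ffinal[jbest], f2) )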