How to force larger steps on scipy.optimize functions?

Submitted anonymously (unverified) on 2019-12-03 02:20:02

Question:

I have a function compare_images(k, a, b) that compares two 2d-arrays a and b.

Inside the function, I apply a gaussian_filter with sigma=k to a. My idea is to estimate how much I need to smooth image a in order for it to be similar to image b.
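Roughly, the structure is like this (the sum-of-squared-differences metric here is only a placeholder for illustration, not necessarily what I actually compute):

import numpy as np
from scipy.ndimage import gaussian_filter

def compare_images(k, a, b):
    # fmin passes k as a length-1 array, so pull out the scalar sigma
    sigma = float(np.atleast_1d(k)[0])
    # smooth a with that sigma and measure how far it is from b
    smoothed = gaussian_filter(a, sigma=sigma)
    return np.sum((smoothed - b) ** 2)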

The problem is that my function compare_images will only return different values if the variation in k is over 0.5, and if I do fmin(compare_images, init_guess, (a, b)) it usually gets stuck at the init_guess value.

I believe the problem is that fmin (and minimize) tends to start with very small steps, which in my case reproduce the exact same return value for compare_images, so the method thinks it has already found a minimum. It only tries a couple of times.

Is there a way to force fmin or any other minimizing function from scipy to take larger steps? Or is there any method better suited for my need?

EDIT: I found a temporary solution. First, as recommended, I used xtol=0.5 or higher as an argument to fmin. Even then, I still had some problems, and a few times fmin would just return init_guess. I then created a simple loop so that whenever fmin returned init_guess, I would generate another random init_guess and try again.

It's pretty slow, of course, but now I got it to run. It will take 20h or so to run it for all my data, but I won't need to do it again.
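For reference, the workaround is roughly this (fit_sigma is just an illustrative name, and the retry count and the range for the random guesses are arbitrary choices):

import numpy as np
from scipy.optimize import fmin

def fit_sigma(a, b, init_guess, max_retries=20):
    guess = init_guess
    for _ in range(max_retries):
        # xtol=0.5 keeps fmin from shrinking its steps below what compare_images can resolve
        result = fmin(compare_images, guess, args=(a, b), xtol=0.5, disp=False)
        if not np.isclose(result[0], guess):
            return result[0]
        # fmin just returned the starting point -- retry from a random sigma
        guess = np.random.uniform(0.5, 20.0)
    return result[0]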

Anyway, to better explain the problem for those still interested in finding a better solution:

  • I have 2 images, A and B, containing some scientific data.
  • A looks like a few dots with variable values (it's a matrix in which each valued point represents where an event occurred and its intensity)
  • B looks like a smoothed heatmap (it is the observed density of occurrences)
  • B looks just as if you had applied a gaussian filter to A, plus a bit of semi-random noise.
  • We are approximating B by applying a gaussian filter with constant sigma to A. This sigma was chosen visually, but only works for a certain class of images.
  • I'm trying to obtain an optimal sigma for each image, so that later I can find some relation between sigma and the class of event shown in each image.

Anyway, thanks for the help!

Answer 1:

Quick check: you probably really meant fmin(compare_images, init_guess, (a,b))?

If gaussian_filter behaves as you say, your function is piecewise constant, meaning that optimizers relying on derivatives (i.e. most of them) are out. You can try a global optimizer like anneal, or brute-force search over a sensible range of k's.
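For instance, a coarse grid search could look like this (brute_force_sigma is just an illustrative name, and the range of sigmas and the number of grid points are placeholders you would pick from your data; note that anneal has been removed from recent SciPy versions, so brute or basinhopping are the options that remain):

from scipy import optimize

def brute_force_sigma(a, b):
    # evaluate compare_images on a coarse grid of sigmas and keep the best one;
    # finish=None skips the default fmin polishing step, which would hit the same plateau problem
    return optimize.brute(compare_images,
                          ranges=[(0.5, 20.0)],  # sensible range of k's
                          args=(a, b),
                          Ns=40,                 # 40 grid points ~ steps of 0.5
                          finish=None)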

However, as you described the problem, in general there will only be a clear, global minimum of compare_images if b is a smoothed version of a. Your approach makes sense if you want to determine the amount of smoothing of a that makes both images most similar.

If the question is "how similar are the images", then I think pixelwise comparison (maybe with a bit of smoothing) is the way to go. Depending on what images we are talking about, it might be necessary to align the images first (e.g. for comparing photographs). Please clarify :-)

edit: Another idea that might help: rewrite compare_images so that it calculates two versions of smoothed-a -- one with sigma=floor(k) and one with ceil(k) (i.e. round k to the next-lower/higher int). Then calculate a_smooth = a_floor*(1-kfrac)+a_ceil*kfrac, with kfrac being the fractional part of k. This way the compare function becomes continuous w.r.t k.
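Something like this (smooth_interp is just an illustrative name; compare_images would call it in place of gaussian_filter):

import math
from scipy.ndimage import gaussian_filter

def smooth_interp(a, k):
    # blend the filters at floor(k) and ceil(k) so the result varies continuously with k
    k_lo, k_hi = math.floor(k), math.ceil(k)
    kfrac = k - k_lo
    a_floor = gaussian_filter(a, sigma=k_lo)
    a_ceil = gaussian_filter(a, sigma=k_hi)
    return a_floor * (1 - kfrac) + a_ceil * kfrac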

Good Luck!



Answer 2:

Basin hopping may do a bit better, as it has a high chance of continuing anyway when it gets stuck on the plateaus.

I found that it does reasonably well on this example function with a low temperature:

>>> from scipy import optimize as opt
>>> opt.basinhopping(lambda x: int(0.1*x[0]**2 + 0.1*x[1]**2), (5, -5), T=.1)
                  nfev: 409
                   fun: 0
                     x: array([ 1.73267813, -2.54527514])
               message: ['requested number of basinhopping iterations completed successfully']
                  njev: 102
                   nit: 100

