Empirically estimating big-oh time efficiency

Asked by 清酒与你 · 2020-12-23 16:52

Background

I'd like to estimate the big-oh performance of some methods in a library through benchmarks. I don't need precision -- it suffices to show that someth…

10 answers
  •  情书的邮戳 · 2020-12-23 17:16

    I actually know beforehand the big-oh of most of the methods that will be tested. My main intention is to provide performance regression testing for them.

    This requirement is key. You want to detect outliers with minimal data (because testing should be fast, dammit), and in my experience, fitting curves to numerical evaluations of complex recurrences, linear regression, and the like will overfit. I think your initial idea is a good one.

    What I would do to implement it is prepare a list of expected complexity functions g1, g2, ..., and for data f, test how close to constant f/gi + gi/f is for each i. With a least squares cost function, this is just computing the variance of that quantity for each i and reporting the smallest. Eyeball the variances at the end and manually inspect unusually poor fits.
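    The test described above can be sketched in Python. Everything here is illustrative: the input sizes, the timings (generated to follow n log n exactly so the example is deterministic), and the candidate set are made-up placeholders, not data from the original post.

    ```python
    import numpy as np

    # Input sizes and (synthetic) timings for the method under test.
    # Real data would come from a benchmark harness and would be noisy.
    n = np.array([100, 200, 400, 800, 1600, 3200], dtype=float)
    f = 3e-7 * n * np.log(n)

    # Candidate complexity functions g_i evaluated at the same sizes.
    candidates = {
        "O(n)": n,
        "O(n log n)": n * np.log(n),
        "O(n^2)": n ** 2,
    }

    def fit_score(f, g):
        """Variance of r + 1/r where r = f/g, rescaled so r has mean 1.

        If f is proportional to g, r is constant and the score is ~0.
        Rescaling removes the unknown constant factor and keeps scores
        comparable across candidates.
        """
        r = f / g
        r = r / r.mean()
        return np.var(r + 1.0 / r)

    scores = {name: fit_score(f, g) for name, g in candidates.items()}
    best = min(scores, key=scores.get)
    print(best, scores)  # prints "O(n log n)" and the per-candidate variances
    ```

    In a regression test you would then flag a method whose best-fit candidate changes between runs, or whose smallest variance is unusually large, and inspect those cases by hand, as suggested above.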
