I'd like to estimate the big-oh performance of some methods in a library through benchmarks. I don't need precision -- it suffices to show that something grows, say, linearly rather than quadratically.
I actually know beforehand the big-oh of most of the methods that will be tested. My main intention is to provide performance regression testing for them.
This requirement is key. You want to detect outliers with minimal data (because testing should be fast, dammit), and in my experience, fitting curves to numerical evaluations of complex recurrences with linear regression and the like will overfit. I think your initial idea is a good one.
What I would do to implement it is prepare a list of expected complexity functions g_1, g_2, ..., and for data f, test how close to constant f/g_i + g_i/f is for each i. That quantity is constant whenever f is a constant multiple of g_i (it equals c + 1/c for f = c·g_i), so a near-zero spread signals the right growth rate. With a least-squares cost function, this is just computing the variance of that quantity for each i and reporting the smallest. Eyeball the variances at the end and manually inspect unusually poor fits.
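Here is a minimal sketch of that scoring in Python; the candidate set, input sizes, and timings below are made up purely for illustration:

```python
import math
import statistics

# Hypothetical candidate complexity functions g_i.
CANDIDATES = {
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log(n),
    "O(n^2)":     lambda n: n * n,
}

def fit_scores(sizes, timings):
    """For measured timings f at the given input sizes, compute the
    variance of f/g_i + g_i/f for each candidate g_i. The quantity is
    constant when f is proportional to g_i, so lower variance = better fit."""
    scores = {}
    for name, g in CANDIDATES.items():
        ratios = [f / g(n) + g(n) / f for n, f in zip(sizes, timings)]
        scores[name] = statistics.variance(ratios)
    return scores

# Made-up timings that grow roughly like n log n:
sizes = [1_000, 2_000, 4_000, 8_000, 16_000]
timings = [0.011, 0.024, 0.052, 0.112, 0.240]
for name, score in sorted(fit_scores(sizes, timings).items(), key=lambda kv: kv[1]):
    print(f"{name:12s} variance = {score:.3e}")
```

In a regression-testing setup, one might record the winning candidate (and its variance) per method, then flag a build for manual inspection when the winner changes or the variance degrades noticeably.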