I'd like to estimate the big-O performance of some methods in a library through benchmarks. I don't need precision -- it suffices to show that someth…
We have recently implemented a tool that does semi-automated average runtime analysis for JVM code. You do not even need access to the sources. It is not published yet (we are still ironing out some usability flaws) but will be soon, I hope.
It is based on maximum-likelihood models of program execution [1]. In short, the byte code is augmented with cost counters. The target algorithm is then run (distributed, if you want) on a set of inputs whose distribution you control. The aggregated counters are extrapolated to functions using involved heuristics (method of least squares on crack, sort of). From those, more science leads to an estimate for the average runtime asymptotics (3.576n - 1.23log(n) + 1.7, for instance). In particular, the method is able to reproduce rigorous classic analyses done by Knuth and Sedgewick with high precision.
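To give a feel for the counter-based idea with a hand-rolled toy (all names here are mine, and the counter is inserted by hand; the real tool injects counters into the byte code automatically and uses far more involved curve fitting), here is a Java sketch that counts key comparisons in insertion sort over random inputs of growing size and fits a growth exponent by ordinary least squares on log-log data:

```java
import java.util.Random;

// Toy illustration of counter-based cost analysis (hand-instrumented).
public class CostCounterDemo {
    static long comparisons; // stand-in for an injected cost counter

    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0) {
                comparisons++;            // count every key comparison
                if (a[j] <= key) break;
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    // Fit log(cost) = b * log(n) + c by ordinary least squares;
    // the slope b estimates the growth exponent of the average cost.
    static double estimateExponent() {
        Random rnd = new Random(42);
        int[] sizes = {1000, 2000, 4000, 8000, 16000};
        double[] x = new double[sizes.length], y = new double[sizes.length];
        for (int k = 0; k < sizes.length; k++) {
            int trials = 5;
            comparisons = 0;
            for (int t = 0; t < trials; t++)
                insertionSort(rnd.ints(sizes[k]).toArray());
            x[k] = Math.log(sizes[k]);
            y[k] = Math.log((double) comparisons / trials);
        }
        double mx = 0, my = 0;
        for (int k = 0; k < x.length; k++) { mx += x[k]; my += y[k]; }
        mx /= x.length;
        my /= x.length;
        double num = 0, den = 0;
        for (int k = 0; k < x.length; k++) {
            num += (x[k] - mx) * (y[k] - my);
            den += (x[k] - mx) * (x[k] - mx);
        }
        return num / den;
    }

    public static void main(String[] args) {
        System.out.printf("estimated growth exponent: %.2f%n", estimateExponent());
    }
}
```

Since insertion sort performs about n^2/4 comparisons on average for random input, the fitted slope comes out close to 2, i.e. the counter data alone recovers the quadratic average case, with no wall-clock timing involved.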
The big advantage of this method over what others suggest is that it does not rely on time measurements at all: the results are independent of the machine, the virtual machine, and even the programming language. You really get information about your algorithm itself, without all the noise.
And---probably the killer feature---it comes with a complete GUI that guides you through the whole process.
See my answer on cs.SE for a little more detail and further references. You can find a preliminary website (including a beta version of the tool and the published papers) here.
(Note that average runtime can be estimated this way, while worst-case runtime in general cannot, unless you know which instances are the worst case. If you do, you can use the average-case machinery for worst-case analysis: just feed the tool only worst-case instances. In general, though, runtime bounds cannot be decided.)
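As a concrete, hand-rolled illustration of that last point (names and instrumentation are mine, not the tool's): if you know that strictly decreasing input is the worst case for insertion sort, then feeding only such instances turns the comparison counter into an exact worst-case count of n(n-1)/2.

```java
// Toy illustration: worst-case analysis by feeding only worst-case instances.
public class WorstCaseDemo {
    static long comparisons; // stand-in for an injected cost counter

    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0) {
                comparisons++;            // count every key comparison
                if (a[j] <= key) break;
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    // Strictly decreasing arrays force insertion sort into its maximal
    // number of comparisons: 1 + 2 + ... + (n-1) = n(n-1)/2.
    static long worstCaseComparisons(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = n - i;
        comparisons = 0;
        insertionSort(a);
        return comparisons;
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000})
            System.out.println(n + " -> " + worstCaseComparisons(n)
                    + " comparisons, n(n-1)/2 = " + (long) n * (n - 1) / 2);
    }
}
```

The counter matches the closed form exactly, so on known worst-case inputs the "average-case" measurement machinery doubles as a worst-case analysis.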