benchmarking

How not to optimize away - mechanics of a folly function

女生的网名这么多〃 submitted on 2019-11-28 23:13:30
I was searching for a programming technique that would ensure variables used for benchmarking (without observable side effects) won't be optimized away by the compiler. This gives some info, but I ended up using folly and the following function: /** * Call doNotOptimizeAway(var) against variables that you use for * benchmarking but otherwise are useless. The compiler tends to do a * good job at eliminating unused variables, and this function fools * it into thinking var is in fact needed. */ #ifdef _MSC_VER #pragma optimize("", off) template <class T> void doNotOptimizeAway(T&& datum) { datum =

What's the best way to determine at runtime if a browser is too slow to gracefully handle complex JavaScript/CSS?

夙愿已清 submitted on 2019-11-28 22:20:47
Question: I'm toying with the idea of progressively enabling/disabling JavaScript (and CSS) effects on a page, depending on how fast or slow the browser seems to be. I'm specifically thinking about low-powered mobile devices and old desktop computers -- not just IE6 :-) Are there any examples of this sort of thing being done? What would be the best ways to measure this, accounting for things like temporary slowdowns on busy CPUs? Notes: I'm not interested in browser/OS detection. At the moment, I'm not

Array vs Slice: accessing speed

风流意气都作罢 submitted on 2019-11-28 21:25:27
This question is about the speed of accessing elements of arrays and slices, not about the efficiency of passing them to functions as arguments. I would expect arrays to be faster than slices in most cases, because a slice is a data structure describing a contiguous section of an array, so there may be an extra step involved when accessing elements of a slice (indirectly the elements of its underlying array). So I wrote a little test to benchmark a batch of simple operations. There are 4 benchmark functions: the first 2 test a global slice and a global array, the other 2 test a local slice

Measure (max) memory usage with IPython—like timeit but memit

狂风中的少年 submitted on 2019-11-28 21:04:41
Question: I have a simple task: in addition to measuring the time it takes to execute a chunk of code in Python, I need to measure the amount of memory a given chunk of code needs. IPython has a nice utility called timeit which works like this: In [10]: timeit 3 + 3 10000000 loops, best of 3: 24 ns per loop What I'm looking for is something like this: In [10]: memit 3 + 3 10000000 loops, best of 3: 303 bytes per loop I'm aware that this probably does not come built in with IPython—but I like the timeit
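The %memit magic being asked for ships with the third-party memory_profiler package. As a rough stdlib-only sketch of the same idea, the peak allocation during a call can be pulled from tracemalloc (the helper name and toy workload below are mine, not from the question):

```python
import tracemalloc

def peak_alloc(fn, *args, **kwargs):
    """Run fn and return (result, peak bytes allocated during the call)."""
    tracemalloc.start()
    try:
        result = fn(*args, **kwargs)
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return result, peak

# Toy workload: allocate a 100k-element list (~800 kB of pointers alone).
result, peak = peak_alloc(lambda: [0] * 100_000)
print(f"peak: {peak} bytes")
```

Note that tracemalloc only sees allocations made through Python's allocator, so memory grabbed by C extensions (e.g. NumPy buffers allocated outside it) may be undercounted; memory_profiler, which samples process RSS, catches those too.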

benchmarks: does python have a faster way of walking a network folder?

隐身守侯 submitted on 2019-11-28 20:57:43
Question: I need to walk through a folder with approximately ten thousand files. My old VBScript is very slow at handling this. Since then I've started using Ruby and Python, so I made a benchmark between the three scripting languages to see which would be the best fit for this job. The results of the tests below, on a subset of 4500 files on a shared network, are: Python: 106 seconds, Ruby: 5 seconds, VBScript: 124 seconds. That VBScript would be slowest was no surprise, but I can't explain the difference
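On the Python side, much of the cost on a network share is the extra round trip of a per-file stat() call. os.scandir (Python 3.5+) yields directory entries with file-type information already attached, avoiding those calls; a walk built on it might be sketched like this (the helper name is mine):

```python
import os
import time

def walk_files(root):
    """Collect file paths using os.scandir, whose DirEntry objects carry
    cached type info, avoiding a per-file stat() round trip."""
    paths = []
    stack = [root]
    while stack:
        current = stack.pop()
        with os.scandir(current) as entries:
            for entry in entries:
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)
                else:
                    paths.append(entry.path)
    return paths

start = time.perf_counter()
files = walk_files(".")
print(f"{len(files)} files in {time.perf_counter() - start:.3f}s")
```

os.walk itself was reimplemented on top of scandir in Python 3.5, so simply upgrading the interpreter may close part of the gap measured above.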

How to measure file read speed without caching?

为君一笑 submitted on 2019-11-28 20:42:44
My Java program spends most of its time reading some files, and I want to optimize it, e.g., by using concurrency, prefetching, memory-mapped files, or whatever. Optimizing without benchmarking is nonsense, so I benchmark. However, during the benchmark the whole file content gets cached in RAM, unlike in a real run. Thus the run times of the benchmark are much smaller and most probably unrelated to reality. I'd need to somehow tell the OS (Linux) not to cache the file content, or better, to wipe out the cache before each benchmark run. Or maybe consume most of the available RAM (32 GB), so
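The question is about Java, but the cache-eviction trick is an OS-level one: on Linux, posix_fadvise(POSIX_FADV_DONTNEED) asks the kernel to drop a single file's cached pages, and it needs no root. A Python sketch of the idea (the function name is mine; Java would reach the same syscall through a native call):

```python
import os

def read_uncached(path, chunk=1 << 20):
    """Read a file after asking the kernel (Linux) to drop its cached
    pages first, so the read hits the disk rather than the page cache.
    Returns the number of bytes read."""
    fd = os.open(path, os.O_RDONLY)
    try:
        if hasattr(os, "posix_fadvise"):  # POSIX/Linux only
            # Evict this file's pages from the page cache before reading.
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        total = 0
        while True:
            buf = os.read(fd, chunk)
            if not buf:
                break
            total += len(buf)
        return total
    finally:
        os.close(fd)
```

This evicts only that one file's pages; wiping the whole page cache between runs requires root (echo 3 > /proc/sys/vm/drop_caches).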

Measure (profile) time spent in each target of a Makefile

牧云@^-^@ submitted on 2019-11-28 20:36:59
Is there a way to echo the (system, user, real) time spent in each target of a Makefile recursively when I do make all? I'd like to benchmark the compilation of a project in a more granular way than just time make all. Ideally, it would echo a tree of the executed targets, each one with the time spent in all its dependencies. It'd be great also if it could work with -j (parallel make). And by the way, my Makefile is non-recursive (it doesn't spawn another make instance for each main target). Thanks! GNU Make uses the $(SHELL) variable to execute commands in the targets. By default it is set to

Why does C# execute Math.Sqrt() more slowly than VB.NET?

烈酒焚心 submitted on 2019-11-28 20:02:47
Background While running benchmark tests this morning, my colleagues and I discovered some strange things concerning performance of C# code vs. VB.NET code. We started out comparing C# vs. Delphi Prism calculating prime numbers, and found that Prism was about 30% faster. I figured CodeGear optimized code more when generating IL (the exe was about twice as big as C#'s and had all sorts of different IL in it.) I decided to write a test in VB.NET as well, assuming that Microsoft's compilers would end up writing essentially the same IL for each language. However, the result there was more shocking

Java vs C#: Are there any studies that compare their execution speed?

旧街凉风 submitted on 2019-11-28 18:33:34
Taking out all of the obvious caveats related to benchmarks and benchmark comparison, is there any study (an array of well-documented and unbiased tests) that compares the average execution speed of the two mentioned languages? Thanks! The best comparison that I am aware of is The Computer Language Benchmarks Game. It compares speed, memory use and source code size for (currently) 10 benchmarks across a large number of programming languages. The implementations of the benchmarks are user-submitted and there are continuous improvements, so the standings shift around somewhat. The comparison is

Looking for benchmarking code snippet (c++)

女生的网名这么多〃 submitted on 2019-11-28 17:53:43
Some loading routines in my program take too long to complete. I want a quick, small snippet for checking how long a function took to execute. By small I mean "preferably without 3rd-party libraries". Maybe something as simple as taking the system time? start = current_system_time() load_something() delta = current_system_time()-start log_debug("load took "+delta) Edit: The target OS in question is Windows. Answer: Yes. Caveat: that WON'T work in multithreaded code or on multi-core machines; you need a robust wall-clock timer. So I recommend you use OMP's wall clock. OMP is included with VC and
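The pseudocode above is sound as long as current_system_time() is a monotonic clock: in modern C++ that role is played by std::chrono::steady_clock, and omp_get_wtime() is the OpenMP wall clock the answer refers to. The same pattern, sketched in Python, whose time.perf_counter() is likewise a monotonic high-resolution clock (the helper name and workload are mine):

```python
import time

def timed(fn, *args, **kwargs):
    """Call fn and return (result, elapsed seconds), measured with a
    monotonic clock that system clock adjustments cannot skew."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in for the slow loading routine being profiled.
result, delta = timed(sum, range(1_000_000))
print(f"load took {delta:.4f}s")
```

The key design point, in any language, is to avoid the wall-clock-of-day (it can jump backwards on NTP adjustments) and per-process CPU timers like clock() (they ignore time spent blocked on I/O, which is exactly what a loading routine does).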