benchmarking

Benchmarking: using `expression`, `quote`, or neither

99封情书 submitted on 2019-11-29 09:54:43
Generally, when I run benchmarks, I wrap my statements in `expression`. Recently, it was suggested to either (a) not do so or (b) use `quote` instead of `expression`. I find two advantages to wrapping the statements: compared to entire statements they are more easily swapped out, and I can `lapply` over a list of inputs and compare the results. However, in exploring the different methods, I noticed a discrepancy between the three methods (wrapping in `expression`, wrapping in `quote`, or not wrapping at all). The question is: why the discrepancy? (It appears that wrapping in `quote` does not actually…
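A minimal R sketch of the three wrapping styles, timed with `system.time()` (the summed vector is a placeholder workload; this illustrates the pitfall, it is not the original benchmark):

```r
# Placeholder workload: sum a large numeric vector.
x <- runif(1e6)

e <- expression(sum(x))  # an expression object; eval() runs its contents
q <- quote(sum(x))       # an unevaluated call; eval() runs it

system.time(for (i in 1:1000) eval(e))  # evaluates sum(x) each iteration
system.time(for (i in 1:1000) eval(q))  # likewise
system.time(for (i in 1:1000) sum(x))   # bare statement

# The pitfall hinted at above: timing quote(...) or expression(...) without
# eval() only measures constructing the unevaluated object -- sum() never runs.
system.time(for (i in 1:1000) quote(sum(x)))       # near zero
system.time(for (i in 1:1000) expression(sum(x)))  # near zero
```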

Is the .NET 4.0 runtime slower than the .NET 2.0 runtime?

拈花ヽ惹草 submitted on 2019-11-29 06:24:16
Question: After I upgraded my projects to .NET 4.0 (with VS2010), I realized that they run slower than they did under .NET 2.0 (VS2008). So I decided to benchmark a simple console application in both VS2008 and VS2010 with various target frameworks:

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

namespace RuntimePerfTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(Assembly.GetCallingAssembly().ImageRuntimeVersion);
            Stopwatch sw = new Stopwatch();
            while (true)
            {
                sw…
```
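The excerpt cuts off inside the `while` loop. A hypothetical completion of that loop; the original benchmark's body is not shown above, so a simple integer sum stands in as placeholder work:

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

namespace RuntimePerfTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(Assembly.GetCallingAssembly().ImageRuntimeVersion);
            Stopwatch sw = new Stopwatch();
            while (true)
            {
                sw.Reset();
                sw.Start();
                long sum = 0;
                for (int i = 0; i < 10000000; i++) sum += i; // placeholder workload
                sw.Stop();
                Console.WriteLine(sw.ElapsedMilliseconds + " ms (sum=" + sum + ")");
            }
        }
    }
}
```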

Are jQuery's :first and :eq(0) selectors functionally equivalent?

本秂侑毒 submitted on 2019-11-29 05:40:06
I'm not sure whether to use `:first` or `:eq(0)` in a selector. I'm pretty sure that they'll always return the same object, but is one speedier than the other? I'm sure someone here must have benchmarked these selectors before, and I'm not really sure of the best way to test whether one is faster. Update: here's the bench I ran:

```js
/* start bench */
for (var count = 0; count < 5; count++) {
    var i = 0, limit = 10000;
    var start, end;
    start = new Date();
    for (i = 0; i < limit; i++) {
        var $radeditor = $thisFrame.parents("div.RadEditor.Telerik:eq(0)");
    }
    end = new Date();
    alert("div.RadEditor.Telerik:eq(0) : " +…
```
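For what it's worth, a simpler way to compare the two selectors is `console.time` rather than `Date` arithmetic and `alert`. A sketch, where the selector `div.item` and the run count are placeholders and jQuery is assumed to be loaded:

```js
// Compare :eq(0) and :first on the same placeholder selector.
var runs = 10000;

console.time(":eq(0)");
for (var i = 0; i < runs; i++) { $("div.item:eq(0)"); }
console.timeEnd(":eq(0)");

console.time(":first");
for (var i = 0; i < runs; i++) { $("div.item:first"); }
console.timeEnd(":first");

// .first() skips jQuery's non-standard pseudo-selector parsing entirely,
// so it is usually faster than either of the above.
console.time(".first()");
for (var i = 0; i < runs; i++) { $("div.item").first(); }
console.timeEnd(".first()");
```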

What do allocs/op and B/op mean in a Go benchmark?

落爺英雄遲暮 submitted on 2019-11-29 04:56:48
Question: When I run my benchmarks with `go test -v -bench=. -benchmem`, I see the following results:

```
f1    10000    120860 ns/op    2433 B/op    28 allocs/op
f2    10000    120288 ns/op    2288 B/op    26 allocs/op
```

Based on my understanding: 10000 is the number of iterations of `for i := 0; i < b.N; i++ {`, and XXX ns/op is the approximate time one iteration took to complete. But even after reading the docs, I cannot find out what B/op and allocs/op mean. My guess is that allocs/op has something to do with garbage collection and…
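For reference, `-benchmem` adds those two columns: B/op is the average number of bytes allocated from the heap per iteration, and allocs/op is the average number of distinct heap allocations per iteration. Neither measures garbage collection directly, though both drive GC pressure. A minimal sketch that should report roughly 1024 B/op and 1 allocs/op (file and function names are illustrative):

```go
// bench_test.go -- run with: go test -bench=. -benchmem
package main

import "testing"

// Package-level sink forces the slice to escape to the heap,
// so the allocation is not optimized onto the stack.
var sink []byte

func BenchmarkAlloc(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = make([]byte, 1024) // one 1 KiB heap allocation per iteration
	}
}
```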

Does anyone have considerable proof that CHAR is faster than VARCHAR?

一笑奈何 submitted on 2019-11-29 04:02:26
Any benchmark, graph, anything at all? It's all academic and theoretical across the web. OK, it's not the first time this question has been asked, and the answers all say that using CHAR results in faster selects; I even read it in MySQL books. But I have not come across any benchmark that proves it. Can anyone shed some light on this? This is simple logic; to simplify, I'll take the example of a CSV file... would it be faster to search in this line

1231;231;32345;21312;23435552;1231;1;243;211;3525321;44343112;

or this one

12;23;43;54;56;76;54;83;45;91;28;92

as long as you define your…
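A sketch of the kind of evidence being asked for, in MySQL (table and column names are invented; `SQL_NO_CACHE` applies to MySQL 5.x, and results will vary with engine, row format, and data):

```sql
-- Two otherwise identical tables, differing only in the column type.
CREATE TABLE t_char    (code CHAR(32)    NOT NULL, KEY (code)) ENGINE=InnoDB;
CREATE TABLE t_varchar (code VARCHAR(32) NOT NULL, KEY (code)) ENGINE=InnoDB;

-- Load both with the same rows, then time identical lookups:
SELECT SQL_NO_CACHE COUNT(*) FROM t_char    WHERE code = 'abc123';
SELECT SQL_NO_CACHE COUNT(*) FROM t_varchar WHERE code = 'abc123';

-- Repeat many times (e.g. with mysqlslap or a client-side loop) and
-- compare averages; a single run proves nothing.
```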

Why is the push method significantly slower than setting values via array indices in JavaScript?

只谈情不闲聊 submitted on 2019-11-29 03:58:05
I pretty much don't understand why this test, http://jsperf.com/push-method-vs-setting-via-key , shows that `a.push(Math.random());` is over ten times slower than `a[i] = Math.random();`. Could you explain why this is the case? What magic does `push` do that makes it so slow (or so slow compared to the other valid way of doing this)?

EDIT NOTE: The push test is biased. I increase the size of the array every iteration! Read the accepted answer carefully!

Bergi replies: "Could you explain why this is the case?" Because your test is flawed. The `push` always appends to the existing `a` array, making it ever larger, while the…
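A sketch of the fix the accepted answer implies: recreate the array inside every timed run, so `push` is not appending to an ever-growing array and both variants do the same amount of work (run counts are arbitrary):

```js
var n = 100000;

console.time("push");
for (var run = 0; run < 100; run++) {
  var a = [];                                  // fresh array each run
  for (var i = 0; i < n; i++) a.push(Math.random());
}
console.timeEnd("push");

console.time("index");
for (var run = 0; run < 100; run++) {
  var b = [];                                  // fresh array each run
  for (var i = 0; i < n; i++) b[i] = Math.random();
}
console.timeEnd("index");
```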

Why is Python so much slower on Windows?

对着背影说爱祢 submitted on 2019-11-29 03:35:12
I learned about pystones today, so I decided to see what my various environments were like. I ran pystones on my laptop, which runs Windows on the bare metal, and got these results:

```
Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from test import pystone
>>> for i in range(0,10):
...     pystone.pystones()
...
(1.636334799754252, 30556.094026423627)
(2.1157907919853756, 23631.82607155689)
(2.5324817108003685, 19743.479207278437)
(2.541626695533182, 19672.4405231788)
(2.536022267835051…
```
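For readers unfamiliar with the output: each tuple is (benchmark time in seconds, pystones per second). A slightly more readable variant of the same loop (Python 2, since `test.pystone` ships with CPython 2):

```python
# Python 2: test.pystone ships with CPython. pystones() returns a
# (benchmark time in seconds, pystones per second) tuple.
from test import pystone

for _ in range(10):
    benchtime, stones = pystone.pystones()
    print "%.2f s -> %.0f pystones/s" % (benchtime, stones)
```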

High-Performance Timer vs StopWatch

喜夏-厌秋 submitted on 2019-11-29 00:58:30
Does anyone know whether the HiPerfTimer or the Stopwatch class is better for benchmarking, and why?

Shay Erlichmen replies: Stopwatch is based on a high-resolution timer (where available); you can check that with `IsHighResolution`. They are the same when it comes to high-resolution timing. Both use this:

```csharp
[DllImport("Kernel32.dll")]
private static extern bool QueryPerformanceCounter(out long PerformanceCount);
```

and this:

```csharp
[DllImport("Kernel32.dll")]
private static extern bool QueryPerformanceFrequency(out long Frequency);
```

to do the underlying timing. (You can verify this with Reflector.NET.) I'd use Stopwatch…
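A minimal self-contained Stopwatch sketch showing the properties mentioned; the `Thread.Sleep` is a placeholder for the code under test:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class StopwatchDemo
{
    static void Main()
    {
        // True when the OS exposes a high-resolution performance counter.
        Console.WriteLine("IsHighResolution: " + Stopwatch.IsHighResolution);
        Console.WriteLine("Frequency (ticks/s): " + Stopwatch.Frequency);

        Stopwatch sw = Stopwatch.StartNew();
        Thread.Sleep(100);   // placeholder for the code under test
        sw.Stop();
        Console.WriteLine("Elapsed: " + sw.Elapsed.TotalMilliseconds + " ms");
    }
}
```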

Clojure number crunching performance

此生再无相见时 submitted on 2019-11-29 00:18:15
Question: I'm not sure whether this belongs on StackOverflow or in the Clojure Google group, but the group seems to be busy discussing numeric improvements for Clojure 1.2, so I'll try here: http://shootout.alioth.debian.org/ has a number of performance benchmarks for various languages. I noticed that Clojure was missing, so I made a Clojure version of the n-body problem. The fastest code I was able to produce can be found here, and benchmarking it seems to say that for number crunching Clojure…
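Not the author's linked code, but the usual levers for numeric Clojure of that era are primitive array type hints and unchecked integer operations, which let hot loops avoid boxed arithmetic. A small illustrative sketch:

```clojure
;; Illustrative only -- not the n-body code referenced above.
;; ^doubles hints the argument as a primitive double array, and the
;; loop/recur with unchecked-inc keeps the index arithmetic primitive.
(defn sum-squares [^doubles xs]
  (let [n (alength xs)]
    (loop [i 0, acc 0.0]
      (if (< i n)
        (let [x (aget xs i)]
          (recur (unchecked-inc i) (+ acc (* x x))))
        acc))))

(sum-squares (double-array [1.0 2.0 3.0]))  ;; => 14.0
```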

What does CPU Time for a Hadoop Job signify?

為{幸葍}努か submitted on 2019-11-29 00:16:16
I am afraid I do not understand the timing results of a Map-Reduce job. For example, a job I am running gives me the following results from the job tracker:

```
Finished in: 1mins, 39sec

CPU time spent (ms):
  Map        Reduce     Total
  150,460    152,030    302,490
```

But how is "CPU time spent" being measured, and what does it signify? Is this the total cumulative time spent in each of the mappers and reducers assigned to the job? Is it possible to measure other times from the framework, such as time for shuffle, sort, partition, etc.? If so, how? A…
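The figure the JobTracker displays as "CPU time spent (ms)" is generally the job's CPU_MILLISECONDS counter, aggregated over all task attempts. A sketch of reading it programmatically with the `org.apache.hadoop.mapreduce` API (the completed `Job` handle is assumed to come from the caller; counter classes can differ across Hadoop versions):

```java
import java.io.IOException;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.TaskCounter;

class CpuTimeReport {
    // Print the aggregate CPU time counter of a completed MapReduce job.
    static void printCpuTime(Job job) throws IOException {
        long cpuMs = job.getCounters()
                        .findCounter(TaskCounter.CPU_MILLISECONDS)
                        .getValue();
        System.out.println("CPU time spent (ms): " + cpuMs);
    }
}
```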