benchmarking

Why is dumping with `pickle` much faster than `json`?

。_饼干妹妹 submitted on 2019-12-02 06:03:47
Question: This is for Python 3.6. Edited and removed a lot of stuff that turned out to be irrelevant. I had thought json was faster than pickle, and other answers and comments on Stack Overflow make it seem like a lot of other people believe this as well. Is my test kosher? The disparity is much larger than I expected. I get the same results testing on very large objects.

    import json
    import pickle
    import timeit

    file_name = 'foo'
    num_tests = 100000
    obj = {1: 1}

    command = 'pickle.dumps(obj)'
    setup = 'from __main__ import pickle, obj'
    result = timeit.timeit(command, setup=setup, number=num_tests)
    print(result)
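
A minimal sketch of the same measurement run side by side for both serializers (the json setup line mirrors the pickle one from the question and is an assumption, since the excerpt only shows the pickle command):

    import json
    import pickle
    import timeit

    obj = {1: 1}
    num_tests = 100000

    for mod in ('pickle', 'json'):
        command = mod + '.dumps(obj)'                    # e.g. pickle.dumps(obj)
        setup = 'from __main__ import ' + mod + ', obj'  # json setup is assumed
        result = timeit.timeit(command, setup=setup, number=num_tests)
        print('%s: %.3f s' % (mod, result))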

Rust benchmark optimized out

蹲街弑〆低调 submitted on 2019-12-02 03:45:28
I am trying to benchmark getting keys from a Rust hash map. I have the following benchmark:

    #[bench]
    fn rust_get(b: &mut Bencher) {
        let (hash, keys) = get_random_hash::<HashMap<String, usize>>(
            &HashMap::with_capacity,
            &rust_insert_fn,
        );
        let mut keys = test::black_box(keys);
        b.iter(|| {
            for k in keys.drain(..) {
                hash.get(&k);
            }
        });
    }

where get_random_hash is defined as:

    fn get_random_hash<T>(
        new: &Fn(usize) -> T,
        insert: &Fn(&mut T, String, usize) -> (),
    ) -> (T, Vec<String>) {
        let mut keys = Vec::with_capacity(HASH_SIZE);
        let mut hash = new(HASH_CAPACITY);
        for i in 0..HASH_SIZE {
            let k: String …
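
Two things commonly make a loop like this measure nothing: the unused result of `hash.get` lets the optimizer drop the call, and `keys.drain(..)` empties the vector on the first `b.iter` pass, so every later pass loops over an empty vector. A minimal sketch under those assumptions (nightly Rust with the `test` feature; the sizes and the function name are illustrative, not from the question):

    #![feature(test)] // the benchmark harness is nightly-only
    extern crate test;

    use std::collections::HashMap;
    use test::{black_box, Bencher};

    #[bench]
    fn rust_get_not_elided(b: &mut Bencher) {
        // Illustrative sizes; the question's HASH_SIZE/HASH_CAPACITY are not shown.
        let keys: Vec<String> = (0..1024).map(|i| i.to_string()).collect();
        let mut hash: HashMap<String, usize> = HashMap::with_capacity(1024);
        for (i, k) in keys.iter().enumerate() {
            hash.insert(k.clone(), i);
        }
        b.iter(|| {
            // Iterate instead of draining so each pass does real work, and
            // feed every lookup result to black_box so it counts as used.
            for k in keys.iter() {
                black_box(hash.get(k));
            }
        });
    }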

Global / local environment affects Haskell's Criterion benchmark results

≯℡__Kan透↙ submitted on 2019-12-02 03:44:22
Question: We're benchmarking some Haskell code in our company and we've just hit a very strange case. Here is code which benchmarks the same thing two times. The former uses a Criterion.env which is created once for all the tests; the latter creates an env for every test. This is the only difference, yet the one which creates an env for each bench runs 5 times faster. Does anyone know what can cause this? Minimal example:

    module Main where

    import Prelude
    import Control.Monad
    import qualified Data.Vector.Storable.Mutable as Vector
    import qualified Data.Vector.Storable as Vector
    import Data.Vector …
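
A minimal sketch of the two shapes being compared (the payload here is an illustrative Int list rather than the question's storable vectors):

    module Main where

    import Criterion.Main

    setupEnv :: IO [Int]
    setupEnv = return [1 .. 100000]

    main :: IO ()
    main = defaultMain
      [ -- one env, built once and shared by every bench in the group
        env setupEnv $ \xs -> bgroup "shared env"
          [ bench "sum"    $ nf sum xs
          , bench "length" $ nf length xs
          ]
      , -- a fresh env built for each bench
        bgroup "env per bench"
          [ env setupEnv $ \xs -> bench "sum"    $ nf sum xs
          , env setupEnv $ \xs -> bench "length" $ nf length xs
          ]
      ]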

PyPy displaying inaccurate benchmark results?

会有一股神秘感。 submitted on 2019-12-02 03:31:11
Question: I was working on Project Euler and wondered if I could speed up my solution using PyPy. However, I found the results quite disappointing, as it took more time to compute.

    d:\projeuler>pypy problem204.py
    3462.08630405 mseconds

    d:\projeuler>python problem204.py
    1823.91602542 mseconds

Since the mseconds output was calculated using Python's time module, I ran it again using the built-in benchmark commands:

    d:\projeuler>pypy -mtimeit -s "import problem204" "problem204._main()"
    10 loops, best of 3: 465 …
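
PyPy's JIT only pays off after the hot loops have been compiled, so a single cold run timed with the time module charges all of the warm-up to the measurement. A minimal sketch of a warm-up-aware harness (`_main` is the entry point named in the question; the workload body is a stand-in):

    import time

    def _main():
        # stand-in workload; the real problem204 solution is not shown
        return sum(i * i for i in range(10 ** 6))

    def best_of(fn, repeats=5):
        best = float('inf')
        for _ in range(repeats):
            start = time.time()
            fn()
            best = min(best, time.time() - start)
        return best * 1000.0  # milliseconds

    _main()  # warm-up pass: give the JIT a chance to compile the hot loops
    print('%.8f mseconds' % best_of(_main))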

Measuring the time of PHP scripts - Using $_SERVER['REQUEST_TIME']

ぃ、小莉子 submitted on 2019-12-02 02:27:49
Question: Are these methods a reliable way to measure a script:

    $time = ($_SERVER['REQUEST_TIME_FLOAT'] - $_SERVER['REQUEST_TIME']);

or

    $time = (microtime(true) - $_SERVER['REQUEST_TIME_FLOAT']);

Which one should be used? And what is the difference between them? They return very different measurements.

Answer 1:

    $time = ($_SERVER['REQUEST_TIME_FLOAT'] - $_SERVER['REQUEST_TIME']);

This will never give you the execution time of your PHP script, because both values store the start of the request. The…
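
A minimal runnable sketch of the difference (the usleep stands in for real work):

    <?php
    // REQUEST_TIME is REQUEST_TIME_FLOAT truncated to whole seconds, so their
    // difference is only the sub-second offset at which the request started,
    // never the script's running time.
    $startOffset = $_SERVER['REQUEST_TIME_FLOAT'] - $_SERVER['REQUEST_TIME'];

    usleep(50000); // simulate ~50 ms of work

    // Elapsed time needs a "now" reading, e.g. microtime(true).
    $elapsed = microtime(true) - $_SERVER['REQUEST_TIME_FLOAT'];
    printf("start offset: %.6f s, elapsed: %.6f s\n", $startOffset, $elapsed);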

Are Java 6's performance improvements in the JDK, JVM, or both?

淺唱寂寞╮ submitted on 2019-12-02 00:05:26
I've been wondering about the performance improvements touted in Java SE 6 - are they in the compiler or the runtime? Put another way, would a Java 5 application compiled by JDK 6 see an improvement when run under Java SE 5 (indicating improved compiler optimization)? Would a Java 5 application compiled by JDK 5 see an improvement when run under Java SE 6 (indicating improved runtime optimization)? I've noticed that compiling under JDK 6 takes almost twice as long as it did under JDK 5 for the exact same codebase; I'm hoping that at least some of that extra time is being spent on compiler optimizations, hopefully…
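
One way to separate the two effects is to compile a single class once and time the identical bytecode under each runtime. A minimal sketch (the class name and workload are illustrative):

    // Bench.java -- compile once (e.g. javac Bench.java under JDK 5), then run
    // the same .class file under the 1.5 and 1.6 JREs.  Any speed difference
    // must come from the runtime, since the bytecode is byte-for-byte identical.
    public class Bench {
        public static void main(String[] args) {
            long best = Long.MAX_VALUE;
            for (int run = 0; run < 5; run++) { // repeat so the JIT can warm up
                long start = System.nanoTime();
                long sum = 0;
                for (int i = 0; i < 100000000; i++) {
                    sum += i;
                }
                long elapsed = System.nanoTime() - start;
                System.out.println("run " + run + ": " + elapsed / 1000000
                        + " ms (sum=" + sum + ")");
                if (elapsed < best) best = elapsed;
            }
            System.out.println("best: " + best / 1000000 + " ms");
        }
    }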

Benchmark C++ vs Java: unrealistic results

浪尽此生 submitted on 2019-12-01 20:49:07
I did a simple test. I know C++ is faster, but the results of my test are unrealistic. The C++ code is:

    #include <stdio.h>
    #include <windows.h>

    unsigned long long s(unsigned long long n) {
        unsigned long long s = 0;
        for (unsigned long long i = 0; i < n; i++)
            s += i;
        return s;
    }

    int main() {
        LARGE_INTEGER freq, start, end;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);
        printf("%llu\n", s(1000000000));
        QueryPerformanceCounter(&end);
        double d = (double) (end.QuadPart - start.QuadPart) / freq.QuadPart * 1000.0;
        printf("Delta: %f\n", d);
        return 0;
    }

The Java code is:

    public class …
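
With a constant argument and a pure loop, the optimizer is free to compute s(1000000000) at compile time via the closed form n*(n-1)/2, which produces exactly this kind of unrealistic timing. A hedged sketch of a harder-to-optimize variant (portable std::chrono timing stands in for the Windows-only QueryPerformanceCounter):

    #include <chrono>
    #include <cstdio>
    #include <cstdlib>

    unsigned long long s(unsigned long long n) {
        // volatile forces each iteration to really happen; some compilers can
        // otherwise fold the loop into the closed form even for a runtime n.
        volatile unsigned long long total = 0;
        for (unsigned long long i = 0; i < n; i++)
            total += i;
        return total;
    }

    int main(int argc, char** argv) {
        // Take n from the command line so it is unknown at compile time.
        unsigned long long n =
            argc > 1 ? std::strtoull(argv[1], NULL, 10) : 1000000000ULL;
        auto start = std::chrono::steady_clock::now();
        unsigned long long result = s(n);
        auto end = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(end - start).count();
        printf("%llu\nDelta: %f ms\n", result, ms);
        return 0;
    }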

Memory benchmark plot: understanding cache behaviour

╄→尐↘猪︶ㄣ submitted on 2019-12-01 18:59:55
I've tried every kind of reasoning I could possibly come up with, but I don't really understand this plot. It basically shows the performance of reading and writing arrays of different sizes with different strides. I understand that for a small stride like 4 bytes I read every cell in the cache, so I get good performance. But what happens when I have a 2 MB array and a 4 KB stride? Or a 4 MB array and a 4 KB stride? Why is the performance so bad? Finally, why is performance decent when I have a 1 MB array and the stride is 1/8 of its size, worse when it is 1/4 of the size, and then at…
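
A sketch of the kind of loop behind such plots (sizes, strides, and pass count are illustrative): small strides reuse each cache line many times, strides past the line size miss on nearly every access, and power-of-two strides that are a large fraction of the array size can map most accesses onto the same few cache sets.

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        const size_t size = 4u * 1024 * 1024;  // 4 MB buffer
        std::vector<unsigned char> buf(size, 1);
        unsigned long long sink = 0;           // consume the reads so they are kept
        for (size_t stride = 4; stride <= 4096; stride *= 2) {
            const int passes = 16;
            auto start = std::chrono::steady_clock::now();
            for (int p = 0; p < passes; p++)
                for (size_t i = 0; i < size; i += stride)
                    sink += buf[i];
            auto end = std::chrono::steady_clock::now();
            double ns = std::chrono::duration<double, std::nano>(end - start).count();
            double accesses = double(passes) * double(size / stride);
            printf("stride %5zu: %6.2f ns/access\n", stride, ns / accesses);
        }
        printf("(checksum: %llu)\n", sink);
        return 0;
    }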