benchmarking

Benchmarking: using `expression` `quote` or neither

一世执手 Submitted on 2019-11-28 03:16:32
Question: Generally, when I run benchmarks, I wrap my statements in expression. Recently, it was suggested to either (a) not do so or (b) use quote instead of expression. I find two advantages to wrapping the statements: compared to entire statements, they are more easily swapped out, and I can lapply over a list of inputs and compare the results. However, in exploring the different methods, I noticed a discrepancy between the three approaches (wrapping in expression, wrapping in quote, or not wrapping…
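For context, a minimal R sketch (mine, not the asker's; it assumes the microbenchmark package) of the three wrapping styles being compared. The usual source of the discrepancy is the extra eval() call that the wrapped forms need:

```r
library(microbenchmark)     # assumed to be installed

x <- runif(1e5)
e <- expression(sum(x^2))   # expression object
q <- quote(sum(x^2))        # unevaluated call

microbenchmark(
  bare       = sum(x^2),    # statement used directly
  expression = eval(e),     # wrapped in expression(), evaluated explicitly
  quoted     = eval(q),     # wrapped in quote(), evaluated explicitly
  times = 100
)
```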

How to use python timeit when passing variables to functions?

此生再无相见时 Submitted on 2019-11-28 03:01:35
I'm struggling with timeit and was wondering if anyone had any tips. Basically I have a function (that I pass a value to) whose speed I want to test, and I created this: if __name__=='__main__': from timeit import Timer t = Timer(superMegaIntenseFunction(10)) print t.timeit(number=1) — but when I run it, I get weird errors coming from the timeit module: ValueError: stmt is neither a string nor callable. If I run the function on its own, it works fine. It's when I wrap it in the timeit module that I get the errors (I have tried with double quotes and without; same output). Any…
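The error comes from passing the result of superMegaIntenseFunction(10) to Timer rather than something it can call. A minimal sketch (Python 3 syntax, with a placeholder function body) of the two usual fixes:

```python
from timeit import Timer

def superMegaIntenseFunction(x):
    # placeholder workload standing in for the real function
    return sum(i * x for i in range(10000))

if __name__ == '__main__':
    # Fix 1: pass a zero-argument callable, not the function's return value
    t = Timer(lambda: superMegaIntenseFunction(10))
    print(t.timeit(number=1))

    # Fix 2: pass the statement as a string and import the name via setup
    t = Timer('superMegaIntenseFunction(10)',
              setup='from __main__ import superMegaIntenseFunction')
    print(t.timeit(number=1))
```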

Why is splitting a string slower in C++ than Python?

自闭症网瘾萝莉.ら Submitted on 2019-11-28 02:45:25
I'm trying to convert some code from Python to C++ in an effort to gain a little bit of speed and sharpen my rusty C++ skills. Yesterday I was shocked when a naive implementation of reading lines from stdin was much faster in Python than in C++ (see this). Today, I finally figured out how to split a string in C++ with merging delimiters (similar semantics to Python's split()), and am now experiencing déjà vu! My C++ code takes much longer to do the work (though not an order of magnitude more, as was the case for yesterday's lesson). Python code: #!/usr/bin/env python from __future__ import print…
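For reference, a small sketch (my own, not the asker's code) of a whitespace split that merges delimiters without going through an istringstream, which is where naive C++ versions often lose time to extra allocations and copies:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Split on runs of spaces/tabs, skipping empty tokens (merged delimiters).
std::vector<std::string> split(const std::string& s) {
    std::vector<std::string> out;
    std::size_t i = 0;
    while (i < s.size()) {
        while (i < s.size() && (s[i] == ' ' || s[i] == '\t')) ++i;  // skip delimiters
        std::size_t start = i;
        while (i < s.size() && s[i] != ' ' && s[i] != '\t') ++i;    // scan one token
        if (i > start) out.emplace_back(s, start, i - start);       // substring, one copy
    }
    return out;
}

int main() {
    for (const auto& tok : split("foo   bar\tbaz"))
        std::cout << tok << '\n';
}
```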

How much faster is Redis than mongoDB?

拈花ヽ惹草 Submitted on 2019-11-28 02:33:41
It's widely mentioned that Redis is "Blazing Fast" and MongoDB is fast too. But I'm having trouble finding actual numbers comparing the two. Given similar configurations, features and operations (and maybe showing how the factor changes with different configurations and operations), is Redis 10x faster, 2x faster, or 5x faster? I'm ONLY speaking of performance. I understand that MongoDB is a different tool and has a richer feature set. This is not the "Is MongoDB better than Redis" debate. I'm asking, by what margin does Redis outperform MongoDB? At this point, even cheap…
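A rough sketch of the kind of like-for-like micro-benchmark being asked for: same payload, same number of round trips, local servers. The package names (redis, pymongo), connection defaults, and payload size are assumptions, and the resulting ratio depends heavily on configuration (persistence settings, write concern, pipelining, and so on):

```python
import time
import redis                      # assumes redis-py installed, server on localhost:6379
import pymongo                    # assumes pymongo installed, server on localhost:27017

N = 10_000
payload = "x" * 100

r = redis.Redis(host="localhost", port=6379)
coll = pymongo.MongoClient("localhost", 27017)["bench"]["kv"]

start = time.perf_counter()
for i in range(N):
    r.set(f"key:{i}", payload)            # one round trip per write
redis_secs = time.perf_counter() - start

start = time.perf_counter()
for i in range(N):
    coll.insert_one({"_id": i, "v": payload})
mongo_secs = time.perf_counter() - start

print(f"redis: {redis_secs:.2f}s  mongodb: {mongo_secs:.2f}s  "
      f"ratio: {mongo_secs / redis_secs:.1f}x")
```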

What is an idiomatic way to have shared utility functions for integration tests and benchmarks?

删除回忆录丶 Submitted on 2019-11-28 01:30:30
I have a Rust project with both integration tests (in the /tests dir) and benchmarks (in the /benches dir). There are a couple of utility functions that I need in tests and benches, but they aren't related to my crate itself, so I can't just put them in the /utils dir. What is the idiomatic way to handle this situation? Answer: Create a shared crate (preferred). As stated in the comments, create a new crate. You don't have to publish the crate to crates.io. Just keep it as a local, unpublished crate inside your project and mark it as a development-only dependency: . ├── Cargo.toml ├── src │ └── lib.rs ├──…
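To illustrate the "development-only dependency" part, a hedged sketch of what the main crate's Cargo.toml might contain; the helper crate name test-utils and its path are hypothetical:

```toml
# Main crate's Cargo.toml: the helper crate lives in the same repository,
# is never published, and is visible only to tests/ and benches/.
[dev-dependencies]
test-utils = { path = "test-utils" }
```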

std::chrono::clock, hardware clock and cycle count

本秂侑毒 Submitted on 2019-11-28 01:30:22
std::chrono offers several clocks to measure time. At the same time, I guess the only way a CPU can evaluate time is by counting cycles. Question 1: Does a CPU or a GPU have any other way to evaluate time than by counting cycles? If that is the case, because the way a computer counts cycles will never be as precise as an atomic clock, it means that a "second" (period = std::ratio<1>) for a computer can actually be shorter or longer than an actual second, causing differences in the long run between time measured by the computer clock and, say, GPS. Question 2: Is that correct? Some…
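On the measurement side of the question, a minimal sketch of timing a placeholder workload with std::chrono::steady_clock, the monotonic clock normally used for benchmarking (the workload and durations here are only illustration):

```cpp
#include <chrono>
#include <iostream>

int main() {
    auto start = std::chrono::steady_clock::now();

    long sum = 0;                                   // placeholder workload
    for (long i = 0; i < 10'000'000; ++i) sum += i;

    auto elapsed = std::chrono::steady_clock::now() - start;
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(elapsed);
    std::cout << "sum=" << sum << ", elapsed: " << us.count() << " us\n";
}
```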

Java benchmarking tool

家住魔仙堡 Submitted on 2019-11-27 22:25:57
I have written a small Java application for which I need to obtain performance metrics such as memory usage, running time, etc. Is there any simple-to-use performance measurement tool available? YourKit is pretty good (free 30-day trial). Eclipse also has built-in TPTP tools. Apache JMeter has a ton of features for benchmarking HTTP requests, JDBC calls, web services, JMS, mail, regular Java requests, etc. For runtime metrics, use any profiler such as VisualVM, the NetBeans Profiler, or the Eclipse TPTP tools. A profiler usually gives you more fine-grained metrics such as the runtime for…
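Alongside the profilers mentioned above, a quick-and-dirty sketch (class and method names made up for illustration) of collecting wall-clock time and an approximate heap delta from inside the application itself; a profiler or JMH will give far more reliable numbers:

```java
public class QuickMetrics {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();
        long t0 = System.nanoTime();

        workload();   // the code being measured

        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
        long heapDelta = (rt.totalMemory() - rt.freeMemory()) - heapBefore;
        System.out.println("elapsed: " + elapsedMs + " ms, approx heap delta: "
                + heapDelta + " bytes");
    }

    // placeholder workload
    static void workload() {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i * 31;
    }
}
```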

Does anyone have considerable proof that CHAR is faster than VARCHAR?

核能气质少年 Submitted on 2019-11-27 22:14:54
Question: Any benchmark, graph, anything at all? It's all academic and theoretical across the web. OK, it's not the first time this question has been asked; they all say that using CHAR results in faster selects. I even read it in MySQL books; it's all the same, but I have not come across any benchmark that proves it. Can anyone shed some light on this? Answer 1: This is simple logic. To simplify, I'll take the example of a CSV file... would it be faster to search in this line 1231;231;32345;21312;23435552…

How to benchmark Boost Spirit Parser?

拥有回忆 Submitted on 2019-11-27 21:21:12
I'm working on a compiler and I would like to improve its performance. I found that about 50% of the time is spent parsing the source files. As the source files are quite small and I do quite a lot of transformations afterwards, it seems to me that this can be improved. My parser is a Boost Spirit parser with a lexer (with lexer::pos_iterator) and I have a medium-sized grammar. I'm parsing the source into an AST. My problem is that I have no idea what takes the most time during parsing: copies of AST nodes, the lexer, the parser rules, or memory. I don't think it is an I/O problem since I'm working…
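One way to start narrowing this down is to time the phases separately on an in-memory buffer, so file I/O is excluded and per-run noise averages out. A generic sketch (the lambdas are placeholders where the real lexer-only and lexer-plus-parser calls would go):

```cpp
#include <chrono>
#include <iostream>
#include <string>

// Run a phase repeatedly and report the average wall-clock time in milliseconds.
template <class F>
double avg_millis(F&& phase, int runs = 100) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < runs; ++i)
        phase();
    std::chrono::duration<double, std::milli> total =
        std::chrono::steady_clock::now() - start;
    return total.count() / runs;
}

int main() {
    std::string source = "fn main() { return 1 + 2; }";   // sample input, already in memory
    // Replace the lambda bodies with the real calls to see which phase dominates.
    double lex_ms   = avg_millis([&] { /* run the lexer only over `source` */ });
    double parse_ms = avg_millis([&] { /* run lexer + parser into the AST */ });
    std::cout << "lex: " << lex_ms << " ms, lex+parse: " << parse_ms << " ms\n";
}
```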

How not to optimize away - mechanics of a folly function

半世苍凉 Submitted on 2019-11-27 21:12:31
Question: I was searching for a programming technique that would ensure variables used for benchmarking (without observable side effects) won't be optimized away by the compiler. This gives some info, but I ended up using folly and the following function: /** * Call doNotOptimizeAway(var) against variables that you use for * benchmarking but otherwise are useless. The compiler tends to do a * good job at eliminating unused variables, and this function fools * it into thinking var is in fact needed. */…
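A minimal sketch of the usual mechanism behind such a function (GCC/Clang inline assembly; an illustration of the technique, not folly's exact implementation): an empty asm statement that claims to read the variable, so the compiler must keep the computation that produced it.

```cpp
#include <cstdint>
#include <iostream>

// Empty inline-asm that pretends to read `value`: the compiler can no longer
// prove the value is unused, so the code that produced it is not eliminated.
template <class T>
inline void doNotOptimizeAway(T const& value) {
    asm volatile("" : : "g"(value) : "memory");
}

int main() {
    std::uint64_t sum = 0;
    for (std::uint64_t i = 0; i < 1'000'000; ++i)
        sum += i * i;            // would otherwise be dead code
    doNotOptimizeAway(sum);      // keeps the loop from being optimized out
    std::cout << "done\n";
}
```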