microbenchmark

Does Java cache the results of methods?

自古美人都是妖i submitted on 2019-12-05 10:41:40
I use JMH to estimate the cost of the operation. If you've never worked with JMH, don't worry: JMH simply launches the estimateOperation method many times and then reports the average time. Question (narrow): will this program calculate Math.cbrt(Integer.MAX_VALUE) on every invocation, or will it calculate it once and return a cached result afterwards?

    @GenerateMicroBenchmark
    public void estimateOperation() {
        calculate();
    }

    public double calculate() {
        // Math.cbrt returns a double, so the method must return double
        return Math.cbrt(Integer.MAX_VALUE);
    }

Question (broad): does the JVM ever cache the results of methods?

The method return value is never cached.
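Whether or not the JIT manages to fold the constant away, the usual JMH guidance is to read the input from a @State field and return the computed value so the harness consumes it. Here is a minimal sketch of that pattern, assuming the current org.openjdk.jmh.annotations API; the class and field names are illustrative, not from the original post:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class CbrtBenchmark {

    // Non-final field: the JIT cannot treat the input as a compile-time constant.
    int input = Integer.MAX_VALUE;

    @Benchmark
    public double cbrt() {
        // Returning the value makes JMH consume it, preventing dead-code elimination.
        return Math.cbrt(input);
    }
}
```

With the input held in a non-final field and the result returned, each invocation has to perform the cube-root computation, so the reported average reflects the operation itself rather than whatever the optimizer left behind.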

shade for parameter resource: Cannot find 'resource' in class org.apache.maven.plugins.shade.resource.ManifestResourceTransformer

馋奶兔 submitted on 2019-12-05 01:59:11
I'm working on a Maven project and I'm trying to integrate JMH benchmarking into it. The pom.xml of my Maven project...

    <parent>
        <groupId>platform</groupId>
        <artifactId>platform-root</artifactId>
        <version>3.0-SNAPSHOT</version>
        <relativePath>../../pom.xml</relativePath>
    </parent>
    <artifactId>platform-migration</artifactId>
    <packaging>jar</packaging>
    <name>Platform Migration</name>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compile.source>1.7</maven.compile.source>
        <maven.compile.target>1.7</maven.compile.target>
        <jmh.version>1.1.1</jmh.version>

System.arraycopy with constant length

不羁岁月 submitted on 2019-12-04 23:28:23
I'm playing around with JMH ( http://openjdk.java.net/projects/code-tools/jmh/ ) and I just stumbled on a strange result. I'm benchmarking ways to make a shallow copy of an array, and I can observe the expected results (that looping through the array is a bad idea, and that there is no significant difference between #clone(), System#arraycopy() and Arrays#copyOf(), performance-wise). Except that System#arraycopy() is one-quarter slower when the array's length is hard-coded... Wait, what? How can it be slower? Does anyone have an idea of what could be the cause? The results (throughput): #
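The excerpt stops before the benchmark source, so the following is only a rough sketch of the comparison being described: one variant takes the copy length from the array, the other hard-codes it as a constant. The class and method names are illustrative, not the original poster's:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class ArrayCopyBenchmark {

    static final int SIZE = 1024;
    int[] src;

    @Setup
    public void setup() {
        src = new int[SIZE];
    }

    @Benchmark
    public int[] copyWithFieldLength() {
        int[] dst = new int[src.length];
        // Length taken from the source array.
        System.arraycopy(src, 0, dst, 0, src.length);
        return dst;
    }

    @Benchmark
    public int[] copyWithConstantLength() {
        int[] dst = new int[SIZE];
        // Length hard-coded as a literal constant.
        System.arraycopy(src, 0, dst, 0, 1024);
        return dst;
    }
}
```

Both methods return the copy so the JIT cannot eliminate the work; any remaining difference between the two should then come from how the arraycopy call is compiled for a constant versus a loaded length.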

Why is String.strip() 5 times faster than String.trim() for a blank string in Java 11?

半腔热情 submitted on 2019-12-04 16:52:34
Question: I've encountered an interesting scenario. For some reason strip() against a blank string (one that contains only whitespace) is significantly faster than trim() in Java 11. Benchmark:

    public class Test {
        public static final String TEST_STRING = "   "; // 3 whitespaces

        @Benchmark
        @Warmup(iterations = 10, time = 200, timeUnit = MILLISECONDS)
        @Measurement(iterations = 20, time = 500, timeUnit = MILLISECONDS)
        @BenchmarkMode(Mode.Throughput)
        public void testTrim() {
            TEST_STRING.trim();
        }

        @Benchmark
        @Warmup
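One methodological point worth checking before reading too much into the numbers: the benchmark bodies above discard their results, which leaves the calls exposed to dead-code elimination. A minimal variant (my own sketch, not part of the original post) that returns the values so JMH consumes them:

```java
import org.openjdk.jmh.annotations.Benchmark;

public class StripVsTrim {

    public static final String TEST_STRING = "   "; // 3 whitespace characters

    @Benchmark
    public String testTrim() {
        // Returning the result prevents the JIT from discarding the call.
        return TEST_STRING.trim();
    }

    @Benchmark
    public String testStrip() {
        // String.strip() is available since Java 11, matching the question's setup.
        return TEST_STRING.strip();
    }
}
```

If the gap persists once the results are consumed, the difference lies in the two implementations themselves rather than in what the optimizer removed.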

My Python program executes faster than my Java version of the same program. What gives?

孤人 submitted on 2019-12-03 23:22:40
Update: 2009-05-29. Thanks for all the suggestions and advice. I used your suggestions to make my production code execute 2.5 times faster on average than my best result a couple of days ago. In the end I was able to make the Java code the fastest. Lessons: My example code below shows the insertion of primitive ints, but the production code is actually storing strings (my bad). When I corrected that, the Python execution time went from 2.8 seconds to 9.6. So right off the bat, the Java version was actually faster when storing objects. But it doesn't stop there. I had been executing the Java program as

What does autoplot.microbenchmark actually plot?

大憨熊 submitted on 2019-12-03 23:17:06
According to the docs, microbenchmark:::autoplot "Uses ggplot2 to produce a more legible graph of microbenchmark timings." Cool! Let's try the example code:

    library("microbenchmark")
    library("ggplot2")
    tm <- microbenchmark(rchisq(100, 0),
                         rchisq(100, 1),
                         rchisq(100, 2),
                         rchisq(100, 3),
                         rchisq(100, 5),
                         times = 1000L)
    autoplot(tm)

I don't see anything about the...squishy undulations in the documentation, but my best guess, from this answer by the function's creator, is that this is like a smoothed series of boxplots of the time taken to run, with the upper and lower quartiles connected over the body of the shape. Maybe? These

How can I find the missing value more concisely?

旧城冷巷雨未停 submitted on 2019-12-03 18:26:28
Question: The following code checks whether x and y are distinct values (the variables x, y, z can only have the values 'a', 'b', or 'c') and, if so, sets z to the third character:

    if x == 'a' and y == 'b' or x == 'b' and y == 'a':
        z = 'c'
    elif x == 'b' and y == 'c' or x == 'c' and y == 'b':
        z = 'a'
    elif x == 'a' and y == 'c' or x == 'c' and y == 'a':
        z = 'b'

Is it possible to do this in a more concise, readable and efficient way?

Answer 1:

    z = (set(("a", "b", "c")) - set((x, y))).pop()

I am assuming that one of the

Why is the standard R median function so much slower than a simple C++ alternative?

戏子无情 submitted on 2019-12-03 15:03:47
Question: I made the following implementation of the median in C++ and used it in R via Rcpp:

    // [[Rcpp::export]]
    double median2(std::vector<double> x) {
        double median;
        size_t size = x.size();
        sort(x.begin(), x.end());
        if (size % 2 == 0) {
            median = (x[size / 2 - 1] + x[size / 2]) / 2.0;
        } else {
            median = x[size / 2];
        }
        return median;
    }

If I subsequently compare the performance with the standard built-in R median function, I get the following results via microbenchmark:

    > x = rnorm(100)
    >

What can explain the huge performance penalty of writing a reference to a heap location?

跟風遠走 submitted on 2019-12-03 06:24:29
While investigating the subtler consequences of generational garbage collectors on application performance, I have hit a quite staggering discrepancy in the performance of a very basic operation – a simple write to a heap location – with respect to whether the value written is primitive or a reference. The microbenchmark:

    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @BenchmarkMode(Mode.AverageTime)
    @Warmup(iterations = 1, time = 1)
    @Measurement(iterations = 3, time = 1)
    @State(Scope.Thread)
    @Threads(1)
    @Fork(2)
    public class Writing {
        static final int TARGET_SIZE = 1024;
        static final int[]
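The excerpt cuts off before the benchmark bodies, so the following is only a sketch of the kind of comparison being described, not the original Writing class: one benchmark stores a primitive into a pre-allocated array, the other stores a reference. On HotSpot's generational collectors the reference store additionally runs the GC write barrier (card marking), which is the usual explanation for the penalty:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@OutputTimeUnit(TimeUnit.NANOSECONDS)
@BenchmarkMode(Mode.AverageTime)
@State(Scope.Thread)
public class WritingSketch {

    static final int TARGET_SIZE = 1024;

    int[] intArray = new int[TARGET_SIZE];
    Object[] refArray = new Object[TARGET_SIZE];
    Object payload = new Object();
    int index;

    @Benchmark
    public void writePrimitive() {
        // A plain store into an int[]: no interaction with the garbage collector.
        intArray[index] = 1;
        index = (index + 1) & (TARGET_SIZE - 1);
    }

    @Benchmark
    public void writeReference() {
        // A reference store also executes the collector's write barrier (card mark).
        refArray[index] = payload;
        index = (index + 1) & (TARGET_SIZE - 1);
    }
}
```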