jit

My 32 bit headache is now a 64bit migraine?!? (or 64bit .NET CLR Runtime issues)

拜拜、爱过 submitted on 2019-12-02 19:39:50
What unusual, unexpected consequences have you run into, in terms of performance, memory, etc., when switching your .NET applications from the 32-bit JIT to the 64-bit JIT? I'm interested in the good, but more interested in the surprisingly bad issues people have hit. I am in the process of writing a new .NET application which will be deployed as both 32-bit and 64-bit. There have been many questions about the issues with porting an application; I am unconcerned with the "gotchas" from a programming/porting standpoint (i.e., handling native/COM interop correctly, reference …
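As a side note, a quick way to confirm which JIT a given process actually ended up under is to log its pointer size at startup. The sketch below is my own illustration, not part of the original question; Environment.Is64BitProcess only exists on .NET 4.0 and later, while IntPtr.Size works on any version.

    using System;

    class BitnessCheck
    {
        static void Main()
        {
            // 4 => running under the 32-bit (x86) JIT, 8 => the 64-bit (x64) JIT
            Console.WriteLine("Pointer size: {0} bytes", IntPtr.Size);
            // Available on .NET 4.0+ only
            Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
        }
    }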

Is it true that having lots of small methods helps the JIT compiler optimize?

旧街凉风 submitted on 2019-12-02 17:15:57
In a recent discussion about how to optimize some code, I was told that breaking code up into lots of small methods can significantly increase performance, because the JIT compiler doesn't like to optimize large methods. I wasn't sure about this, since it seems the JIT compiler should be able to identify self-contained segments of code on its own, regardless of whether they are in their own method or not. Can anyone confirm or refute this claim? The HotSpot JIT only inlines methods that are smaller than a certain (configurable) size, so using smaller methods allows more inlining, which is good.
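For anyone who wants to see those decisions rather than take them on faith, the toy program below (my own sketch, not from the discussion) gives HotSpot an obvious inlining candidate; the diagnostic flags in the comment are standard HotSpot options, though the exact size thresholds vary by JVM version.

    // Run with: java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo
    // The bytecode-size limits involved are -XX:MaxInlineSize and -XX:FreqInlineSize.
    public class InlineDemo {
        // Small, hot helper: an easy inlining candidate.
        private static int square(int x) { return x * x; }

        public static void main(String[] args) {
            long sum = 0;
            for (int i = 0; i < 50_000_000; i++) {
                sum += square(i);       // call site HotSpot can inline away
            }
            System.out.println(sum);    // keep the result alive
        }
    }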

Can I force the JVM to natively compile a given method?

删除回忆录丶 submitted on 2019-12-02 17:07:16
I have a performance-critical method that is called often when my app starts up. Eventually it gets JIT-compiled, but only after spending a noticeable amount of time running in the interpreter. Is there any way I can tell the JVM that I want this method compiled right from the start (without tweaking other internals with settings like -XX:CompileThreshold)? The only way I know of is the -Xcomp flag, but it is not generally advisable to use. It forces immediate JIT compilation of ALL classes and methods the first time they are run. The downside is that you will see a performance decrease on initial startup (due to …
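A less drastic workaround than -Xcomp, if the startup sequence can tolerate it, is to warm the method up explicitly so the interpreter's invocation counter crosses the compile threshold before real work begins. The sketch below uses a hypothetical hotPath method, not the asker's code; the threshold figure in the comment is the traditional server-compiler default and may differ on other JVMs.

    public class Warmup {
        static int hotPath(int x) {
            return (x * 31) ^ (x >>> 3);   // stand-in for the real critical code
        }

        public static void main(String[] args) {
            int sink = 0;
            // The classic server-compiler default for -XX:CompileThreshold is
            // about 10,000 invocations, so loop comfortably past it.
            for (int i = 0; i < 20_000; i++) {
                sink += hotPath(i);
            }
            System.out.println("warm-up done: " + sink);
            // ... start the real, latency-sensitive work here ...
        }
    }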

Runtime optimization of static languages: JIT for C++?

浪子不回头ぞ submitted on 2019-12-02 15:55:29
Is anyone using JIT tricks to improve the runtime performance of statically compiled languages such as C++? It seems like hotspot analysis and branch prediction based on observations made during runtime could improve the performance of any code, but maybe there's some fundamental strategic reason why making such observations and implementing changes at runtime is only possible in virtual machines. I distinctly recall overhearing C++ compiler writers mutter "you can do that for programs written in C++ too" while listening to dynamic language enthusiasts talk about collecting statistics and …
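The closest mainstream answer for C++ today is not a JIT at all but profile-guided optimization: a training run collects the same sort of branch and hot-path statistics a JIT would gather, and a second, fully static compile consumes them. The sketch below is my own example; the flags shown are GCC's, and Clang and MSVC have their own equivalents.

    // Build and train roughly like this:
    //   g++ -O2 -fprofile-generate pgo_demo.cpp -o pgo_demo   # instrumented build
    //   ./pgo_demo                                            # training run
    //   g++ -O2 -fprofile-use pgo_demo.cpp -o pgo_demo        # optimized rebuild
    #include <cstdio>

    int main() {
        long sum = 0;
        for (int i = 0; i < 100000000; ++i) {
            // A heavily biased branch whose bias PGO learns from the training run.
            if (i % 97 != 0) sum += i; else sum -= i;
        }
        std::printf("%ld\n", sum);
        return 0;
    }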

java PrintCompilation output: what's the meaning of “made not entrant” and “made zombie”

五迷三道 submitted on 2019-12-02 15:07:41
When running a Java 1.6 (1.6.0_03-b05) app, I've added the -XX:+PrintCompilation flag. In the output for some methods, in particular some of those that I know are getting called a lot, I see the text "made not entrant" and "made zombie". What do these mean? My best guess is that it's a decompilation step before recompiling either that method or a dependency with greater optimisation. Is that true? Why "zombie" and "entrant"? Example, with quite a bit of time between some of these lines:
[... near the beginning]
42 jsr166y.LinkedTransferQueue::xfer (294 bytes)
[... much later]
42 made not entrant
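One way to see these messages on demand, rather than waiting for them to appear in a real app, is to invalidate a speculative compilation deliberately. The program below is my own illustration, not the asker's code: the first loop is typically compiled on the assumption that Square is the only Shape implementation, and loading Circle invalidates that code, which then tends to show up as "made not entrant" under -XX:+PrintCompilation (exact behaviour varies by JVM version).

    // Run with: java -XX:+PrintCompilation DeoptDemo
    public class DeoptDemo {
        interface Shape { int area(); }
        static class Square implements Shape { public int area() { return 4; } }
        static class Circle implements Shape { public int area() { return 3; } }

        static int total(Shape s) { return s.area() + 1; }

        public static void main(String[] args) {
            Shape square = new Square();
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += total(square); // monomorphic: compiled optimistically
            Shape circle = new Circle();                              // a second implementation appears
            for (int i = 0; i < 1_000_000; i++) sum += total(circle); // older code may be made not entrant
            System.out.println(sum);
        }
    }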

JIT vs Interpreters

浪子不回头ぞ submitted on 2019-12-02 14:10:14
I can't work out the difference between a JIT and an interpreter. A JIT sits somewhere between an interpreter and a compiler: at runtime it converts bytecode to machine code (inside the JVM, or for the actual machine?), and the next time it takes the compiled code from the cache and runs it. Am I right? An interpreter will directly execute bytecode without transforming it into machine code. Is that right? How will the real processor in my PC understand the instructions? Please clear up my doubts. First things first: with the JVM, both the interpreter and the compiler (the JVM's JIT compiler, not a source-code compiler like javac) produce native code (a.k.a. …
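A toy dispatch loop makes the distinction concrete. The class below is my own illustration and nothing like the real JVM internals: the interpreter is itself native code that fetches, decodes and performs each bytecode every time it is reached, whereas a JIT would translate the whole byte[] into machine code once, cache it, and jump straight into that code on later calls.

    public class ToyInterpreter {
        static final byte PUSH = 0, ADD = 1, PRINT = 2, HALT = 3;

        static void run(byte[] code) {
            int[] stack = new int[16];
            int sp = 0, pc = 0;
            while (true) {
                switch (code[pc++]) {                    // fetch-decode-execute, every time
                    case PUSH:  stack[sp++] = code[pc++]; break;
                    case ADD:   stack[sp - 2] += stack[sp - 1]; sp--; break;
                    case PRINT: System.out.println(stack[--sp]); break;
                    case HALT:  return;
                }
            }
        }

        public static void main(String[] args) {
            run(new byte[] { PUSH, 2, PUSH, 3, ADD, PRINT, HALT });  // prints 5
        }
    }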

.NET JIT potential error?

拟墨画扇 submitted on 2019-12-02 13:47:42
The following code gives different output when running the release build inside Visual Studio and when running it outside Visual Studio. I'm using Visual Studio 2008 and targeting .NET 3.5; I've also tried .NET 3.5 SP1. When running outside Visual Studio, the JIT's optimizations should kick in. Either (a) there's something subtle going on with C# that I'm missing, or (b) the JIT is actually in error. I'm doubtful that the JIT can go wrong, but I'm running out of other possibilities...
Output when running inside Visual Studio: 0 0, 0 1, 1 0, 1 1,
Output when running the release outside of Visual Studio: 0 2, 0 2, 1 2
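Without the original code it is impossible to say what is happening here, but for context, the snippet below (hypothetical, unrelated to the poster's program) shows one legitimate way the optimizing JIT that runs outside the debugger can change observable behaviour: without volatile, the read of stop may be hoisted out of the loop in an optimized release run, while the same build behaves "correctly" when a debugger is attached.

    using System;
    using System.Threading;

    class HoistDemo
    {
        static bool stop;               // try 'static volatile bool stop;' to change the outcome

        static void Main()
        {
            var worker = new Thread(() =>
            {
                while (!stop) { }       // the JIT may hoist this read to a single check
                Console.WriteLine("worker saw stop");
            });
            worker.Start();
            Thread.Sleep(500);
            stop = true;
            worker.Join(2000);
            Console.WriteLine(worker.IsAlive ? "worker still spinning" : "worker exited");
        }
    }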

Surprising CLR / JIT? behaviour - deferred initialization of a local variable

ぃ、小莉子 submitted on 2019-12-02 11:48:48
Question: I have just encountered something quite bizarre while running an app in Debug mode (VS 2008 Express, Any CPU). I would appreciate it if someone could enlighten me as to what is happening here:
// PredefinedSizeGroupMappings is null here
Dictionary<string, int> groupIDs = PredefinedSizeGroupMappings ?? new Dictionary<string, int>();
// so groupIDs is now initialized as an empty Dictionary<string, int>, as expected
// now: PredefinedSizesMappings is null here - therefore I expect sizeIds
// to be …
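In isolation the ?? pattern does what the comments expect. The self-contained sketch below reuses the member name from the excerpt but is otherwise a stand-in, not the poster's code, and it does not claim to explain whatever the debugger was showing:

    using System;
    using System.Collections.Generic;

    class CoalesceDemo
    {
        static Dictionary<string, int> PredefinedSizeGroupMappings;   // left null on purpose

        static void Main()
        {
            Dictionary<string, int> groupIDs =
                PredefinedSizeGroupMappings ?? new Dictionary<string, int>();

            // With the field still null, ?? must yield the new, empty dictionary.
            Console.WriteLine(groupIDs == null
                ? "groupIDs is null?!"
                : "groupIDs is an empty dictionary, Count = " + groupIDs.Count);
        }
    }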

How do I make gmpy array operations faster?

守給你的承諾、 submitted on 2019-12-02 07:30:50
I've been having trouble with speed while trying to use the gmpy2 module.
import numpy as np
import gmpy2 as gm
N = 1000
a = range(N)
%timeit [gm.sin(x) for x in a]   # 100 loops, best of 3: 7.39 ms per loop
%timeit np.sin(a)                # 10000 loops, best of 3: 198 us per loop
I was wondering if I could somehow speed up this computation. I was thinking a JIT or multiprocessing might help, but I haven't figured out how to do it. Any help would be greatly appreciated; if you want me to post more information, please let me know. I was curious to see how much of a performance increase would be possible, so I wrote a new …
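If double precision is actually enough, numpy already wins by a wide margin, and a numba-compiled loop is one way to get similar speed while keeping a Python-level loop. The sketch below assumes numba is installed and deliberately drops gmpy2's extended precision, so it is not a drop-in replacement:

    import math
    import numpy as np
    from numba import njit

    @njit(cache=True)
    def sin_all(xs):
        out = np.empty_like(xs)
        for i in range(xs.shape[0]):
            out[i] = math.sin(xs[i])   # compiled into a tight native loop
        return out

    a = np.arange(1000, dtype=np.float64)
    result = sin_all(a)                # the first call also pays the JIT compile cost
    print(result[:5])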

The Anaconda prompt freezes when I run code with numba's “jit” decorator

痞子三分冷 submitted on 2019-12-02 04:27:23
I have some Python code that should run just fine. I'm running it in Anaconda's Spyder IPython console, or in the Anaconda terminal itself, because that is the only way I can use the "numba" library and its "jit" decorator. However, it "freezes" or "hangs" in either one almost every time I run it. There is nothing wrong with the code itself, or else I'd get an error. Sometimes the code runs all the way through perfectly fine, sometimes it prints only the first line from the first function, and sometimes it stops somewhere in the middle. I've tried seeing under which conditions the …
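Two things commonly masquerade as a freeze with numba: the first call to a @jit function includes compilation time, which can take a surprisingly long while, and console output can sit in a buffer so the program looks stalled even while it is working. The sketch below uses a hypothetical heavy function, not the poster's code, to separate those effects:

    import time
    from numba import njit

    @njit
    def heavy(n):
        total = 0.0
        for i in range(n):
            total += i * 0.5
        return total

    print("compiling...", flush=True)
    t0 = time.time()
    heavy(10)                          # tiny call just to trigger compilation
    print("compiled in %.1f s" % (time.time() - t0), flush=True)

    t0 = time.time()
    print("result:", heavy(50_000_000), flush=True)
    print("ran in %.3f s" % (time.time() - t0), flush=True)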