Are there scenarios where a JIT compiler is faster than other compilers like C++?
Do you think JIT compilers will see only minor optimizations and features in the future, or do you think there will be major breakthroughs?
JIT compilers have more data they can use to influence optimizations. Of course, someone actually has to write code to use that data, so it's not as simple as that.
Basically, a JIT compiler has the chance to actually profile the application as it runs, and do some hinting based on that information. "Offline" compilers cannot determine how often a branch is taken and how often it falls through without inserting special instrumentation code, asking the developer to run the program and put it through its paces, and then recompiling.
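This is exactly the profile-guided-optimization workflow that some offline compilers offer. With GCC or Clang, for example, it looks roughly like this (app.c stands in for your program):

gcc -fprofile-generate app.c -o app   # build an instrumented binary
./app                                 # put it through its paces
gcc -fprofile-use app.c -o app        # recompile using the collected profile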
Why does this matter?
//code before
if (errorCondition)
{
    //error handling
}
//code after
This gets converted into something like:
//code before
Branch if not error to Code After
//error handling
Code After:
//code after
In the absence of information from the branch prediction unit, x86 processors statically predict a forward conditional jump as not taken. That means the error handling code is predicted to run, and the processor has to flush the pipeline when it figures out that the error condition didn't occur.
A JIT compiler could see that and insert a hint for the branch, so that the CPU predicts in the correct direction. Granted, offline compilers can structure the code in a way that avoids the mispredict, but if you ever need to look at the assembly, you might not like it jumping around everywhere...
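For illustration, here is one way the same code could be laid out so the common path falls straight through, with the error handling moved out of line (a sketch, not the output of any particular compiler):
//code before
Branch if error to Error Handling
Code After:
//code after
...
Error Handling:
//error handling
Jump to Code After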
Yes, there certainly are such scenarios.
I think there will be breakthroughs in the future. In particular, I think that the combination of JIT compilation and dynamic typing will be significantly improved. We are already seeing this in the JavaScript space with Chrome's V8 and TraceMonkey. I expect to see other improvements of similar magnitude in the not-too-distant future. This is important because even so-called "statically typed" languages tend to have a number of dynamic features.
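As a small illustration of such a dynamic feature, consider a virtual call in C#. An offline compiler generally has to emit an indirect call here, while a JIT that observes only one concrete type at run time can speculate on that type and inline the callee (the types below are invented for illustration):

abstract class Shape { public abstract double Area(); }

sealed class Circle : Shape
{
    public double Radius;
    public override double Area() { return System.Math.PI * Radius * Radius; }
}

static class Geometry
{
    public static double TotalArea(Shape[] shapes)
    {
        double sum = 0;
        foreach (Shape s in shapes)
            sum += s.Area(); // virtual call: if only Circles ever show up here,
                             // a JIT can guess the type, inline Area, and fall
                             // back to a real virtual call if the guess fails
        return sum;
    }
}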
One advantage of JITing which has not yet been mentioned is that it's possible for a program to define an effectively unbounded number of generic types. For example:
interface IGenericAction { bool Act<T>(); }

struct Blah<T>
{
    public static void ActUpon(IGenericAction action)
    {
        if (action.Act<T>())
            Blah<Blah<T>>.ActUpon(action);
    }
}
Calling Blah<Int32>.ActUpon(act) will call act.Act<Int32>(). If that method returns true, it will call Blah<Blah<Int32>>.ActUpon(act), which will in turn call act.Act<Blah<Int32>>(). If that returns true, more calls will be performed with an even more deeply nested type. Generating code for all the ActUpon methods that could ever be called would be impossible, but fortunately it's not necessary: types don't need to be generated until they are used. If act.Act<Blah<...50 levels deep...>>() returns false, then Blah<Blah<...50 levels deep...>>.ActUpon won't call Blah<Blah<...51 levels deep...>>.ActUpon, and the latter type will never need to be created.
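To see the lazy instantiation in action, here is a hypothetical implementation (CountedAction is invented for illustration) that stops after a fixed number of levels:

class CountedAction : IGenericAction
{
    private int remaining;
    public CountedAction(int depth) { remaining = depth; }
    public bool Act<T>()
    {
        System.Console.WriteLine(typeof(T)); // prints the increasingly nested type
        return remaining-- > 0;
    }
}

// Only the first few nestings of Blah<...> are ever instantiated:
Blah<int>.ActUpon(new CountedAction(3));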
Yes, JIT compilers can produce faster machine code optimized for the current environment. But in practice, VM programs are slower than native programs because JITing itself consumes time (more optimization means more time), and for many methods, JITing them may consume more time than executing them. That is why NGen, which pre-compiles assemblies to native images ahead of time, was introduced in .NET.
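For example, an assembly can be pre-compiled to a native image once, at install time, with the ngen tool that ships with the .NET Framework (MyApp.exe stands in for your assembly):

ngen install MyApp.exe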
A side effect of JITing is high memory consumption. That isn't directly related to computation speed, but it can slow down the whole program's execution, because high memory consumption increases the probability that your code will be paged out to secondary storage.