I was just watching the Google I/O videos and they talked about the JIT compiler that they included in Android. They showed a demo of the performance improvements thanks to this new compiler. So, what exactly is a JIT compiler?
JIT = Just in Time. It's a process whereby a program that would otherwise be interpreted (e.g. Java bytecode or JavaScript code) is converted to native machine code on the fly, as it runs, to improve performance.
Some of the benefits: the JIT compiler can see the hot spots at runtime and apply more aggressive optimisations to them, and it can take advantage of any extensions the current processor supports, such as SSE2.
This question may have more info: How a JIT compiler helps performance of applications?
It's a just-in-time compiler, halfway between an interpreter and a compiler (i.e. it compiles, but only just before the code is executed).
This lets it optimise the compilation using dynamic information that is only known at runtime (ordinary compilers run ahead of time, and so only have access to compile-time information). JIT compilers are a lot harder to write, but they can give great performance improvements.
For more information, as always, see Wikipedia:
In computing, just-in-time compilation (JIT), also known as dynamic translation, is a technique for improving the runtime performance of a computer program. Traditionally, computer programs had two modes of runtime operation, either interpreted or static (ahead-of-time) compilation. Interpreted code was translated from a high-level language to a machine code continuously during every execution, whereas static compilation translated code into machine code before execution, and only required this translation once.
JIT compilers represented a hybrid approach, with translation occurring continuously, as with interpreters, but with caching of translated code to minimize performance degradation. It also offers other advantages over statically compiled code at development time, such as handling of late-bound data types and the ability to enforce security guarantees.
A JIT compiler is usually the last part of a VM's pipeline and generates machine code from the VM's intermediate language.
It improves speed by optimizing the generated code for the environment it runs on (CPU-specific instructions, cache sizes, ...).
Traditional compilers also optimize the generated machine code, but they have to do so without knowing which specific resources will actually be available at runtime.
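As a small aside, on a standard JVM you can ask the running VM which JIT compiler it is using through the java.lang.management API; this is just a quick way to see that the JIT is part of the VM itself (the name and timings vary between JVMs, and the bean can be null on a VM with no JIT):

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class JitInfo {
    public static void main(String[] args) {
        // The CompilationMXBean describes the JIT compiler bundled with this JVM.
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        if (jit == null) {
            System.out.println("This VM has no JIT compiler.");
            return;
        }
        System.out.println("JIT compiler: " + jit.getName());
        if (jit.isCompilationTimeMonitoringSupported()) {
            System.out.println("Time spent JIT-compiling so far: "
                    + jit.getTotalCompilationTime() + " ms");
        }
    }
}
```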
The Just-In-Time (JIT) compiler is a component of the Java™ Runtime Environment that improves the performance of Java applications at run time.
Refer to the IBM documentation here.
The interpreter basically does this:
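In rough outline, it is a loop like the one below. This is only a toy sketch with invented opcodes for a tiny stack machine, not the real HotSpot interpreter, but it shows the pattern: fetch a bytecode, look up what it means, execute it, repeat.

```java
// Toy bytecode interpreter: fetch, decode, execute, repeat.
// The opcodes and the "program" are made up for illustration only.
public class ToyInterpreter {
    static final int PUSH = 0, ADD = 1, PRINT = 2, HALT = 3;

    public static void main(String[] args) {
        // "Bytecode" for: push 2, push 3, add, print, halt
        int[] code = {PUSH, 2, PUSH, 3, ADD, PRINT, HALT};
        int[] stack = new int[16];
        int sp = 0, pc = 0;

        while (true) {
            int op = code[pc++];              // 1. fetch the next bytecode
            switch (op) {                     // 2. look up what it means
                case PUSH:  stack[sp++] = code[pc++];             break; // 3. execute it
                case ADD:   stack[sp - 2] += stack[sp - 1]; sp--; break;
                case PRINT: System.out.println(stack[sp - 1]);    break;
                case HALT:  return;
            }
        }                                     // 4. repeat for the next bytecode
    }
}
```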
That is simple, it works well, and it will run your Java program. But it's also inefficient, because looking up the native instruction(s) for every single bytecode to be executed costs processing time. So the JVM contains a second mechanism, the Just-In-Time compiler, which basically does this:
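Again, this is only a sketch: a Java lambda stands in for the freshly generated machine code, and a plain map stands in for the JVM's internal code cache, but it illustrates the compile-once-then-reuse idea.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntBinaryOperator;

// Toy model of "compile a whole method once, cache the result, reuse it".
// A lambda plays the role of emitted machine code here; a real JIT emits
// actual native instructions and the JVM keeps them in a code cache.
public class ToyJitCache {
    private final Map<String, IntBinaryOperator> codeCache = new HashMap<>();

    IntBinaryOperator getOrCompile(String methodName) {
        return codeCache.computeIfAbsent(methodName, name -> {
            System.out.println("JIT-compiling " + name + " ...");  // happens only once
            return (a, b) -> a + b;   // pretend this is freshly emitted native code
        });
    }

    public static void main(String[] args) {
        ToyJitCache jit = new ToyJitCache();
        System.out.println(jit.getOrCompile("add").applyAsInt(2, 3)); // compiles, then runs
        System.out.println(jit.getOrCompile("add").applyAsInt(4, 5)); // cached code, no recompile
    }
}
```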
After it has converted the bytecodes for a method to native machine instructions, the JVM remembers that native code, so that the next time the method has to be run, it can simply run the native instructions - it doesn't need to convert the bytecodes every time the method is run. This makes the program run much faster.
Also, the JIT performs lots of optimizations to generate native code that runs as fast as possible.
Java code is normally distributed as bytecode, which is machine-independent intermediate code. (The same idea was used earlier in the UCSD p-System, developed in the 1970s.) The advantage of this is that the same application can run on different processors and operating systems. In addition, the bytecode is often smaller than a compiled application.
The disadvantage is that interpreting the code is slow compared to running compiled code. To solve this problem, the JIT compiler was developed. A JIT compiler compiles the code into machine code just before it is executed. This speeds up execution compared to an interpreter, but additional time is spent compiling every time the program is run. In addition, since a JIT compiler must compile quickly, it cannot use the expensive optimization techniques that static compilers can.
Another approach is HotSpot compiling. It initially runs as an interpreter, but detects which routines are used most often and compiles only those. The advantage is that there is no initial delay while everything is compiled up front. In addition, the HotSpot compiler can profile the code during execution and then apply stronger optimizations to the most important routines. It may even gather information so that when you run the same application again and again, it runs faster and faster. More information about HotSpot compiling can be found in this article (thanks to Pangea for the link).
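If you want to watch HotSpot doing this, a small hot loop and the standard -XX:+PrintCompilation flag are enough; the exact log format varies between JVM versions, but you should see a line appear when hotMethod gets compiled (the class and method names here are just an example):

```java
// Run with:  java -XX:+PrintCompilation HotLoop
// After enough calls, HotSpot decides hotMethod is "hot" and JIT-compiles it.
public class HotLoop {
    static long hotMethod(long n) {
        long sum = 0;
        for (long i = 0; i < n; i++) {
            sum += i;                       // simple work the JIT can optimise
        }
        return sum;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 100_000; i++) { // call it often enough to look hot
            total += hotMethod(1_000);
        }
        System.out.println(total);          // use the result so the work isn't dead code
    }
}
```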
Of course, instead of using a JIT compiler, you could just use a static compiler to compile the bytecode for your machine ahead of time. This allows full optimization, and you do not need to compile again every time you run the application. However, on phones and in web pages you often execute the code (or applet) only once, so a JIT compiler may be a better choice.
Update
Python uses a related caching trick, even though its compiler is not a JIT that produces machine code: source files have the extension .py, and when a program is run, Python compiles them to bytecode and saves the result in .pyc files. The next time you run the same program, if the .py file has not changed, there is no need to compile it again; Python loads the previously compiled .pyc file instead. This speeds up the start of the program.
So, what is the duty of a JIT compiler?
As others have mentioned, JIT is short for "Just In Time", which in this sense means "just in time for execution".
When you compile programs to machine code ahead of time, they are targeted at one particular platform, so they cannot be tuned for the exact machine they eventually run on.
That is why JIT compilation was "invented": you compile the last bits and pieces of your code on the executing platform itself, so you can see it as a "just before execution" compiler/optimizer.
JIT makes our lives easier and (hopefully) makes our applications run faster.
There are of course other purposes of JIT compilation; the above is just one of them.