On Interpreters, Optimizing Compilers, and JIT
Summary: Describe at a high level how interpreters, static compilers, and JIT differ in their ability to optimize code.
Optimizing compilers are awesome.
Compilers and Interpreters
An interpreter is like a blind man: it can only feel the instruction right in front of it. It grabs that instruction and executes it. It sees a = Ackermann(4, 2) and evaluates it. But it turns out, a isn't ever used.
A compiler can read the entire source code, notice that a is never used, and just delete the entire a = Ackermann(4, 2) computation.
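To make this concrete, here is a minimal Python sketch (function names are my own, and the argument is shrunk to (3, 3) so it actually terminates quickly). CPython, being an interpreter, dutifully evaluates the unused call; a dead-code-eliminating compiler would emit the equivalent of the second function:

```python
def ackermann(m, n):
    """The classic Ackermann function: a tiny definition with explosive runtime."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def interpreter_view():
    # An interpreter evaluates this call even though `a` is never read again:
    a = ackermann(3, 3)  # thousands of recursive calls for nothing
    return "done"

def compiler_view():
    # A compiler sees that `a` is dead and deletes the whole computation:
    return "done"
```

Both functions return the same value, which is exactly why the compiler is allowed to make the substitution: Ackermann is pure, so dropping the unused call is unobservable.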
This is called dead code elimination. Other common optimization strategies include constant folding, inlining, loop unrolling, and tail call elimination.
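Tail call elimination, for instance, rewrites a recursive call in tail position into a jump. Here is the transformation applied by hand in Python (which, notably, does not perform it itself; the names are mine):

```python
def sum_to_recursive(n, acc=0):
    # The recursive call is in tail position: nothing happens after it returns.
    if n == 0:
        return acc
    return sum_to_recursive(n - 1, acc + n)

def sum_to_loop(n, acc=0):
    # Tail call elimination turns that tail call into a plain loop:
    while n != 0:
        n, acc = n - 1, acc + n
    return acc
```

The two compute the same sum, but the recursive version exhausts CPython's stack around n = 1000, while the loop handles any n. That asymmetry is why tail call elimination is more than a speed tweak.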
In general, we can think of an optimizing compiler as a reasoning engine that searches for a better program that is functionally equivalent to the behavior your source code specifies.
A program can be better by running faster, using less memory, producing a smaller binary, or, in the case of tail call elimination, by working at all: a deeply recursive call that would overflow the stack becomes a loop that doesn't.
JIT compilers
Just In Time (JIT) compilers are even more awesome. We as programmers use profilers to check which part of the code we need to optimize. Why not have a program do this? Dynamically at runtime? Taking into account what data we are actually processing?
That’s exactly what JITs do. In theory, inlining calls and unrolling loops just makes your program faster. But it trades off against code size: your binaries get larger, and the hot code may no longer fit into the CPU's instruction cache, at which point the "optimization" can slow you down.
A JIT can run a lightweight profiler while the program is running and identify hot code (code that is executed very often). It then focuses its optimization ability on this hot code.
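A toy model of that profiling step, with invented names and an invented threshold (real JITs use invocation counters in a broadly similar way, though the details differ): count calls and flag a function as hot once it crosses the cutoff.

```python
HOT_THRESHOLD = 1000  # invented cutoff; real JITs tune this carefully

call_counts = {}
hot_functions = set()

def profiled(fn):
    """Wrap a function with a lightweight invocation counter."""
    def wrapper(*args, **kwargs):
        call_counts[fn.__name__] = call_counts.get(fn.__name__, 0) + 1
        if call_counts[fn.__name__] == HOT_THRESHOLD:
            hot_functions.add(fn.__name__)  # a real JIT would recompile here
        return fn(*args, **kwargs)
    return wrapper

@profiled
def square(x):
    return x * x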
But it gets better. Imagine we have some numerical overflow detection in our code. If the numbers in our current input are so small that there is no chance of triggering an overflow, then in principle the JIT could notice this and disable the overflow detection logic.
In general, a JIT can monitor the inputs received. It then infers what logical predicates hold on these inputs. Now it can generate code that assumes that these predicates hold. This potentially allows the JIT to generate more efficient code, which also includes a check that in fact the current input does satisfy the predicates.
A static compiler simply can’t perform this type of optimization.
You may wonder why you were downvoted. I guess it is because this post feels like a knowledge-base fragment without context. LW is not a dictionary. Why is this relevant for the LW readers? What is your angle?
JITs are very underrated. Bad JavaScript code runs so much faster than bad C code. Sorry you got downvoted.