So if you run an optimizing compiler on its own source code, and then use the resulting binary to compile that same source again, both compilations should produce identical output—at most, the first-generation product will run faster than the original compiler did.
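The fixed-point idea can be illustrated with a toy that has nothing to do with real compilers: treat "optimization" as a source-to-source rewrite and apply it to its own output. The rewrite rule here is an invented peephole transform, purely for illustration; the point is that repeated application stabilizes rather than improving forever.

```python
# Toy illustration (not a real compiler): an "optimizer" as a
# source-to-source rewrite. Applying it to its own output quickly
# reaches a fixed point, after which further passes change nothing.
def optimize(code: str) -> str:
    # Hypothetical peephole rule: drop useless "+ 0" terms,
    # then tidy up the doubled spaces that removal leaves behind.
    return code.replace("+ 0", "").replace("  ", " ")

prog = "return x + 0 + 0"
once = optimize(prog)          # first pass: real improvement
twice = optimize(once)         # second pass: nothing left to do
assert twice == optimize(twice)  # a fixed point: no further gains
```

The analogy to self-compilation is loose but the shape is the same: the first pass buys you something, and every pass after that buys you nothing.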
Now if you are one of those annoying nitpicky types, like me, you will notice a flaw in this logic:
As one of those annoying nitpicky types, I think it is perhaps interesting to note that the current highest level of optimization available with the Microsoft C++ compilers is profile-guided optimization (PGO). The basic idea is to collect data from actual runs of the program and feed it back into the optimizer to guide its optimization decisions. Adaptive optimization is the same basic idea applied to code while it is running, based on live performance data.
As you might predict, the speedups achievable by such techniques are worthwhile but modest and not FOOM-worthy.
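For concreteness, the MSVC PGO workflow is roughly the following three-step loop; this is a sketch using the documented `/GL`, `/LTCG`, `/GENPROFILE`, and `/USEPROFILE` options, with file names that are purely illustrative:

```shell
:: Sketch of the MSVC profile-guided optimization workflow.
:: 1. Compile with whole-program optimization enabled.
cl /c /O2 /GL app.cpp

:: 2. Link an instrumented build that records execution counts.
link /LTCG /GENPROFILE app.obj

:: 3. Run representative workloads to collect profile data.
app.exe typical_input.dat

:: 4. Relink, letting the optimizer use the collected profile
::    (e.g. for hot/cold code layout and speculative inlining).
link /LTCG /USEPROFILE app.obj
```

Note that the gains depend entirely on how representative the training runs are—the optimizer is guided by observed behavior, not by any deeper understanding of the program.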