I want to point out that one other big reason for static analysis being incomplete in practice is that it’s basically impossible to get complete static analysis for lots of IRL problems, even in limited areas, without huge discoveries in physics that would demand extraordinary evidence. The best example of this is Rice’s theorem:
https://en.wikipedia.org/wiki/Rice%27s_theorem
This is a huge limit on how much static analysis we can perform IRL, though a more relevant result is probably the co-NP-completeness of the tautology problem, which again is related to static analysis.
Rice’s theorem and the halting problem are completely irrelevant in practice as sources of difficulty. Take a look at the proofs. The program being analyzed would basically need to reason about the static analyzer, and then act contrary to the analyzer’s expectation. Programs you find in the real world don’t do that, and they wouldn’t know (or care about) the specific analyzer enough to anticipate its predictions anyway.
The halting problem is a thing because there exists an ornery program that makes a deliberate effort not to be predictable by this specific analyzer. But that’s not something that happens on its own in the real world, not unless the program is intelligent and wants to make itself less vulnerable to the analyzer’s insight.
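To make that concrete, here’s a rough sketch of the construction (in Python, with made-up names; `predicts_halts` stands in for the supposedly complete analyzer, nothing like it actually exists):

```python
def predicts_halts(program, argument) -> bool:
    """Stand-in for a supposedly complete, always-correct halting analyzer."""
    raise NotImplementedError  # assumed to exist only for the sake of argument

def contrarian(program):
    """The 'ornery' program: ask the analyzer about this very situation,
    then deliberately do the opposite of whatever it predicted."""
    if predicts_halts(program, program):
        while True:   # analyzer said "halts", so loop forever
            pass
    else:
        return        # analyzer said "doesn't halt", so halt immediately

# Feeding contrarian to itself is the contradiction: whatever
# predicts_halts(contrarian, contrarian) answers, contrarian does the
# opposite, so no such analyzer can be both total and correct.
```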
The point is that it can get really, really hard for a static analyzer to be complete if you ask for enough generality from it.
The proof basically works by showing that if you figured out a way to, say, automatically find bugs in programs and make sure a program meets its specification, or to figure out whether a program platonically implements the square function infallibly, or any other way to identify a non-trivial semantic property, we could convert it into a program that solves the halting problem, and thus it would have to be able to decide every recursively enumerable problem.
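Here’s a rough sketch of that reduction (Python again, with a hypothetical `decides_is_square_function` standing in for the assumed property checker):

```python
def decides_is_square_function(program) -> bool:
    """Assumed oracle: True iff `program` computes n -> n*n on every input."""
    raise NotImplementedError  # the thing we pretend to have

def halts(program, argument) -> bool:
    """A halting decider built out of the property oracle."""

    def gadget(n):
        # Run the program whose halting we want to decide, ignoring its result.
        # If it never halts, gadget computes nothing at all; if it halts,
        # gadget then behaves exactly like the square function.
        program(argument)
        return n * n

    # gadget "implements square" exactly when program(argument) halts,
    # so the property oracle answers the halting question for us.
    return decides_is_square_function(gadget)
```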
For a more practical example of static analysis being hard, I’d say the many NP- and co-NP-completeness results for lots of problems, or even the PSPACE-completeness of problems like model checking, show that unless huge assumptions are made about physics, completeness of static analysis is a pipe dream even for limited areas.
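As a toy illustration of the tautology connection (the brute-force checker and names below are just illustrative): deciding whether the else branch of a conditional is dead code is the same as deciding whether its guard is a tautology, and the obvious checker takes time exponential in the number of variables, which the co-NP-completeness result says we shouldn’t expect to avoid in general.

```python
from itertools import product

def is_tautology(formula, variables) -> bool:
    """Brute-force tautology check: try all 2^n assignments."""
    return all(formula(dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))

# The `else` branch of `if guard: ...` is dead code exactly when guard
# is a tautology, so this static-analysis question inherits the
# co-NP-hardness of tautology checking.
guard = lambda env: env["a"] or not env["a"] or env["b"]
print(is_tautology(guard, ["a", "b"]))  # True -> the else branch is unreachable
```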
Static analysis will likely be very hard for a long time to come.
My objection is to “IRL” and “in practice” in your top-level comment, as if this is what static analysis is actually concerned with. In various formal questions, the halting problem is a big deal, and I expect the same shape to be important in the decision theory of the future (which might be an invention that humans will be too late to be first to develop). Being cooperative about loops of mutual prediction seems crucial for (acausal) coordination; the halting problem just observes that the same ingredients can be used to destroy all possibility of coordination.