Secondly, we manage to routinely prove facts about the behavior of programs (this is the field of program analysis) despite the fact that in theory this should be “undecidable”. This is because the undecidability issues don’t really crop up in practice.
As someone with a reasonable acquaintance with program analysis, synthesis, and semantics… YIKES.
Rice’s Theorem is, so to speak, the biggest, nastiest rock we semantics folks have to crawl around on a regular basis. The way we generally do it is by constructing algorithms, semantic frameworks, and even entire programming languages in which undecidability cannot happen in the first place, thus restricting ourselves to analyzing something less than the set of all possible programs.
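To make the "languages in which undecidability cannot happen" move concrete, here is a minimal sketch (the node names and structure are purely illustrative, not from any real tool) of a toy language whose only loop construct takes a bound computed before the loop runs. Every program in such a language terminates, so halting is trivially decidable; the price is that not every computable function is expressible.

```python
def evaluate(prog, env):
    """Evaluate a toy AST. Nodes are tuples: ('num', k), ('var', name),
    ('add', a, b), ('assign', name, expr), ('seq', [stmts]),
    ('repeat', count_expr, body)."""
    tag = prog[0]
    if tag == 'num':
        return prog[1]
    if tag == 'var':
        return env[prog[1]]
    if tag == 'add':
        return evaluate(prog[1], env) + evaluate(prog[2], env)
    if tag == 'assign':
        env[prog[1]] = evaluate(prog[2], env)
        return env[prog[1]]
    if tag == 'seq':
        result = 0
        for stmt in prog[1]:
            result = evaluate(stmt, env)
        return result
    if tag == 'repeat':
        # The loop bound is fixed up front and the body cannot change it,
        # so termination is guaranteed by construction.
        n = evaluate(prog[1], env)
        result = 0
        for _ in range(n):
            result = evaluate(prog[2], env)
        return result
    raise ValueError(f"unknown node: {tag}")

# Compute 0 + 1 + 1 + 1 + 1 inside a bounded loop.
prog = ('seq', [
    ('assign', 'x', ('num', 0)),
    ('repeat', ('num', 4),
        ('assign', 'x', ('add', ('var', 'x'), ('num', 1)))),
])
print(evaluate(prog, {}))  # 4
```

This is the same idea behind primitive-recursive fragments and total languages like Agda or Coq's structurally recursive core, scaled down to a few lines.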
Now, admittedly, in practice we’ve made a lot of progress this way, because in practice there are really four kinds of programs: ones that provably terminate by design, ones that provably don’t terminate by design, provably undecidable programs (usually programs that rely on the halting behavior of some other program or logic for their own halting behavior), and just plain messed-up what-the-fuck programs.
The last kind are mostly created only by mistake. The third kind come up in program analysis and semantics, but we can usually construct a proof that we’re dealing with a formally undecidable problem there and set reasonable hard bounds on length of iteration or depth of recursion (or even find decidable subclasses of these problems that are decently useful to real people). The second kind is handled by writing coterminating programs over codata. The first kind is handled by writing terminating programs over data.
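The first two kinds can be sketched in a few lines of Python (function names here are my own, for illustration). `total_sum` is a terminating program over data: it recurses structurally on a finite list, each call shrinking its input, so termination is provable. `naturals` is a coterminating, or productive, program over codata: it never halts, but it is guaranteed to yield its next element in finite time, which is the corresponding correctness notion for infinite streams.

```python
from itertools import islice

def total_sum(xs):
    # Structural recursion: the recursive call is always on a strictly
    # smaller list, so this provably terminates on every finite input.
    if not xs:
        return 0
    return xs[0] + total_sum(xs[1:])

def naturals():
    # Productive corecursion: an infinite stream, consumed lazily.
    # Each step does finite work before yielding, so any finite
    # observation of the stream terminates.
    n = 0
    while True:
        yield n
        n += 1

print(total_sum([1, 2, 3]))         # 6
print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]
```

Languages like Agda make this split explicit, with a termination checker for functions over data and a productivity checker for functions over codata; Python just happens to express both patterns conveniently.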
Undecidability issues do come up in practice, and the current research frontier (MIRI's Löbian paper, Gödel Machines, AIXI) indicates that they come up unavoidably in the study of Universal Artificial Intelligence. However, for most problems below the level of program analysis or universal induction, undecidability issues can be handled or contained productively by research effort.