So, you can compress a list of observations about which Turing machines halt by starting with a uniform prior over Chaitin’s omega. This can lead to quite a lot of compression: whether each of the first n Turing machines halts is n bits of data, but it only takes about log(n) bits of Chaitin’s omega to reconstruct, since the first k bits of omega settle the halting of every program of length at most k. If we saw whether more Turing machines halted, we would also uncover more bits of Chaitin’s omega. Is this the kind of thing you are thinking of?
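To make the decoding direction concrete, here is a minimal toy sketch in Python. Everything in it is invented for the example (TOY_PROGRAMS, toy_omega_prefix, decide_halting); a made-up four-program "universe" stands in for a real prefix-free machine enumeration. It shows the standard bookkeeping: dovetail everything, accumulate the halting mass you have observed so far, and once that mass reaches the k-bit prefix of omega, every program of length at most k that is still running can never halt.

```python
from fractions import Fraction

# Toy stand-in for a prefix-free program enumeration: program -> steps until it
# halts, or None for "runs forever". (Invented purely for illustration.)
TOY_PROGRAMS = {"0": 3, "10": 100, "110": None, "111": 7}

def toy_omega_prefix(k):
    """First k bits of the toy omega = sum of 2^-len(p) over halting programs."""
    omega = sum(Fraction(1, 2 ** len(p))
                for p, steps in TOY_PROGRAMS.items() if steps is not None)
    return Fraction(int(omega * 2 ** k), 2 ** k)   # truncate to k bits

def decide_halting(k, max_dovetail_steps=10_000):
    """Decide halting for every program of length <= k from k bits of omega."""
    target = toy_omega_prefix(k)
    halted, mass = set(), Fraction(0)
    for t in range(1, max_dovetail_steps + 1):      # dovetail: one more step each
        for p, steps in TOY_PROGRAMS.items():
            if p not in halted and steps is not None and steps <= t:
                halted.add(p)
                mass += Fraction(1, 2 ** len(p))
        # Once the observed mass reaches the k-bit prefix, any still-running
        # program of length <= k would push omega past its known upper bound,
        # so it can never halt.
        if mass >= target:
            return {p: (p in halted) for p in TOY_PROGRAMS if len(p) <= k}
    raise RuntimeError("ran out of dovetail budget")

print(decide_halting(3))  # {'0': True, '10': True, '110': False, '111': True}
```

In the real construction the dovetailing runs over an actual universal prefix machine and the loop can take astronomically long, but the bookkeeping is exactly this.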
I guess there’s another question of how any of this makes sense if the universe is computable. We can still use information about which Turing machines halt in part of our generative model for a computable universe, even though “x doesn’t halt” is never actually observed.
Perhaps you could make a statement like: Solomonoff induction wins on computable universes for the usual reason, and, in some circumstances, it doesn’t lose too many bits on uncomputable universes either, because it does at least as well as a predictor with a uniform prior over Chaitin’s omega.
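For the computable half, the "usual reason" is just the dominance argument; as a hedged reminder (this is the textbook bound, up to the details of how the environment is coded, and nothing specific to the omega idea): if the true environment is a computable measure $\mu$ and $M$ is the Solomonoff prior, then

$$ M(x_{1:n}) \;\ge\; 2^{-K(\mu)}\,\mu(x_{1:n}) \quad\Rightarrow\quad \log_2 \frac{\mu(x_{1:n})}{M(x_{1:n})} \;\le\; K(\mu) \quad \text{for all } n, $$

so the total number of extra bits the predictor ever pays, relative to the true computable environment, is bounded by roughly the length of the shortest program for that environment. The open question here is what plays the role of $K(\mu)$ when the environment consults a halting oracle.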
> I guess there’s another question of how any of this makes sense if the universe is computable. We can still use information about which Turing machines halt in part of our generative model for a computable universe, even though “x doesn’t halt” is never actually observed.
Think more like an inductive human being than a deductive formal system: “x doesn’t ever halt” would require proof, but “x really seems not to halt for a very long time and seems to contain an infinitely growing recursion” gives us good reason to believe “x doesn’t halt”, and guides us in trying to assemble a proof.
(Likewise, as a side note, it can actually be quite difficult to prove “x does halt (on all inputs)”, because merely being observed to halt on many inputs is no guarantee at all. Unless the looping/recursion is trivially structural on the input, convincing a type system’s termination checker that “x does halt” can be genuinely hard!)
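A small Python sketch of that asymmetry (the function names are mine): the first definition recurses structurally on a strictly smaller argument, which is exactly the shape a termination checker in a language like Agda or Lean would accept, while the second halts on every input anyone has ever tried, yet no decreasing measure is known, so a checker would reject it.

```python
def length(xs):
    # Structural recursion: each call receives a strictly shorter list,
    # so termination is immediate from the shape of the definition.
    return 0 if not xs else 1 + length(xs[1:])

def collatz_steps(n):
    # Halts for every n ever tested, but "halts for all n" is an open problem;
    # with no known decreasing measure, no termination checker accepts this.
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(length([1, 2, 3]), collatz_steps(27))  # 3 111
```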
> Perhaps you could make a statement like: Solomonoff induction wins on computable universes for the usual reason, and, in some circumstances, it doesn’t lose too many bits on uncomputable universes either, because it does at least as well as a predictor with a uniform prior over Chaitin’s omega.
I think that considering “computable” or “uncomputable” to be ontological properties of universes is… contrary to the philosophical viewpoint being taken here? At least from my point of view here, you’re not breaking an Ontological Law of the Platonic Plane of Maths if you “decide the undecidable”; you’re breaking a law of thermodynamics (namely, you’re trying to act as if you possessed infinite entropy with which to compute). It’s physically impossible, not Platonically impossible.
If I may invoke a phrase, I’m basically taking the point of view that a formal system is a map of some mathematical territory (which Turing machines halt), but that it is possible to have increasingly detailed and accurate maps (systems with stronger axioms) of the same territory (increasing the number of Turing machines whose behavior we can decide). The hard nut to crack is: how do we acquire better maps?
Further, the universe doesn’t necessarily have to be computable in any ontological sense. To talk about the physical, empirical universe, we only need to make predictions about events which take place a finite period of time into the future. If we’re composing programs whose termination we want to guarantee, we can use inductive and coinductive type systems to guarantee termination and stepwise termination where we need them, and add hard time limits where we can’t be sure.
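The "hard time limit" move can be as simple as threading a fuel parameter through the computation. A minimal sketch, assuming nothing beyond the Python standard library (run_with_fuel and its step-function convention are invented here): the wrapper is total by construction, whatever the wrapped search does.

```python
from typing import Callable, Optional, Tuple, TypeVar

T = TypeVar("T")

def run_with_fuel(step: Callable[[T], Optional[T]], state: T, fuel: int) -> Tuple[bool, T]:
    """Iterate `step` until it returns None ("done") or the fuel runs out.

    This wrapper always terminates, whatever `step` does, because fuel
    strictly decreases on every iteration."""
    while fuel > 0:
        nxt = step(state)
        if nxt is None:
            return True, state
        state, fuel = nxt, fuel - 1
    return False, state

# Toy usage: an unbounded search (no square ends in the digit 2, so it never
# succeeds), capped at 10_000 steps instead of being trusted to halt.
print(run_with_fuel(lambda n: None if str(n * n).endswith("2") else n + 1, 0, 10_000))
# (False, 10000)
```

Inductive/coinductive type discipline plays the same role statically; the fuel version is the dynamic fallback for the cases we can't prove.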
As Schmidhuber put it regarding Goedel Machines: there may be very rare situations in which some part of reality is just Platonically, ontologically incomputable, but a human being is bound by the same laws of entropy as anything else, so the Platonic, ontological level of things doesn’t really matter for purposes of doing real-world math or AI. (We find it convenient to pretend it does matter, but actually we only care about the theorems we can demonstrate to be true among real-world human beings (which include Goedel Statements for Peano Arithmetic) rather than about Platonic Forms of Pure Randomness.)
> Think more like an inductive human being than a deductive formal system: “x doesn’t ever halt” would require proof, but “x really seems not to halt for a very long time and seems to contain an infinitely growing recursion” gives us good reason to believe “x doesn’t halt”, and guides us in trying to assemble a proof.
Busy beavers seem not to halt for a very long time, spend most of that time in what looks like an infinitely growing recursion, and then they halt. (The best known 5-state busy beaver runs for 47,176,870 steps before it finally halts.)
You could argue that busy beavers are pathological examples of little practical interest, but then many programs that perform useful computation may run for a very long time without entering easily identifiable “loops” before they halt. Unless you designed the program yourself and have a good understanding of what it is doing (and sometimes even then), proving termination may not be trivial.
> Unless you designed the program yourself and have a good understanding of what it is doing (and sometimes even then), proving termination may not be trivial.
If it were a trivial problem, it wouldn’t be worth tackling.
My point is that the observation that a program has been running for a long time and doesn’t have simple “while(true) {...}” loops doesn’t give us much information about its termination.
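One concrete illustration of that point, written out in Python (a standard example; the helper names are mine): nothing in the program below can loop forever on its own, every individual check is bounded, and the search has in effect already been carried past roughly 4e18 (the reported verified range for Goldbach's conjecture) without stopping, yet whether it ever stops is exactly the conjecture itself.

```python
from itertools import count

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:          # bounded: at most sqrt(n) iterations
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample() -> int:
    """Return the first even number >= 4 that is NOT a sum of two primes.

    Halts if and only if Goldbach's conjecture is false."""
    for n in count(4, 2):      # every even number in turn; each check is bounded
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n

# goldbach_counterexample()    # has "run" past the verified range without halting
```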
But I’m not sure what you are aiming at. For most practical purposes, a program that takes a very long time before producing a result is as good as a program that will never produce a result. Termination is important in theoretical computer science as the prototypical undecidable property used to prove other non-existence results, but if you are concerned about specific programs that you plan to actually run on physical machines, why are you interested in termination?