A quick sketch of how the Curry-Howard Isomorphism appears to connect Algorithmic Information Theory with ordinal logics
The following is carried on from a recent comments thread, where I was saying I wasn’t going to talk about what I’m thinking until I’d spent the time to fully formalize it. Well, Luke got interested in it, I spewed the entire sketch and intuition to him, and he asked me to put it up where others can participate. So here it is.
Algorithmic Information Theory, as started by Solomonoff and Kolmogorov and then continued by Chaitin, contains a theorem called Chaitin’s Incompleteness Theorem, which says (in short, colloquial terms) “you can’t prove a 20kg theorem with 10kg of axioms”. Except it says this in fairly precise mathematical terms, all of which are based on the undecidability of the Halting Problem. To possess “more kilograms” of axioms is mathematically equivalent to being able to computationally decide the halting behavior of “more kilograms” of Turing machines, or to be able to compress strings to smaller sizes.
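(For reference, a standard way to state the bound, paraphrased from memory rather than in Chaitin’s exact formulation: for any consistent, recursively axiomatizable theory $T$ that can talk about Kolmogorov complexity $K$, there is a constant $L_T$, depending only on $T$, such that

$$T \nvdash \; K(s) > L_T \quad \text{for every string } s,$$

even though $K(s) > L_T$ is in fact true for all but finitely many strings $s$.)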
Now consider the Curry-Howard Isomorphism, which says that logical systems viewed as computation machines and logical systems viewed as mathematical logics are, in certain precise ways, the same thing. And consider ordinal logic, as started in Turing’s PhD thesis, which begins with ordinary first-order logic and extends it with axioms saying “first-order logic is consistent”, “first-order logic extended with the previous axiom is consistent”, and so on, all the way up to the limiting countable ordinal omega (and then, I believe but haven’t checked, further into the transfinite ordinals).
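(A minimal sketch of the progression being described, writing $T_0$ for the base theory and $\mathrm{Con}(\cdot)$ for a consistency statement:

$$T_{\alpha+1} = T_\alpha + \mathrm{Con}(T_\alpha), \qquad T_\lambda = \bigcup_{\alpha < \lambda} T_\alpha \ \text{ for limit ordinals } \lambda.)$$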
In a search problem with partial information, as you gain more information you close in on a smaller and smaller portion of your search space. Thus, Turing’s ordinal logics don’t violate Goedel’s Second Incompleteness Theorem: they specify more axioms, and therefore specify a smaller “search space” of models that are, up to any finite ordinal level, standard models of first-order arithmetic (and therefore genuinely consistent up to precisely that finite ordinal level). Goedel’s Completeness Theorem says that a sentence of a first-order theory/language is provable iff it is true in every model of that first-order theory/language. The clearest, least mystical presentation of Goedel’s First Incompleteness Theorem is: nonstandard models of first-order arithmetic exist, in which Goedel Sentences are false. The corresponding statement of Goedel’s Second Incompleteness Theorem follows: nonstandard models of first-order arithmetic, which are inconsistent, exist. To capture only the consistent standard models of first-order arithmetic, you need to specify the additional axiom “First-order arithmetic is consistent”, and so on up the ordinal hierarchy.
Back to learning and AIT! Your artificial agent, let us say, starts with a program 10kg large. Through learning, it acquires, let us say, 10kg of empirical knowledge, giving it 20kg of “mass” in total. Depending on how precisely we can characterize the bound involved in Chaitin’s Incompleteness Theorem (he just said, “there exists a constant L which is a function of the 10kg”, more or less), we would then have an agent whose empirical knowledge enables it to reason about a 12kg agent. It can’t reason about the 12kg agent plus the remaining 8kg of empirical knowledge, because that would be 20kg and it’s only a 20kg agent even with its strongest empirical data, but it can formally prove universally-quantified theorems about how the 12kg agent will behave as an agent (ie: its goal functions, the soundness of its reasoning under empirical data, etc.). So it can then “trust” the 12kg agent, hand its 10kg of empirical data over, shut itself down, and “come back online” as the new 12kg agent, learning from the remaining 8kg of data, thus becoming a smarter, self-improved agent. The hope is that the 12kg agent, possessing a stronger mathematical theory, can generalize more quickly from its sensory data, thus accumulating empirical knowledge more quickly and generalizing more precisely than its predecessor, speeding it through the process of compressing all available information provided by its environment and approaching the reasoning power of something like a Solomonoff Inducer (ie: one with a Turing Oracle to give accurate Kolmogorov complexity numbers).
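(To make the arithmetic above concrete, here is a toy Haskell sketch of the handoff bookkeeping. Everything in it is an illustrative assumption: “mass” is just a number, and verifiableMass hard-codes a made-up linear form of the Chaitin-style bound, chosen purely to reproduce the 10kg + 10kg → 12kg figures.)

```haskell
-- Toy model only: "kg" of mass stands in for information content.
data Agent = Agent
  { axiomMass :: Double  -- mass of the formal theory / program
  , dataMass  :: Double  -- mass of accumulated empirical knowledge
  } deriving (Show)

-- Assumed, made-up shape of the Chaitin-style bound: the largest successor
-- this agent can prove universally-quantified theorems about. The 0.6 factor
-- is chosen only so that a (10 + 10)kg agent can verify a 12kg successor.
verifiableMass :: Agent -> Double
verifiableMass (Agent ax dat) = 0.6 * (ax + dat)

-- The handoff: adopt the largest verifiable successor as the new formal
-- theory, and carry the leftover empirical data across to re-learn from.
handOff :: Agent -> Agent
handOff a@(Agent ax dat) = Agent successor (max 0 (ax + dat - successor))
  where successor = verifiableMass a

-- ghci> handOff (Agent 10 10)
-- Agent {axiomMass = 12.0, dataMass = 8.0}
```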
This is the sketch and the intuition. As a theory, it does some very convenient work: it explains why we can’t solve the Halting Problem in general (we do not possess correct formal systems of infinite size with which to reason about halting), but also why we appear to be able to solve it in so many of the cases we “care about” (namely: we are reasoning about programs small enough that our theories are strong enough to decide their halting behavior, and we keep discovering new formal axioms to describe our environment).
So yeah. I really have to go now. Mathematical input and criticism are very welcome; the inevitable clarifying questions from people confused about what’s going on will be answered eventually.
This doesn’t make sense. A theory is inconsistent if and only if it has no models. I don’t know what you mean by an “inconsistent model” here.
Actually, it stops at omega+1! Except there’s not a unique way of doing omega+1; it depends on how exactly you encoded the omega. (Note: this is not something I have actually taken the time to understand beyond what’s written there at all.)
It straightforwardly means that I don’t know very much model theory and interpreted my limited reading in an incorrect way. It appears that I mean a nonstandard model of Peano Arithmetic which is syntactically consistent but which fails to be omega-consistent. Only omega-consistent logics maintain an exact isomorphism between their Goedel Numberings of theorems and actual theorems, AFAIK.
I think the OP means a model of the assertion that arithmetic is inconsistent. It still seems weirdly written, though. (E.g., I don’t know how to interpret “up to any finite ordinal level, standard models of first-order arithmetic”.)
We also appear not to be able to solve the halting problem for many small programs we care about, or at least we haven’t been able to solve it yet.
The simplest example is probably the Collatz conjecture, but in general any mathematical conjecture can be reformulated as determining whether some small program that enumerates and checks all candidate proofs halts.
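(For concreteness, here is a tiny Haskell function whose halting on every positive input is exactly the Collatz conjecture. This is the direct formulation rather than the proof-search one, but the point is the same: a very small program, and nobody knows whether it always terminates.)

```haskell
-- Counts the steps needed to reach 1. Whether this terminates for every
-- positive n is precisely the (open) Collatz conjecture.
collatzSteps :: Integer -> Integer
collatzSteps 1 = 0
collatzSteps n
  | even n    = 1 + collatzSteps (n `div` 2)
  | otherwise = 1 + collatzSteps (3 * n + 1)
```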
I am not sure I understand this line of argument. I think you are saying that, while no Turing machine can solve the halting problem in general, there is a Turing machine of a length no greater than F(n) (where F is an as-of-yet unspecified function) that can solve the halting problem for any Turing machine whose length is ≤ n.
If so, it seems to me that this observation is trivially true; there are only ~2^n Turing machines of length ≤ n (assuming a two-symbol alphabet), so there must be a Turing machine with a length proportional to 2^n that can solve the halting problem for Turing machines of length ≤ n, simply via table lookup.
While trivially true, this does not really solve the halting problem, because:
The halting problem requires a general solution for arbitrarily large Turing machines, and
While it is clear that a Turing machine exists that can solve the halting problem for Turing machines of length no greater than n, there is no general way to find that Turing machine for an arbitrary value of n.
How does it do so via table look-up? Without running the machines, how can it form such a table?
If I were claiming to be able to solve the Halting Problem in general circumstances, as in actually solve the Halting Problem, I would be speaking nonsense. Glad we’re on the same page.
Which requires a formal theory of countably infinite Kolmogorov complexity (ie: infinite expected thermodynamic entropy), yes.
My understanding of Chaitin’s Theorem is that the F(n) (which he calls L) places an upper bound on the size of strings that can be proven to be algorithmically random. I have been hoping to find the time to explore the proof of this theorem in detail, so as to hopefully yield a construction telling us exactly how large a halting problem (equivalently: a string randomness problem) a given theory can solve.
As to where we obtain such a theory: giving a useful answer requires, in my view, a more thorough exploration of AIT (and, potentially, ordinal logic). My current maximum-probability estimate is that any sufficiently large consistent theory capable of talking about compressing strings, or about machines that halt, can do the job; I believe this because Chaitin appears to say as much in his own description of his proofs, and because it makes intuitive sense, given the universality of any one formulation of Turing-complete, oracle-free computation.
The core idea is that every theory can solve halting problems up to some size, so what we actually have to do is keep learning larger theories until the halting problems of interest become solvable. AIT, and the use of real-life compressors as upper bounds on Kolmogorov complexity, should yield ways to apply the theory to real programs, at least sometimes.
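(As a concrete illustration of the “compressors give upper bounds” point, here is a minimal Haskell sketch using a deliberately naive run-length encoder as a stand-in compressor; in practice you would use a real compressor such as zlib. The only fact being used is that the length of any lossless encoding, plus a constant for the decoder, upper-bounds Kolmogorov complexity.)

```haskell
import Data.List (group)

-- A deliberately naive lossless "compressor": run-length encoding.
rle :: String -> [(Char, Int)]
rle = map (\g -> (head g, length g)) . group

-- Its inverse, to check that no information is lost.
unRle :: [(Char, Int)] -> String
unRle = concatMap (\(c, n) -> replicate n c)

-- Size of the encoded form, in characters (a crude proxy for bits).
encodedSize :: String -> Int
encodedSize = length . concatMap (\(c, n) -> c : show n) . rle

-- Any lossless encoding gives an upper bound on Kolmogorov complexity,
-- up to the (unknown) additive constant for the decompressor:
--   K(s) <= encodedSize s + O(1).
kUpperBound :: String -> Int
kUpperBound = encodedSize

-- ghci> kUpperBound (replicate 1000 'a')  -- a highly compressible string
-- 5
```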
It doesn’t have to form the table; a Turing machine of a length on the order of 2^n can have the answer to the question “does this Turing machine halt?” built in (hard-coded) for all ~2^n Turing machines of length n or smaller. Such a Turing machine exists for any value of n, but finding this Turing machine is not possible for an arbitrary value of n. Every function on a finite domain is Turing computable, so the fact that a Turing machine exists that can solve the halting problem for all Turing machines of length n or less is not surprising.
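(A minimal Haskell sketch of the kind of object being described: a halting “decider” that is nothing but a hard-coded finite table. The table entries here are placeholders; producing the correct table for all machines up to some length is exactly what we have no general way to do, which is the point being made above.)

```haskell
import qualified Data.Map as Map

-- Identify a machine with its description string.
type MachineDesc = String

-- A hard-coded finite table: machine description -> "does it halt?".
-- Placeholder entries only; the correct contents are what we cannot
-- compute in general for arbitrary n.
haltingTable :: Map.Map MachineDesc Bool
haltingTable = Map.fromList
  [ ("0",  True)
  , ("1",  False)
  , ("00", True)
  ]

-- A "decider" for exactly the machines covered by the table, via pure
-- lookup: no simulation happens at query time.
haltsByLookup :: MachineDesc -> Maybe Bool
haltsByLookup m = Map.lookup m haltingTable
```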
My point is that the fact that a theory exists that can solve some size n of halting problem is trivially true (if by “theory” you mean more or less the same thing as “Turing machine”), but this fact is also useless, since there is no way to discover that theory (Turing machine) for an arbitrary value of n.
Oh, you mean classical existence. Yes, it is classically trivial that there exists some Turing machine blah blah blah, simply by the fact that classically, a Turing machine must either halt in finite time or never halt. However, for purposes of computability theory and algorithmic information theory, such an existence statement is useless.
My apologies, but are you making the statement just to be extra Platonist about these things?
And I am speaking in a constructive sense of existence, intending to say that we can in fact discover such theories: by learning our way up the ordinal hierarchy, so to speak.
That is my point—stating that a function that maps a finite domain onto {0,1} is computable is trivially true and (as you said) useless. I was not really trying to be extra Platonist; I was just trying to point out that these sorts of finite mapping functions are not really interesting from a computability theory standpoint.
If you can formalize how you will discover such theories (i.e. how the learning our way up the ordinal hierarchy part will work), I’ll be interested in seeing what you come up with.
That’s the tough bit.
I’d like to ask a not-too-closely related question, if you don’t mind.
A Curry-Howard analogue of Gödel’s Second Incompleteness Theorem is the statement “no total language can embed its own interpreter”; see the classic post by Conor McBride. But Conor also says (and it’s quite obviously true) that total languages can still embed their own coinductive interpreters, for example one that returns in the partiality monad.
So, my question is: what is the logical interpretation of a coinductive self-interpreter? I feel I’m not well-versed enough in mathematical logic for this. I imagine it would have a type like “forall (A : ’Type) → ’Term A → Partiality (Interp A)”, where ’Type and ’Term denote types and terms in the object language and “Interp” returns a type in the meta-language. But what does “Partiality” mean logically, and is it in any way related to the Löbstacle?
The logical interpretation of a coinductive self-interpreter is something that says, roughly, “If you keep feeding me computing power, someday I might self-interpret!”. And indeed, if you eventually feed it enough compute-power, it will self-interpret. You just wouldn’t be able to prove ahead of time that it does so.
“Partiality” in PL terms means “the type contains \Bot, the Bottom element”. The Bottom element is the type-theoretic element for representing nontermination.
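(A minimal Haskell sketch of one standard way to model Partiality, the coinductive Delay type; this is an illustration of the general idea, not the specific construction from the linked post. A value is either available now or available after one more step, possibly forever, and “feeding it computing power” corresponds to supplying a step budget.)

```haskell
-- The coinductive Delay ("partiality") type: a result now, or after one more step.
data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now x)   = Now (f x)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure = Now
  Now f   <*> d = fmap f d
  Later f <*> d = Later (f <*> d)

instance Monad Delay where
  Now x   >>= k = k x
  Later d >>= k = Later (d >>= k)

-- The representation of nontermination: a value that is never available.
never :: Delay a
never = Later never

-- "Feed it computing power": run for at most n steps.
runFor :: Int -> Delay a -> Maybe a
runFor _ (Now x)   = Just x
runFor 0 (Later _) = Nothing
runFor n (Later d) = runFor (n - 1) d
```

In a total language the same definitions are accepted with Delay taken coinductively; the meta-level functions stay total, and only the step budget ever “runs out”.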
If you can construct \Bot-the-element, you can prove \Bot-the-type, which is also called False, and implies all other propositions and types by the Principle of Explosion. The Second Incompleteness Theorem roughly says:
“In order to self-verify, you must prove that your proof procedure implements the logic you believe, in an infinite number of cases. Normally you could try to do that by induction, but since you only have a finite-sized axiom scheme to go on, while your language is Turing-complete, you will never have enough information to verify every possible proof of every possible theorem via inductive application of your axioms, so you’ll have to check some ‘by hand.’ That would take infinite time, so you’ll never finish. Or, if you presume that you did finish, that means you proved \Bot and thus obtained all other propositions by Explosion.”
LoL!
One part I’m not clear on is how the empirical knowledge works. The equivalent of “kilograms of mass” might be something like bits of Chaitin’s omega. If you have n bits of Chaitin’s omega, you can solve the halting problem for any Turing machine of length up to n. But, while you can get lower bounds on Chaitin’s omega by running Turing machines and seeing which halt, you can’t actually learn upper bounds on Chaitin’s omega except by observing uncomputable processes (for example, a halting oracle confirming that some Turing machine doesn’t halt). So unless your empirical knowledge is coming from an uncomputable source, you shouldn’t expect to gain any more bits of Chaitin’s omega.
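(For reference, the quantity under discussion, with $U$ a fixed prefix-free universal machine:

$$\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|},$$

and the standard fact in use is that the first $n$ bits of $\Omega_U$ suffice to decide halting for every program of length at most $n$.)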
In general, if we could recursively enumerate all non-halting Turing machines, then we could decide whether M halts by running M in parallel with a process that enumerates non-halting machines until finding M. If M halts, then we eventually find that it halts; if it doesn’t halt, then we eventually find that it doesn’t halt. So this recursive enumeration will give us an algorithm for the halting problem. I’m trying to understand how the things you’re saying could give us more powerful theories from empirical data without allowing us to recursively enumerate all non-halting Turing machines.
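(A minimal Haskell rendering of the dovetailing argument in the previous paragraph. Both inputs are assumptions supplied from outside: haltsWithin is an ordinary step-bounded simulator, while nonHalters is the hypothesized enumeration of all non-halting machines, which cannot actually exist; that impossibility is the point of the reductio.)

```haskell
-- Decide halting by interleaving two searches: simulate m for ever more
-- steps, while scanning ever more of the (hypothetical) enumeration of
-- non-halters. If m halts, the first search eventually succeeds; if it
-- doesn't, the second eventually finds it in the enumeration.
decideHalts
  :: Eq machine
  => (Int -> machine -> Bool)  -- haltsWithin k m: does m halt within k steps?
  -> [machine]                 -- hypothetical enumeration of ALL non-halting machines
  -> machine
  -> Bool
decideHalts haltsWithin nonHalters m =
  head [ verdict
       | k <- [0 ..]
       , Just verdict <- [ if haltsWithin k m
                             then Just True
                             else if m `elem` take k nonHalters
                                    then Just False
                                    else Nothing ] ]
```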
That is the domain of things which I am referring to, yes.
This is where things get fuzzier. I think that by routing through logic and observing the empirical consequences of ordinal consistency axioms, we could learn to believe those axioms and thus eventually climb the ordinal hierarchy, gaining knowledge of the halting behavior of increasingly many Turing machines. However, my original idea was much plainer: just do a kind of Solomonoff Induction over Turing machines encoding axiom sets.
Even empirical data allows us to enumerate only a finite number of non-halting Turing machines; it does not let us decide the halting behavior of Turing machines too large for our empirical information.
So, you can compress a list of observations about which Turing machines halt by starting with a uniform prior over Chaitin’s omega. This can lead to quite a lot of compression: the information about whether each of the first n Turing machines halts consists of n bits, but only requires about log(n) bits of Chaitin’s omega. If we saw whether more Turing machines halted, we would also uncover more bits of Chaitin’s omega. Is this the kind of thing you are thinking of?
I guess there’s another question of how any of this makes sense if the universe is computable. We can still use information about which Turing machines halt as part of our generative model for a computable universe, even though “x doesn’t halt” is never actually observed.
Perhaps you could make a statement like: Solomonoff induction wins on computable universes for the usual reason, and it doesn’t lose too many bits on uncomputable universes in some circumstances because it does at least as well as something that has a uniform prior over Chaitin’s omega.
Think more like an inductive human being than a deductive formal system: “x doesn’t ever halt” would require proof, but “x really seems not to halt for a very long time and seems to contain an infinitely-growing recursion” gives us good reason to believe “x doesn’t halt”, and guides us in trying to assemble a proof.
(Likewise, as a side note, it can actually be quite difficult to prove “x does halt (on all inputs)”, because merely being observed to halt on many inputs is no guarantee at all. Unless the looping/recursion is trivially structural on the input, proving “x does halt” to a type system’s termination checker can be quite difficult!)
I think that considering “computable” or “uncomputable” to be ontological properties of universes is… contrary to the philosophical viewpoint being taken here? At least from my point of view here, you’re not breaking an Ontological Law of the Platonic Plane of Maths if you “decide the undecidable”; you’re breaking a law of thermodynamics (namely, you’re trying to act as if you possessed infinite entropy with which to compute). It’s physically impossible, not Platonically impossible.
If I may invoke a phrase, I’m basically taking the point of view that a formal system is a map of some mathematical territory (which Turing machines halt), but that it is possible to have increasingly detailed and accurate maps (systems with stronger axioms) of the same territory (increasing the quantity of Turing machines whose behavior we can decide). The hard nut to crack is: how do we acquire better maps?
Further, the universe doesn’t necessarily have to be computable in any ontological sense. To talk about the physical, empirical universe, we only need to make predictions about events which take place a finite period of time into the future. If we’re composing programs whose termination we want to guarantee, we can use inductive and coinductive type systems to guarantee termination and stepwise termination where we need them, and add hard time limits where we can’t be sure.
As Schmidhuber put it regarding Goedel Machines: there may be very rare situations in which some part of reality is just Platonically, ontologically incomputable, but a human being is bound by the same laws of entropy as anything else, so the Platonic, ontological level of things doesn’t really matter for purposes of doing real-world math or AI. (We find it convenient to pretend it does matter, but actually we only care about the theorems we can demonstrate to be true among real-world human beings (which includes Goedel Statements for Peano Arithmetic) rather than about Platonic Forms of Pure Randomness.)
Busy beavers seem not to halt for a very long time and spend most of their time in a seemingly infinitely-growing recursion, and then they halt.
You could argue that busy beavers are pathological examples of little practical interest, but then many programs that perform useful computation may run for a very long time without entering easily identifiable “loops” before they halt.
Unless you designed the program yourself and have a good understanding of what it is doing, and sometimes even if you designed the program yourself, proving termination may not be trivial.
If it were a trivial problem, it wouldn’t be worth tackling.
My point is that the observation that a program has been running for a long time and doesn’t have simple “while(true) {...}” loops doesn’t give us much information about its termination.
But I’m not sure what you are aiming at.
For most practical purposes, a program that takes a very long time before producing a result is as good as a program that will never produce a result. Termination is important in theoretical computer science as the prototypical undecidable property used to prove other non-existence results, but if you are concerned about specific programs that you plan to actually run on physical machines, why are you interested in termination?
I’m not sure I understand what you mean here.
Assume we have a formal system and we are using it to predict observations. In classical logic, if the formal system is inconsistent, then, ex falso quodlibet, all possible observations will be allowed by it.
I think the grandparent silently assumes we’ve solved logical uncertainty, and can use this to decide an axiom does not produce a contradiction by observing the lack of explosion. Though “routing through logic” seems like an incredibly bad way to say that.
Wouldn’t that imply having solved undecidability?
I spoke imprecisely, or at least the OP argues that I did. It assumes we have a progressive solution to undecidability/logical uncertainty that can take us pretty far, eventually, but does not allow self-knowledge at any time.
No. We don’t decide an axiom does not produce a contradiction by observing the lack of explosion. We just raise the probability of noncontradiction by observing the lack of explosion.
EDIT: Axiom, not acronym. Holy God I need more sleep.
-_-. The post itself was a sketch, not a finished, formalized publication. Please refrain from being anal about phrasings or imagining that I’ve done far more math than I really have. If I want to be precise, I’ll just use symbols—that’s what they’re for.
I think this process is pointless: if the 10kg+10kg agent can reason about, and faithfully prove things about, the 12kg agent, that means it’s already more powerful than the 12kg agent, and so there’s no reason to upgrade to heavier agents.
Indeed, if the 10kg agent already believes the empirical knowledge enough to trust derivations made with it, then it has already upgraded to a 20kg agent; the upgrade phase was just hidden under the phrase “through learning”.
You can already see this in Bayesian updating: if you have prior knowledge constraining a parameter to an interval 1/10 long, and you acquire empirical data that ends up constraining the parameter to an interval 1/20 long, you are in exactly the same spot you would have been in if you had started from prior knowledge constraining the parameter to an interval 1/20 long.
Within the metaphor, that is true, but the metaphor of mass is very leaky here. The original 10kg agent, no matter how much empirical data it learns, can only learn at a certain rate (at least, if it works like current AI designs), because its compression algorithm is only so good. It is limited in the speed with which it can learn from data, in its ability to generalize, and in its ability to prove things as logical theorems rather than merely believe them as empirically supported.
That is why it wants to build the 12kg agent in the first place: the “larger” agent can compress data more efficiently, and thus requires a lower sample complexity to learn further correct conclusions (on top of the existing 8kg of empirical data).
I can think of other ways for AI to work (and am pursuing those, which include building increasingly sophisticated little Prisoners’ Dilemma bots that achieve reliable cooperation in the one-shot game), but even those, if we wanted them to self-improve, would eventually have to reason about smaller versions of themselves, which leaves us right back at algorithmic information theory, trying to avoid the paradox theorems that arise when you reason about something about as complex as yourself.