Gödel Incompleteness: For Dummies

The Gödel incompleteness theorems are notorious for being surprising. I mean, they did put a hard stop to Hilbert’s programme of completely, thoroughly formalizing mathematics.
And most laypeople find them very difficult to wrap their heads around.
It is, however, my belief that nothing should ever be unintuitive, that intuition, like conscious beliefs, can be trained. The territory is never weird, but your brain certainly can be.
My goal is not to give an intuitive explanation of the actual proof, nor even the actual statements, but to give you intuitive evidence that the statements make sense. The core idea is actually from E. T. Jaynes’ Probability Theory; I’m only fleshing it out.
First, let’s look at the statement of the second theorem, as cited on Wikipedia:
Assume F is a consistent formalized system which contains elementary arithmetic. Then F ⊬ Cons(F).
That is, F does not prove its own consistency.
Let’s unwrap some of the definitions.
By “formalized system”, we mean a bunch of statements that you could (assuming the Church-Turing thesis) fully enumerate, either with a computer, or, with a bit of patience, pencil and paper. Mostly you have a bunch of starting axioms and a few rules to produce more statements from them, which means you can just apply all of them in turn to produce all of the statements.
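To make that enumeration picture concrete, here is a minimal Python sketch. The system itself is invented purely for illustration (one axiom, `0 = 0`, and one rule that wraps both sides of an equation in a successor `S(...)`); it stands in for whatever axioms and inference rules a real system has:

```python
import itertools
from collections import deque

def enumerate_theorems(axioms, rules):
    """Yield every derivable statement, breadth-first, without repeats."""
    seen, queue = set(), deque(axioms)
    while queue:
        stmt = queue.popleft()
        if stmt not in seen:
            seen.add(stmt)
            yield stmt
            for rule in rules:            # apply every rule in turn...
                queue.extend(rule(stmt))  # ...queueing whatever they produce

# A toy system, invented purely for illustration:
# one axiom, "0 = 0", and one rule: from "a = b", derive "S(a) = S(b)".
def successor_rule(stmt):
    left, sep, right = stmt.partition(" = ")
    return [f"S({left}) = S({right})"] if sep else []

for theorem in itertools.islice(enumerate_theorems(["0 = 0"], [successor_rule]), 3):
    print(theorem)  # 0 = 0, then S(0) = S(0), then S(S(0)) = S(S(0))
```

Any statement the system can derive will eventually come out of this loop; that, plus enough patience, is all “formalized” means here.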
By a “consistent” system, we mean one that’s not completely insane. It doesn’t believe anything contradictory. This has wider implications. In fact, if your system proves just one contradiction, it actually proves everything! Why? Because if you once tell a lie, the truth is ever after your enemy.

And so, suppose we have a system—no, a guy, standing in front of you, who is currently telling you that he is not, in fact, insane. Perhaps he tells you that he believes in the latest conspiracy theory surrounding Ghislaine Maxwell and Wayfair, and that you must also believe in it, since he trusts his own epistemic process (he believes that he is not, in fact, insane).
Naturally, you would be skeptical. An insane person can very well not realize that they are insane. Perhaps the guy has a habit of believing in absolutely everything, even contradictory things (he’s a pretty inconsistent dude), in which case of course he would in particular believe that he’s sane, though if you press him he might also tell you that he’s lost his marbles (perhaps by digging too deep into a certain paranormal imageboard).
Well, now you understand Gödel’s second incompleteness theorem. Since an inconsistent formal system would prove its own consistency anyway, the fact that F ⊢ Cons(F) would not be evidence that F is in fact consistent.
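That last step, an inconsistent system proving everything (its own consistency included), is the principle of explosion from the definition above, and it is short enough to check mechanically. A minimal sketch in Lean 4, where P stands for the contradicted statement and Q for anything at all:

```lean
-- Principle of explosion: from a contradiction, anything follows.
-- `absurd` takes a proof of P and a proof of ¬P and yields any Q you like,
-- so an inconsistent system proves every Q, Cons(F) included.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```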
EDIT: I realize that this is not, directly, evidence that F cannot in fact prove its own consistency. However, this argument should fully dissolve the question, as it then becomes completely meaningless for F to say it believes in itself. Still, if F is sufficiently self-aware, it should recognize that a model of itself should not be trusted either, and so it cannot prove its own consistency.
Now, let’s look at the statement of the first theorem, again as cited on Wikipedia:
Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.
This is rather simple, now that we have the second incompleteness theorem.
First off, since system F can perform arithmetic, it can do anything a computer can do, or a mathematician with pencil, paper, and too much time on their hands (an infinite amount, in principle). And since we assumed that F can be worked out by a computer, it can work itself out.
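Self-reference by machine is not exotic, by the way. The classic toy demonstration is a quine, a program whose output is exactly its own source. A standard Python one:

```python
# A quine: the two lines below, run on their own, print themselves exactly.
# The %r placeholder splices in the string's own quoted representation,
# which is how the program comes to contain a full description of itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```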
You know what else is capable of self-reference? You, dear reader.
Well, suppose then that you want to be sane, that you want to trust your epistemic process. Then, naturally you can’t believe that you’re insane. And yet you can’t believe that you’re sane either (who would believe you after they’ve learned about the second incompleteness theorem? Have some modesty!), for if you trust yourself too much, you risk becoming complacent. And thus you are doomed to forever doubt whether or not you are sane.
And therefore, you can neither believe nor disbelieve in your sanity.
For system F, obviously it can’t disprove its own consistency, since we do expect its model of itself to be accurate. And yet we just saw above that it can’t prove its own consistency either, and so the statement of its own consistency is internally undecidable.
The general strategy above has been to cross the subject-object distinction. The “subject” bit is the part of your brain which deals with people, and the “object” bit the part which deals with objects and machines.
People often (claim to) use only one of them depending on the task. But why not make them join forces?
We find it amusing when mathematical objects are framed as being people—however, when people are framed as being machines, it is generally perceived to be absolutely horrifying. I say, let yourself be more often both amused and horrified.