I have set out to fully and intuitively understand Löb’s theorem.
I have found an answer that makes sense to my intuition. Not sure if it will come across, as I am a very non-neurotypical person who really thinks like a space alien sometimes.
Löb’s theorem, the standard/explicit formulation (for reference)
(for any formula P)
If it is provable in PA that “if P is provable in PA then P is true”, then P is provable in PA.
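The same statement in standard provability-logic notation may be a useful reference point (the box symbol is conventional shorthand, not something from this post):

```latex
% \Box P abbreviates "P is provable in PA".
% Löb's theorem, for any formula P:
\Box(\Box P \rightarrow P) \;\rightarrow\; \Box P
```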
Löb’s theorem, intuitive expression, version 2.0
If I am a provably logically coherent entity
and I promise that “if I promise that you can trust me, then you can trust me”, then you can trust me.
I am tentatively calling the concept “meta-trust”. With that in mind, an even shorter formulation:
If you meta-trust me, then you trust me.
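One tentative way to line this shorthand up with the formal statement (my own mapping, offered as a sketch rather than a claim about the author's intent):

```latex
% Read "I promise X" as \Box X (X is provable in PA),
% and "you can trust me about P" as \Box P \to P
% (whatever I promise is actually true).
\mathrm{MetaTrust}(P) \;:=\; \Box(\Box P \rightarrow P)
% Löb's theorem then says meta-trust about P
% yields an outright promise of P:
\mathrm{MetaTrust}(P) \;\rightarrow\; \Box P
```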
I’m not sure I can explain in explicit terms why my intuition feels this is true. I mean, it just seems obvious? But I can’t explain yet why it is obvious.
And the correspondence between the two versions of the theorem is not 1:1. It uses a somewhat different formulation / type of concept. It uses a different kind of cognitive machinery. Even so, it feels “obvious” to me that the statements are exactly equivalent.
Well, not exactly, because intuition never thinks in 100% exact terms. But it can think very exactly, just not infinitely so.
So, my question to you is:
Does this formulation make intuitive sense to you?
There are probably inference steps I have skipped. What are the inference steps that need to be expanded on, to make the formulation more broadly understandable, without relying on some unspoken background knowledge?
And most importantly: if you think that this formulation is logically incorrect, I would very much like to hear how it is incorrect.
Thank you for reading!
What does “logically coherent” mean?
Good question!
I’m thinking something like, why do we trust PA?
It is because it is:
Logically consistent
Never lies (i.e., its outputs are the same as its conclusions)
And I’m thinking something like that, only in a person.
Well, it doesn’t have to be a person, but my intuition prefers to think in terms of personhood.
“It is provable in PA that X is true” translates to “I am a logically coherent entity and I am telling you that X is true”
What do you mean by “its outputs are the same as its conclusions”? If I had to guess I would translate it as “PA proves the same things as are true in every model of PA”. Is that right?
No I just literally mean it cannot lie.
A human can hold one model of the world in his mind, but say something that fits with a different model of the world. A lie.
PA does not do that.
So I’m making it clear that I’m talking about entities like PA which do not (or cannot) lie.