I think I’m missing something with the Löb’s theorem example.
If [A()=5⇒U()=5]∧[A()=10⇒U()=0] can be proved under the theorem, then can’t [A()=10⇒U()=10]∧[A()=5⇒U()=0] also be proved? What’s the cause of the asymmetry that privileges taking $5 in all scenarios where you’re allowed to search for proofs for a long time?
Agreed. The asymmetry needs to come from the source code for the agent.
In the simple version I gave, the asymmetry comes from the fact that the agent checks for a proof that x>y before checking for a proof that y>x. If this were reversed, then as you said, the Löbian reasoning would make the agent take the 10 instead of the 5.
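To make the ordering concrete, here's a minimal sketch of that simple agent. The `prove` function and the statement strings are illustrative stand-ins for a real bounded proof search over the agent's source code, not an actual prover:

```python
def agent(prove):
    """Toy 5-and-10 agent. `prove` stands in for a bounded search
    for a proof of the given statement about the agent itself."""
    # The asymmetry lives here: the agent looks for a proof
    # favoring the $5 *before* it looks for one favoring the $10.
    if prove("[A()=5 => U()=5] and [A()=10 => U()=0]"):
        return 5
    if prove("[A()=10 => U()=10] and [A()=5 => U()=0]"):
        return 10
    return 10  # fallback if neither proof is found in time

# Löb's theorem lets the first search succeed for a self-referential
# agent; we mimic that outcome here with a stub prover.
def stub_prove(statement):
    return statement.startswith("[A()=5")

print(agent(stub_prove))  # -> 5
```

Swapping the two `if` branches (with the same stub) flips the outcome to 10, which is the reversal described above.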
In a less simple version, this could be implicit in the proof search procedure. For example, the agent could wait for any proof of the conclusion x>y or y>x, and make a decision based on whichever happened first. Then there would not be an obvious asymmetry. Yet, the proof search has to go in some order. So the agent design will introduce an asymmetry in one direction or the other. And when building theorem provers, you’re not usually thinking about what influence the proof order might have on which theorems are actually true; you usually think of the proofs as this static thing which you’re searching through. So it would be easy to mistakenly use a theorem prover which just so happens to favor 5 over 10 in the proof search.
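The "whichever proof arrives first" variant can be sketched the same way. Here the enumeration order of the proof stream, a hypothetical stand-in for a real prover's internal search order, carries the asymmetry instead of the agent's own if-statements (the mapping of x>y to taking the $5 follows the convention above):

```python
def agent_first_proof(proof_stream):
    """Act on whichever conclusion the proof search yields first.
    `proof_stream` iterates over proved conclusions in the order
    the (hypothetical) prover happens to find them."""
    for conclusion in proof_stream:
        if conclusion == "x>y":   # proved: the $5 branch looks better
            return 5
        if conclusion == "y>x":   # proved: the $10 branch looks better
            return 10
    return 10  # fallback if nothing is proved in time

# Two provers searching through the same proofs in different orders:
prover_a = iter(["x>y", "y>x"])   # happens to reach x>y first
prover_b = iter(["y>x", "x>y"])   # happens to reach y>x first

print(agent_first_proof(prover_a))  # -> 5
print(agent_first_proof(prover_b))  # -> 10
```

The agent's code contains no visible preference between the two conclusions; the preference is smuggled in by whichever enumeration order the prover uses.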