Let’s say the agent proves two statements implying different consequences of the same action, such as
[agent()==2 ⇒ world2()==1000] and [agent()==2 ⇒ world2()==2000]. Then it can also prove that (agent()==2) implies a contradiction, which normally can’t happen.
Why not? (X implies a contradiction) implies (NOT X); that’s a standard reductio-ad-absurdum proof technique (a derived rule of propositional calculus). Or am I missing something here?
It does mean that [agent()==2] is false, but from a false antecedent the agent can then prove statements like [agent()==2 ⇒ world2()==U] for any value of U, which breaks our algorithm. If we use the Goedel diagonal (chicken) rule, performing an action whenever it is proven that this action won’t be performed, then no action can ever be proven impossible unless the reasoning system is compromised.
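The action-selection loop described above can be sketched as follows. This is a minimal toy, not the real construction: `proves` is a hypothetical oracle standing in for bounded theorem enumeration, statements are encoded as plain strings, and the names `chicken_agent`, `actions`, and `utilities` are illustrative assumptions.

```python
def chicken_agent(actions, utilities, proves):
    """Proof-search action selection with the Goedel diagonal ("chicken") rule.

    `proves(stmt)` is a hypothetical stand-in for a bounded theorem
    enumerator over statements encoded as strings.
    """
    # Chicken rule: if the prover shows some action will NOT be taken,
    # take it anyway.  For a consistent reasoning system this makes
    # [agent() != a] unprovable, blocking spurious proofs of
    # [agent() == a => world2() == U] for arbitrary U.
    for a in actions:
        if proves(f"agent() != {a}"):
            return a
    # Otherwise, for each action find the best utility U with a proof of
    # [agent() == a => world2() == U], and pick the argmax action.
    best = None
    for a in actions:
        for u in sorted(utilities, reverse=True):
            if proves(f"agent() == {a} => world2() == {u}"):
                if best is None or u > best[1]:
                    best = (a, u)
                break  # keep only the strongest proven consequence of a
    return best[0] if best is not None else actions[0]
```

With a mock prover backed by a fixed set of provable sentences, the agent picks the action with the best proven consequence; if the "prover" ever claims an action is impossible, the chicken rule takes that action instead.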