Er, I don’t think this is right. Löb’s theorem says that an agent cannot trust future copies of itself unless those future copies use strictly weaker axioms in their reasoning system.
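For reference, a sketch of the formal statement being invoked (in standard provability-logic notation, where □ abbreviates “provable in the agent’s theory T”):

```latex
% Löb's theorem: for any sentence $P$,
%   if $T \vdash \Box P \rightarrow P$, then $T \vdash P$.
% The internalized form, which is the one relevant to self-trust:
\Box(\Box P \rightarrow P) \rightarrow \Box P
% An agent that fully trusted a successor running the same theory
% would endorse $\Box P \rightarrow P$ for every $P$; by Löb's theorem
% it would then be forced to endorse every $P$, i.e. become
% inconsistent -- hence the need for a strictly weaker successor theory.
```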
The “can” has now been changed into “cannot”. D’oh!