Our agent treats proofs checked inside the boundaries of its own head differently from proofs checked somewhere in the environment.
Why is that a problem, except in the same sense that gravity is a problem when you want to lift something?
It’s just a fact that people and machines make mistakes. The ideal approach to results proved by other beings is to treat them as less than certain. Indeed, an ideal reasoner might want to extend this principle to its own thoughts too, if only a sensible way to do this could be found.
If you want an ideal reasoner to give the same degree of credence to conclusions it has derived itself as to conclusions derived by another reasoner (which is probably also ideal), then the ideal approach is for the first reasoner to doubt its own conclusions in the same way it doubts the other reasoner’s.
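As a toy way to state that symmetry condition (my notation, not the original’s):

\[
P\bigl(\varphi \mid \text{my proof of } \varphi\bigr)
\;=\;
P\bigl(\varphi \mid \text{their proof of } \varphi\bigr)
\;=\; r < 1,
\]

i.e. the reasoner assigns the same sub-certain reliability $r$ to a proof of $\varphi$ regardless of whose checker produced it.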
Throughout, I have meant “ideal” from a perspective of epistemic caution, where you value not believing wrong things. If you’re in a hurry, or really do trust certain other reasoners, then of course you can simply design a reasoner to treat their output as a priori valid.
The “tiling agents” issue mostly wasn’t relevant to this article, but it is the goal of getting around Löb’s Theorem: only if you can get around Löb’s Theorem and prove things about formal systems as complex as yourself can you construct a reasoning system more powerful than yourself and set it to work on your own task.
Otherwise, you can’t prove that an agent with a higher-order logic than yourself will function according to your goals or, put another way, retain your beliefs (i.e., I don’t want to replace myself with an agent who will reason that the sky is green when I’m quite sure it’s blue).
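For reference, Löb’s Theorem says that for any sentence $P$, if a theory $T$ containing enough arithmetic proves $\Box P \rightarrow P$, where $\Box P$ abbreviates the arithmetized claim “$P$ is provable in $T$,” then $T$ proves $P$ itself:

\[
T \vdash \Box P \rightarrow P
\quad\Longrightarrow\quad
T \vdash P.
\]

So a consistent system cannot prove the blanket self-trust schema $\Box P \rightarrow P$ for arbitrary $P$; if it could, it would prove every sentence, falsehoods included. That is the obstacle a tiling agent has to get around before it can certify a successor that reasons in a system at least as strong as its own.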