The typical AI representational system has no way to distinguish a true fact from a believed fact.
What I am arguing it should do is distinguish between believing a proposition and believing that some other AI believes a proposition, especially when that other AI is its future self.
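Concretely, here is one way such a representation might look, as a minimal Python sketch; the names Prop and Believes and the "self@t0"/"self@t1" agent labels are invented for illustration, not taken from the discussion above.

```python
# Minimal sketch (hypothetical names): a knowledge base that tags every
# proposition with whose belief it is, instead of storing bare "facts".
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Prop:
    """An atomic proposition, e.g. Prop('A_led_to_high_utility')."""
    name: str

@dataclass(frozen=True)
class Believes:
    """'agent believes content'; content may itself be a Believes node,
    so 'I believe my future self will believe P' is representable."""
    agent: str                        # e.g. "self@t0" (now), "self@t1" (future)
    content: Union[Prop, "Believes"]

p = Prop("A_led_to_high_utility")

my_belief = Believes("self@t0", p)              # I (now) believe P
future_belief = Believes("self@t0",             # I (now) believe that my
                         Believes("self@t1", p))  # future self will believe P

# Believing P now and believing that the future self will believe P
# are two distinct objects, not one collapsed "fact".
assert my_belief != future_belief
```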
A rational agent can’t detect the existence of option A: it would have to infer both that A leads to utility uTA and, at the same time, that it leads to uFA.
No. It would have to infer that A leads to utility uTA, and that choosing A leads its future self to believe it has led to uFA.
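As an illustration of this separation (a sketch with invented numbers and names, not an example offered in the thread): the agent can carry both quantities at once, choose on the utility it currently expects, and file the predicted future estimate only as a belief held by its future self.

```python
# Minimal sketch with invented numbers: the agent separately tracks
#   uT: the utility it currently expects an option to yield, and
#   uF: the utility its future self will believe the option yielded,
# and chooses using only uT.

options = {
    # name: (uT, uF)
    "A": (1.0, 100.0),   # A corrupts future evaluation: small real payoff,
                         # but the future self will believe it was huge
    "B": (10.0, 10.0),   # B is honest: prediction and future belief agree
}

def choose(options):
    """Pick the option with the highest currently-expected utility uT.

    The predicted future estimate uF is kept as a statement about the
    future self's beliefs; it is not adopted as the agent's own estimate."""
    return max(options, key=lambda name: options[name][0])

assert choose(options) == "B"   # B wins despite A's inflated future belief
```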
It’s very important to be able to specify who believes a proposition. But I don’t see how the AI can compute that it is going to believe a proposition, without believing that proposition. (We’re not talking about propositions that the AI doesn’t currently believe because the preconditions aren’t yet satisfied; we’re talking about an AI that is able to predict that it’s going to be fooled into believing something false.)
Please give an example in which an AI can infer both that A leads to utility uTA and that it will come to believe A has led to uFA, where the example does not involve the AI detecting errors in its own reasoning and leaving them uncorrected.