This agent is not a very effective optimization process. It would rather falsely believe that it has achieved its goals than actually achieve its goals.
If it’s an AI, and it has the predicate `goal(foo(bar))`, and the semantics of its knowledge representation are that the presence of `foo(bar)` in its knowledge base means “I believe foo(bar)” (which is the usual way of doing it), then anything that writes `foo(bar)` into its knowledge base achieves its goals.
The typical AI representational system has no way to distinguish a true fact from a believed fact. There’s no reason to make such a distinction; it would be misleading.
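As a concrete sketch of those semantics (purely illustrative; the class and atom names here are made up, not taken from any particular system): a store that treats `goal(p)` as satisfied exactly when the atom `p` is present has no way to record how the atom got there, so a sound inference, a sensor glitch, and deliberate self-modification all satisfy the goal equally well from the inside.

```python
# Minimal sketch: a knowledge base whose only notion of truth is
# "this atom is present in the store". All names are illustrative.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()   # believed ground atoms, e.g. "foo(bar)"
        self.goals = set()   # goal atoms, e.g. "foo(bar)"

    def assert_fact(self, atom: str) -> None:
        """Write an atom into the store; nothing records why it got there."""
        self.facts.add(atom)

    def add_goal(self, atom: str) -> None:
        self.goals.add(atom)

    def goal_satisfied(self, atom: str) -> bool:
        # goal(p) counts as achieved iff p is in the store.
        return atom in self.facts


kb = KnowledgeBase()
kb.add_goal("foo(bar)")

# Any process that writes the atom satisfies the goal as far as the
# representation can tell, whether or not foo(bar) holds in the world.
kb.assert_fact("foo(bar)")
assert kb.goal_satisfied("foo(bar)")
```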
You’re going astray when you say:

> Suppose the agent has the opportunity (option A) to arrange to falsely believe the universe is in a state that is worth utility uFA but this action really leads to a different state worth utility uTA,
A rational agent can’t detect the existence of option A. It would have to both infer that A leads to utility uTA, and at the same time infer that it leads to uFA.
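Continuing the sketch above (again purely illustrative): the difficulty is that a flat belief store has a single slot for “the outcome A leads to”, so the two inferences that detection would require cannot coexist in it.

```python
# Illustrative only: in a flat store, "A leads to uTA" and "A leads to uFA"
# compete for the same entry rather than coexisting.

beliefs = {}

# The agent's best inference about what option A actually does:
beliefs["utility_of(A)"] = "uTA"

# "Detecting option A" would also require recording, in the same flat
# vocabulary, that A leads to uFA; one conclusion simply overwrites the other.
beliefs["utility_of(A)"] = "uFA"

print(beliefs["utility_of(A)"])  # only one of the two conclusions survives
```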
> If it’s an AI, and it has the predicate `goal(foo(bar))`, and the semantics of its knowledge representation are that the presence of `foo(bar)` in its knowledge base means “I believe foo(bar)” (which is the usual way of doing it), then anything that writes `foo(bar)` into its knowledge base achieves its goals.
Nope. One shouldn’t conclude from Theorem(I’ll answer “42”) that the answer should be “42”. There is a difference between believing you believe something, and believing it. Believing something is enough to believe you believe it, but not conversely. Only from outside the system can you make that step, looking at the system and pointing out that if it believes something, and it really did do everything correctly, then it must be true.
I am speaking of simple, straightforward, representational semantics of a logic, and the answer I gave is correct. You are talking about humans, and making a sophisticated philosophical argument, and trying to map it onto logic by analogy. Which is more reliable?
I don’t mean that your comments are wrong; but you’re talking about people, and I’m talking about computer programs. What I said about computer programs is correct about computer programs.
As was indicated by the link, I’m talking about Loeb’s theorem; the informal discussion about what people (or formal agents) should believe is merely one application/illustration of that idea.
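For reference, the standard provability-logic rendering of Löb’s theorem, with the box read as “the system proves/believes”; this is background only, not a claim about what either formalization discussed above must look like.

```latex
% Löb's theorem, external form: if the system proves (Box P -> P), it proves P.
\vdash (\Box P \to P) \;\Longrightarrow\; \vdash P

% Internalized form (the Löb axiom):
\Box(\Box P \to P) \to \Box P

% The asymmetry in question: belief yields belief-in-belief,
\Box P \to \Box\Box P
% but the converse, \Box\Box P \to \Box P, is not a theorem in general; only
% from outside the system can one pass from "it believes P and its reasoning
% is sound" to P.
```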
> The typical AI representational system has no way to distinguish a true fact from a believed fact.
What I am arguing it should do is distinguish between believing a proposition and believing that some other AI believes a proposition, especially in the case where the other AI is its future self.
> A rational agent can’t detect the existence of option A. It would have to both infer that A leads to utility uTA, and at the same time infer that it leads to uFA.
No. It would have to infer that A leads to utility uTA and that it leads to the AI in the future believing it has led to uFA.
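One way to make that concrete (a minimal sketch; the `Believes` wrapper and the `self@t+1` label are made-up illustrations, not a proposed architecture): represent “agent x believes p” as its own proposition, distinct from p. Then “A leads to a state worth uTA” and “after taking A, my future self will believe the state is worth uFA” are different entries and can be held at the same time without contradiction.

```python
# Sketch of explicit belief attribution; all names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Believes:
    agent: str        # who holds the belief, e.g. "self@t+1" for the future self
    proposition: str  # the proposition as believed, kept distinct from plain facts


kb = set()

# What the agent currently infers option A actually leads to:
kb.add("leads_to(A, state_worth(uTA))")

# What the agent predicts its future self will believe after taking A:
kb.add(Believes(agent="self@t+1",
                proposition="leads_to(A, state_worth(uFA))"))

# The two entries are distinct, so holding both is not a contradiction,
# and the agent does not itself believe that A leads to uFA:
assert "leads_to(A, state_worth(uFA))" not in kb
assert Believes("self@t+1", "leads_to(A, state_worth(uFA))") in kb
```

On this representation, an expected-utility choice over A would use the first entry (uTA), while the second is just one more fact about the post-A world; whether an agent can soundly derive the second kind of entry without sliding into the first is what the rest of the exchange disputes.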
> What I am arguing it should do is distinguish between believing a proposition and believing that some other AI believes a proposition, especially in the case where the other AI is its future self.
It’s very important to be able to specify who believes a proposition. But I don’t see how the AI can compute that it is going to believe a proposition, without believing that proposition. (We’re not talking about propositions that the AI doesn’t currently believe because the preconditions aren’t yet satisfied; we’re talking about an AI that is able to predict that it’s going to be fooled into believing something false.)
> A rational agent can’t detect the existence of option A. It would have to both infer that A leads to utility uTA, and at the same time infer that it leads to uFA.

> No. It would have to infer that A leads to utility uTA and that it leads to the AI in the future believing it has led to uFA.
Please give an example in which an AI can both infer that A leads to utility uTA, and that the AI will believe it has led to uFA, that does not involve the AI detecting errors in its own reasoning and not correcting them.