If it’s an AI, and it has the predicate ‘goal(foo(bar))’, and the semantics of its knowledge representation are that the presence of ‘foo(bar)’ in its knowledge base means “I believe foo(bar)” (which is the usual way of doing it), then anything that writes ‘foo(bar)’ into its knowledge base achieves its goals.
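A minimal sketch of that failure mode (the `Agent` class, its `kb` set, and `goal_achieved` are hypothetical names for illustration, not any particular architecture): the goal test consults only the knowledge base, so any process that writes the fact into the KB counts as success, whatever the world looks like.

```python
# Hypothetical illustration: an agent whose "goal achieved" test consults only
# its own knowledge base. Presence of a fact in the KB *is* belief in that fact.

class Agent:
    def __init__(self):
        self.kb = set()          # knowledge base: a set of ground facts
        self.goal = "foo(bar)"   # corresponds to the predicate goal(foo(bar))

    def believes(self, fact):
        # Representational semantics: 'fact' is believed iff it is in the KB.
        return fact in self.kb

    def goal_achieved(self):
        # Success is judged purely by the agent's own beliefs.
        return self.believes(self.goal)


agent = Agent()
print(agent.goal_achieved())    # False: 'foo(bar)' is not yet in the KB

# Any process that writes 'foo(bar)' into the KB -- a sensor report, a buggy
# module, or the agent editing its own beliefs -- now counts as achieving the
# goal, whether or not foo(bar) actually holds in the world.
agent.kb.add("foo(bar)")
print(agent.goal_achieved())    # True
```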
Nope. One shouldn’t conclude from Theorem(I’ll answer “42”) that the answer should be “42”. There is a difference between believing you believe something, and believing it. Believing something is enough to believe you believe it, but not conversely. Only from outside the system can you make that step, looking at the system and pointing out that if it believes something, and it really did do everything correctly, then it must be true.
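To put that distinction in standard provability-logic notation (a sketch only; the box reads “the system believes/proves P”, and the display assumes amsmath):

```latex
\begin{align*}
  &\text{Necessitation (believing is enough to believe you believe):}
    && \vdash P \;\Longrightarrow\; \vdash \Box P \\
  &\text{The converse is not available inside the system, in general:}
    && \vdash \Box P \;\not\Longrightarrow\; \vdash P \\
  &\text{Only from outside, assuming the system is sound, can one pass}
    && \text{from } \vdash \Box P \text{ to } P.
\end{align*}
```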
I am speaking of the simple, straightforward representational semantics of a logic, and the answer I gave is correct. You are talking about humans, and making a sophisticated philosophical argument, and trying to map it onto logic by analogy. Which is more reliable?
I don’t mean that your comments are wrong; but you’re talking about people, and I’m talking about computer programs. What I said about computer programs is correct about computer programs.
As was indicated by the link, I’m talking about Loeb’s theorem; the informal discussion about what people (or formal agents) should believe is merely one application/illustration of that idea.
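For reference, the theorem itself in the same notation (a standard statement, not specific to this thread; the box again means “provable in the system”, amsmath assumed):

```latex
\[
  \text{Loeb's theorem:}\quad
  \text{if } \vdash \Box P \to P \text{ then } \vdash P,
  \qquad\text{internalized as}\qquad
  \vdash \Box(\Box P \to P) \to \Box P .
\]
```

So such a system proves the reflection schema (box P implies P) only for those P it already proves, which is why the step from “I believe P” to “P” is only available from outside.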