And the assertion here is that with strategy #2 I could also predict that if I asked Steve why he did that, he would say “because I saw a red blinking light this morning, which was a message from God that I haven’t been doing enough charity.” But my underlying model would nevertheless not include anything that corresponds to Steve’s belief that red blinking lights are messages from God, only an algorithm that happens to make those predictions in some other way.
Yes?
Yes, that’s possible. And it’s also possible to get a lot done with strategy #2 without being able to make that prediction.
I agree that if two systems have the same inputs and outputs, their internals don’t matter much here.
So… when we posit in this discussion a system that lacks a theory of mind in a sense that matters, are we positing a system that cannot make predictions like this one? I assume so, given what you just said, but I want to confirm.
Yes, I’d say so. It isn’t helpful here to say that a system lacks a theory of mind if it has a mechanism that allows it to make predictions about reported beliefs, intentions, etc.
Cool! This was precisely my concern. It sounded an awful lot like y’all were talking about a system that could make such predictions but somehow lacked a theory of mind. Thanks for clarifying.
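
A minimal, purely illustrative Python sketch of the “same inputs and outputs” point above: both predictors answer the same question about Steve identically, but only one of them contains anything that explicitly corresponds to Steve’s belief about blinking lights. All class and variable names here are hypothetical and not part of the discussion.

```python
# Illustrative sketch only: two predictors with identical input/output
# behaviour, only one of which explicitly represents Steve's belief.
# All names here are hypothetical.

class BeliefModelPredictor:
    """Predicts Steve's answer by explicitly modelling his beliefs."""

    def __init__(self):
        # Explicit representation of the belief in question.
        self.steve_beliefs = {"red blinking light": "message from God about charity"}

    def predict_explanation(self, question: str) -> str:
        if question == "Steve, why did you give to charity?":
            belief = self.steve_beliefs["red blinking light"]
            return f"Because I saw a red blinking light, which was a {belief}."
        return "I don't know."


class LookupTablePredictor:
    """Reproduces the same answers with no belief representation at all."""

    _table = {
        "Steve, why did you give to charity?":
            "Because I saw a red blinking light, which was a "
            "message from God about charity.",
    }

    def predict_explanation(self, question: str) -> str:
        # Same outputs for the same inputs, but nothing inside this object
        # corresponds to Steve's belief as such.
        return self._table.get(question, "I don't know.")


if __name__ == "__main__":
    q = "Steve, why did you give to charity?"
    a = BeliefModelPredictor().predict_explanation(q)
    b = LookupTablePredictor().predict_explanation(q)
    assert a == b  # identical behaviour, different internals
    print(a)
```

The lookup table stands in for whatever “algorithm that happens to make those predictions in some other way” strategy #2 might use; the point is only that identical behaviour by itself doesn’t settle what the internals represent.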