What is it that we would actually be disagreeing about, other than what English phrase to use to describe the system’s underlying model(s)?
We would be disagreeing about the form of the system’s underlying models.
Two different strategies to consider:
I know that Steve believes that red blinking lights before 9 AM are a message from God that he has not been doing enough charity, so I can predict that he will give more money to charity if I show him a blinking light before 9 AM.
Steve seeing a red blinking light before 9 AM has historically resulted in a 20% increase of charitable donation for that day, so I can predict that he will give more money to charity if I show him a blinking light before 9 AM.
You can model humans with or without referring to their mental states. Both kinds of models are useful, depending on circumstance.
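The two strategies can be sketched as two predictors. This is a minimal illustration, not anything from the discussion itself: all names, the donation baseline, and the 20% figure used as a multiplier are hypothetical. The point is structural: the first model explicitly represents an attributed belief, while the second contains only an observed correlation, yet both yield the same behavioral prediction.

```python
# Strategy #1: an intentional-stance model that explicitly represents
# Steve's belief about what red blinking lights before 9 AM mean.
def predict_via_beliefs(saw_red_light_before_9am: bool) -> bool:
    # The model attributes a mental state to Steve...
    steve_belief = "red blinking lights before 9 AM are a message from God"
    # ...and predicts his behavior as a consequence of that belief.
    return saw_red_light_before_9am  # True = predicts increased donation

# Strategy #2: a purely statistical model with no representation of
# beliefs, only a historically observed correlation.
BASELINE_DONATION = 100.0  # hypothetical daily baseline, in dollars

def predict_via_statistics(saw_red_light_before_9am: bool) -> float:
    # Historically, the light has been followed by a 20% increase that day.
    multiplier = 1.2 if saw_red_light_before_9am else 1.0
    return BASELINE_DONATION * multiplier
```

Both models predict more giving after the light is shown; only the first contains anything that corresponds to Steve's belief.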
And the assertion here is that with strategy #2 I could also predict that if I asked Steve why he did that, he would say “because I saw a red blinking light this morning, which was a message from God that I haven’t been doing enough charity,” but that my underlying model would nevertheless not include anything that corresponds to Steve’s belief that red blinking lights are messages from God, merely an algorithm that happens to make those predictions in other ways.
Yes?
Yes, that’s possible. It’s still possible that you could get a lot done with strategy #2 without being able to make that prediction.
I agree that if 2 systems have the same inputs and outputs, their internals don’t matter much here.
So… when we posit in this discussion a system that lacks a theory of mind in a sense that matters, are we positing a system that cannot make predictions like this one? I assume so, given what you just said, but I want to confirm.
Yes, I’d say so. It isn’t helpful here to say that a system lacks a theory of mind if it has a mechanism that allows it to make predictions about reported beliefs, intentions, etc.
Cool! This was precisely my concern. It sounded an awful lot like y’all were talking about a system that could make such predictions but somehow lacked a theory of mind. Thanks for clarifying.