I think we can more easily and generally justify the use of the intentional stance. Intentionality requires only the existence of some process (a subject) that can be said to regard things (objects). We can get this in any system that accepts input and interprets that input to generate a signal that distinguishes between object and not object (or for continuous “objects”, more or less object).
For example, almost any sensor in a circuit makes the system intentional. Wire together a thermometer and a light that turns on when the temperature is over 0 degrees, off when below, and we have a system that is intentional about freezing temperatures.
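A minimal sketch of that circuit in code, just to make the "input → signal that distinguishes object from not-object" structure concrete (the function name and threshold convention here are mine, chosen to match the description above):

```python
def light_on(temperature_c: float) -> bool:
    """The 'light' signal: on when the reading is above 0 degrees, off at or below.

    This is the whole intentional system in the example: it takes an input
    and emits a signal that distinguishes freezing from not-freezing.
    """
    return temperature_c > 0.0

# A few sample readings from the "thermometer"
for reading in [3.5, 0.0, -2.1]:
    state = "on" if light_on(reading) else "off"
    print(f"{reading:+.1f} degrees -> light {state}")
```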
Such a cybernetic argument, to me at least, is more appealing because it gets down to base reality immediately and avoids the need to sort out things people often want to lump in with intentionality, like consciousness.
I think this gets deflationary if you think about it, though. Yes, you can apply the intentional stance to the thermostat, but (almost) nobody's going to get confused and start thinking the thermostat has fancier abilities like long-term planning just because you say "it wants to keep the room at the right temperature." Even though you're using a single word, "want," for both humans and thermostats, you don't get them mixed up, because your actual representation of what's going on still distinguishes them based on context. There's not just one intentional stance; there's a stance for thermostats and another for humans, and they make different predictions about behavior, even if they're similar enough that you can call them both intentional stances.
If you buy this, then applying an intentional stance to LLMs buys you a lot less predictive power than it might seem, because even intentional stances come with a ton of little variables in their mental models, which we will naturally fill in as we learn a stance that works well for LLMs.