Or one might imagine a model that has psychological parts, but distributes the function fulfilled by “wants” in an agent model among several different pieces, which might conflict or reinforce each other depending on context.
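To make that a bit more concrete, here is a toy sketch (my own illustration, not anything from the conversation, with made-up drive names and weights) of what "several pieces standing in for wants" might look like: a handful of hypothetical drives that each score candidate actions, sometimes agreeing and sometimes pulling against each other depending on context.

```python
from dataclasses import dataclass
from typing import Callable

# Toy sketch: instead of a single "wants" variable, the model has several
# drives that each score candidate actions; the scores can reinforce or
# conflict depending on the situation.

@dataclass
class Drive:
    name: str
    weight: float
    score_fn: Callable[[str, str], float]  # (action, context) -> score

def choose_action(drives, actions, context):
    """Pick the action with the highest weighted sum of drive scores."""
    def total(action):
        return sum(d.weight * d.score_fn(action, context) for d in drives)
    return max(actions, key=total)

# Hypothetical drives: hunger and social approval push the same way in one
# context and opposite ways in another.
hunger = Drive("hunger", 1.0, lambda a, c: 1.0 if a == "eat" else 0.0)
approval = Drive("approval", 0.8,
                 lambda a, c: -1.0 if (a == "eat" and c == "mid-meeting") else 0.5)

print(choose_action([hunger, approval], ["eat", "wait"], context="alone"))        # eat
print(choose_action([hunger, approval], ["eat", "wait"], context="mid-meeting"))  # wait
```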
Hmm, so with enough compute (like, using parts of your brain to model the different psychological parts), perhaps you could do something like this for yourself. But you couldn't predict the results of the behavior of people smarter than you. For example, with such a mechanical model you would have a hard time predicting that Kasparov would win a chess game against a random chess player without being as good at chess as Kasparov yourself; the intentional stance does let you predict the win, though even then you can't predict his individual moves. (You could obviously predict the win using statistics, but that prediction wouldn't be based on the mechanical model itself.)
That is, it seems like the intentional stance often involves using much less compute than the person being modeled in order to predict that things will go in the direction of the person’s wants (limited by the person’s capabilities), without predicting each of the person’s actions.
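As a rough illustration of that compute asymmetry (again my own toy sketch, with made-up skill numbers standing in for the players' actual policies), compare a "mechanical" predictor that has to work through the game move by move against an "intentional stance" predictor that just reasons from who is more capable:

```python
import random

def mechanical_prediction(strong_skill, weak_skill, n_moves=40):
    """Predict the outcome by simulating the game move by move.

    Doing this for real would require a move-level model of each player,
    which costs roughly as much compute as the players themselves use.
    Here each move is just a noisy contest weighted by skill, standing in
    for that full simulation.
    """
    score = 0
    for _ in range(n_moves):
        # Whoever "wins" this move nudges the running score.
        if random.random() < strong_skill / (strong_skill + weak_skill):
            score += 1
        else:
            score -= 1
    return "strong player wins" if score > 0 else "weak player wins"

def intentional_stance_prediction(strong_skill, weak_skill):
    """Predict only the direction of the outcome.

    No move-by-move simulation at all: the player pursuing the goal more
    capably is predicted to get what they want. This is O(1) compute and
    says nothing about individual moves.
    """
    return "strong player wins" if strong_skill > weak_skill else "weak player wins"

if __name__ == "__main__":
    # Kasparov-vs-random-player analogue: the skill gap is large.
    print(mechanical_prediction(strong_skill=0.95, weak_skill=0.05))
    print(intentional_stance_prediction(strong_skill=0.95, weak_skill=0.05))
```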