Does it seem clear to you that if you model a human as a somewhat complicated thermostat (perhaps making decisions according to some kind of flowchart) then you aren’t going to predict that a human would write a post about humans being somewhat complicated thermostats?
Is my flowchart model complicated enough to emulate an RNN? If so, then I’m not sure.
Or one might imagine a model that has psychological parts, but distributes the function fulfilled by “wants” in an agent model among several different pieces, which might conflict or reinforce each other depending on context. This model could reproduce human verbal behavior about “wanting” with no actual component in the model that formalizes wanting.
If this kind of model works well, it’s a counterexample (less compute-intensive than a microphysical model) to the idea I think you’re gesturing towards, which is that the data really privileges models in which there’s an agent-like formalization of wanting.
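To make the “distributed parts” picture concrete, here is a minimal toy sketch (all component names and numbers are made up for illustration, not a claim about how such a model would actually be built): several sub-parts vote on behavior and on verbal reports about “wanting”, and there is no single component in the model that formalizes wanting.

```python
# Toy sketch (purely illustrative; all component names are hypothetical) of a
# model whose behavior comes from several interacting "parts", none of which
# is a dedicated component that formalizes "wanting".

from dataclasses import dataclass


@dataclass
class Context:
    hunger: float          # physiological signal, 0..1
    social_cue: float      # e.g. someone just offered food, 0..1
    habit_strength: float  # how ingrained "eat at noon" is, 0..1


def appetite_part(ctx: Context) -> float:
    # Votes for eating based on a body signal.
    return ctx.hunger


def norm_part(ctx: Context) -> float:
    # Votes based on social context; can conflict with appetite.
    return 0.5 * ctx.social_cue


def habit_part(ctx: Context) -> float:
    # Votes based on routine, independent of hunger.
    return ctx.habit_strength


def act(ctx: Context) -> str:
    # The parts reinforce or conflict depending on context; behavior is
    # whatever the combined vote supports. No "wants(eat)" object exists.
    vote = appetite_part(ctx) + norm_part(ctx) + habit_part(ctx)
    return "eats" if vote > 1.0 else "doesn't eat"


def verbal_report(ctx: Context) -> str:
    # Verbal behavior about "wanting" is generated from the same parts,
    # not read off a dedicated wanting-component.
    vote = appetite_part(ctx) + norm_part(ctx) + habit_part(ctx)
    return "I want to eat" if vote > 1.0 else "I don't really want to eat"


ctx = Context(hunger=0.8, social_cue=0.6, habit_strength=0.2)
print(act(ctx))            # -> eats
print(verbal_report(ctx))  # -> I want to eat
```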
Hmm, so with enough compute (like, using parts of your brain to model the different psychological parts), perhaps you could do something like this for yourself. But you couldn’t predict the results of the behavior of people smarter than you. For example, you would have a hard time predicting that Kasparov would win a chess game against a random chess player without yourself being as good at chess as Kasparov, whereas the intentional stance lets you predict that he wins even though you still can’t predict his individual moves. (You could obviously predict this using statistics, but that wouldn’t be based on just the mechanical model itself.)
That is, it seems like the intentional stance often involves using much less compute than the person being modeled in order to predict that things will go in the direction of the person’s wants (limited by the person’s capabilities), without predicting each of the person’s actions.
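As a rough illustration of that compute asymmetry, here is a toy sketch with made-up numbers (not a real model of either stance): the intentional stance only needs a goal plus a coarse capability estimate to predict the direction of the outcome, while a mechanical simulation has to pay for every action it reproduces.

```python
# Toy sketch (hypothetical numbers) contrasting the cost of the two stances.
# The intentional stance predicts only the *direction* of the outcome from a
# goal and a coarse capability comparison; a mechanical simulation would have
# to reproduce each action.

def intentional_stance_prediction(goal: str, skill_a: float, skill_b: float) -> str:
    # O(1): predict that things move toward the goal of the more capable agent,
    # without predicting any individual action.
    winner = "Kasparov" if skill_a > skill_b else "random player"
    return f"{winner} achieves their goal: {goal}"


def mechanical_simulation_cost(num_moves: int, cost_per_move: int) -> int:
    # Stand-in for simulating the agent action by action; the compute grows
    # with the number of actions (and, for chess, with search depth).
    return num_moves * cost_per_move


print(intentional_stance_prediction("win the game", skill_a=2850, skill_b=1200))
print("simulation cost (arbitrary units):",
      mechanical_simulation_cost(num_moves=60, cost_per_move=10_000))
```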