This reminds me of the debate in philosophy of mind between the “simulation theory” and the “theory theory” of folk psychology. The former (which I believe is more accepted currently — professional philosophers of mind correct me if I’m wrong) holds that people do not have mental models of other people, not even unconscious ones, and that we make folk-psychological predictions by “simulating” other people “in hardware”, as it were.
It seems possible that people model animals similarly, by simulation. The computer-as-pet hypothesis suggests the same for computers. If this is the case, then it could be true that (some) humans literally have no mental models, conscious or unconscious, of computers.
If this were true, then what Kaj_Sotala said —
Literally not having a model about something would require knowing literally nothing about it
would be false.
Of course we could still think of a person as having an implicit mental model of a computer, even if they model it by simulation… but that is stretching the meaning, I think, and this is not the kind of model I referred to when I said most people have no mental models.
Simulations are models. They allow us to make predictions about how something behaves.
The “simulation” in this case is a black box. When you use your own mental hardware to simulate another person (assuming the simulation theory is correct), you do so unconsciously. You have no idea how the simulation works; you only have access to its output. You have no ability to consciously fiddle with the simulation’s settings or its structure.
A black box that takes input and produces predictive output while being totally impenetrable is not a “model” in any useful sense of the word.
The concept of mental models is very popular in usability design.
It’s quite useful to distinguish a website’s actual features from the features of the model of the website that the user has in his head.
If you want to predict what the user does, then it makes sense to speak of his model of the world, whether or not you can change that model. You have to work with the model that’s there.
Whether or not the user is conscious of the features of his model doesn’t matter much.
and that we make folk-psychological predictions by “simulating” other people “in hardware”, as it were.
How does this theory treat the observation that we are better at dealing with the kinds of people we have experience of? (E.g. I get along better with people of certain personality types because I’ve learned how they think.) Doesn’t that unavoidably imply the existence of some kind of model?