It’s likely that he will borrow a mental model from another field. He might try to treat the computer like a pet.
I think...this might actually be a possible mechanism behind really dumb computer users. I’ll have to keep it in mind when dealing with them in future.
Comparing to Achmiz above:
Most people don’t have mental models.
Both of these feel intuitively right to me, and lead me to suspect the following: A sufficiently bad model is indistinguishable from no model at all. It reminds me of the post on chaotic inversions.
Mental models are the basis of human thinking. Take the original cargo cultists: they had a really bad model of why cargo was dropped on their island, yet they still used that model to do really dumb things.
A while ago I was reading a book about mental models. It investigated how people deal with the question: “You throw a steel ball against the floor and it bounces back. Where does the energy that moves the ball into the air come from?”
The “correct answer” is that the ball contracts when it hits the floor and then expands, and that energy brings the ball back into the air. In the book they called it the phenomenological primitive of springiness.
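To make the energy bookkeeping concrete, here is a minimal sketch of that answer, treating the compressed ball as an idealized spring (my own illustration, not the book’s notation):

    % Idealized elastic bounce: kinetic energy -> stored elastic energy -> kinetic energy
    \tfrac{1}{2} m v^2
      \;\longrightarrow\; U_{\text{elastic}} = \tfrac{1}{2} k x_{\max}^2
      \;\longrightarrow\; \tfrac{1}{2} m v'^2, \qquad v' \le v

Here m is the ball’s mass, v its impact speed, k an effective stiffness of the contact, x_max the maximum compression, and v' the rebound speed; the inequality reflects the energy lost as heat and sound during the bounce.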
A lot of students had the idea that the ball somehow transfers energy into the ground and then the ground pushes the ball back. The idea that a steel ball contracts is really hard for them to accept, because in their mental model of the world steel balls don’t contract.
If you simply tell such a person the correct solution, they won’t remember it. Teaching a new phenomenological primitive is really hard and takes a lot of repetition.
As a programmer, I find the phenomenological primitive of recursion obvious. I had the experience of trying to teach it to a struggling student and had to discover how hard it is to teach from scratch. People always want to fit new information into their old models of the world.
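For concreteness, the kind of minimal example I have in mind is something like the following sketch (my own illustration; not necessarily what was used with that student):

    # A classic first encounter with recursion: factorial.
    # The primitive the learner has to accept is that a function may
    # call itself on a smaller input and trust the returned result.
    def factorial(n: int) -> int:
        if n <= 1:                      # base case: stop recursing
            return 1
        return n * factorial(n - 1)     # recursive case: shrink the problem

    print(factorial(5))  # prints 120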
People black out information that doesn’t fit into their models of the world. This can lead to some interesting social engineering results.
A lot of magic tricks are based on the audience’s faulty mental models.
Which book was that? Would you recommend it in general?
This reminds me of the debate in philosophy of mind between the “simulation theory” and the “theory theory” of folk psychology. The former (which I believe is more accepted currently — professional philosophers of mind correct me if I’m wrong) holds that people do not have mental models of other people, not even unconscious ones, and that we make folk-psychological predictions by “simulating” other people “in hardware”, as it were.
It seems possible that people model animals similarly, by simulation. The computer-as-pet hypothesis suggests the same for computers. If this is the case, then it could be true that (some) humans literally have no mental models, conscious or unconscious, of computers.
If this were true, then what Kaj_Sotala said —
Literally not having a model about something would require knowing literally nothing about it
would be false.
Of course we could still think of a person as having an implicit mental model of a computer, even if they model it by simulation… but that is stretching the meaning, I think, and this is not the kind of model I referred to when I said most people have no mental models.
Simulations are models. They allow us to make predictions about how something behaves.
The “simulation” in this case is a black box. When you use your own mental hardware to simulate another person (assuming the simulation theory is correct), you do so unconsciously. You have no idea how the simulation works; you only have access to its output. You have no ability to consciously fiddle with the simulation’s settings or its structure.
A black box that takes input and produces predictive output while being totally impenetrable is not a “model” in any useful sense of the word.
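To make the distinction concrete, here is a rough sketch (purely my own illustration, with made-up toy predictions): a predictor you can only call, versus a model whose parts you can inspect and adjust.

    # "Simulation" as a black box: it yields predictions, but you cannot
    # see or consciously adjust how they are produced.
    def black_box_predictor(situation: str) -> str:
        # these internals stand in for opaque, unconscious machinery
        return "annoyed" if "interrupted" in situation else "calm"

    # An explicit model: the assumptions are named parts that can be
    # examined, questioned, and changed one at a time.
    explicit_model = {"dislikes_interruptions": True, "baseline_mood": "calm"}

    def predict_from_model(model: dict, situation: str) -> str:
        if "interrupted" in situation and model["dislikes_interruptions"]:
            return "annoyed"
        return model["baseline_mood"]

    # Both give the same output for the same input...
    print(black_box_predictor("they were interrupted"))                 # annoyed
    print(predict_from_model(explicit_model, "they were interrupted"))  # annoyed
    # ...but only the second exposes structure you could consciously fiddle with.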
The concept of mental models is very popular in usability design.
It’s quite useful to distinguish a website’s features from the features of the model of the website that the user has in his head.
If you want to predict what the user does, then it makes sense to speak of his model of the world, whether or not you can change that model. You have to work with the model that’s there. Whether or not the user is conscious of the features of his model doesn’t matter much.
and that we make folk-psychological predictions by “simulating” other people “in hardware”, as it were.
How does this theory treat the observation that we are better at dealing with the kinds of people we have experience of? (E.g. I get along better with people of certain personality types because I’ve learned how they think.) Doesn’t that unavoidably imply the existence of some kinds of models?