One approach: Think of two terms or ideas that are similar but want distinguishing. “How is a foo different from a bar?” For instance, if you’re looking to learn about data structures in Python, you might ask, “How is a dictionary different from a list?”
You can also learn whether your sense that they are similar is accurate: “How is a list different from a for loop?” might get some insightful discussion … if you’re lucky.
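To make the dictionary/list contrast concrete, here is a minimal Python sketch of my own (not part of the original comment): a list is an ordered sequence you index by position, while a dictionary maps keys to values.

```python
# Illustrative example only.
# A list is an ordered sequence, indexed by integer position.
fruits = ["apple", "banana", "cherry"]
print(fruits[0])             # "apple" -- whatever happens to be first

# A dict maps hashable keys to values, indexed by key.
prices = {"apple": 1.20, "banana": 0.50}
print(prices["banana"])      # 0.5 -- looked up by name, not by position
```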
Of course, if you know sufficiently little about the subject matter, you might instead end up asking a question like
“How is a browser different from a hard drive?”
which, instead, discourages the expert from speaking with you (and makes them think that you’re an idiot).
I think that would get me to talk with them out of sheer curiosity. (“Just what kind of a mental model could this person have in order to ask such a question?”)
Sadly, reacting in such a way generally amounts to grossly overestimating the questioner’s intelligence and informedness. Most people don’t have mental models. The contents of their minds are just a jumble; a question like the one I quoted is roughly equivalent to
“I have absolutely no idea what’s going on. Here’s something that sounds like a question, but understand that I probably won’t even remotely comprehend any answer you give me. If you want me to understand anything about this, at all, you’ll have to go way back to the beginning and take it real slow.”
(Source: years of working in computer retail and tech support.)
Even “it’s a mysterious black box that might work right if I keep smashing the buttons at random” is a model, just a poor and confused one. Literally not having a model about something would require knowing literally nothing about it, and today everyone knows at least a little about computers, even if that knowledge all came from movies.
This might sound like I’m just being pedantic, but it’s also that I find “most people are stupid and have literally no mental models of computers” to be a harmful idea in many ways—it equates a “model” with a clear explicit model while entirely ignoring vague implicit models (that most of human thought probably consists of), it implies that anyone who doesn’t have a store of specialized knowledge is stupid, and it ignores the value of experts familiarizing themselves with various folk models (e.g. folk models of security) that people hold about the domain.
Even someone who has no knowledge about computers will use a mental model if he has to interact with one. It’s likely that he will borrow a mental model from another field. He might try to treat the computer like a pet.
If people don’t have any mental model in which to fit information they will ignore the information.
I think...this might actually be a possible mechanism behind really dumb computer users. I’ll have to keep it in mind when dealing with them in future.
Comparing to Achmiz above:
Most people don’t have mental models.
Both of these feel intuitively right to me, and lead me to suspect the following: A sufficiently bad model is indistinguishable from no model at all. It reminds me of the post on chaotic inversions.
Mental models are the basis of human thinking. Take the original cargo cultists: they had a really bad model of why cargo was dropped on their island, but they still used that model, even if it led them to do really dumb things.
A while ago I was reading a book about mental models. It investigates how people deal with the question: “You throw a steel ball against the floor and it bounces back. Where does the energy that moves the ball into the air come from?”
The “correct answer” is that the ball contracts when it hits the floor and then expands again, and that stored energy brings the ball back into the air. In the book they called this the phenomenological primitive of springiness.
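A rough way to write the springiness picture down (my own sketch, not from the book), treating the ball as a stiff spring that compresses by $x_{\max}$ on impact:

$$\underbrace{\tfrac{1}{2}mv^2}_{\text{kinetic, on impact}} \;\longrightarrow\; \underbrace{\tfrac{1}{2}kx_{\max}^2}_{\text{elastic, ball compressed}} \;\longrightarrow\; \underbrace{\tfrac{1}{2}mv'^2}_{\text{kinetic, rebound}}, \qquad v' \le v,$$

with the small difference lost to sound and heat, which is why the ball never quite reaches its original height.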
A lot of students had the idea that somehow the ball transfers energy into the ground and then the ground pushes the ball back. The idea that a steel ball contracts is really hard for them to accept because in their mental model of the world steel balls don’t contract.
If you simply tell such a person the correct solution, they won’t remember it. Teaching a new phenomenological primitive is really hard and takes a lot of repetition.
As a programmer, the phenomenological primitive of recursion is obvious to me. I had the experience of trying to teach it to a struggling student and had to discover how hard it is to teach it from scratch. People always want to fit new information into their old models of the world.
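For readers who don’t yet have that primitive, here is a minimal Python illustration of the kind of example such teaching usually starts from (my own sketch, not the exercise mentioned above):

```python
# Illustrative example only.
def factorial(n: int) -> int:
    """Compute n! by recursion: the function calls itself on a smaller problem."""
    if n <= 1:                       # base case: stops the self-calls
        return 1
    return n * factorial(n - 1)      # recursive case: same problem, one step smaller

print(factorial(5))  # 120
```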
People black out information that doesn’t fit into their models of the world. This can lead to some interesting social engineering results.
A lot of magic tricks are based on faulty mental models by the audience.
Which book was that? Would you recommend it in general?
This reminds me of the debate in philosophy of mind between the “simulation theory” and the “theory theory” of folk psychology. The former (which I believe is more accepted currently — professional philosophers of mind correct me if I’m wrong) holds that people do not have mental models of other people, not even unconscious ones, and that we make folk-psychological predictions by “simulating” other people “in hardware”, as it were.
It seems possible that people model animals similarly, by simulation. The computer-as-pet hypothesis suggests the same for computers. If this is the case, then it could be true that (some) humans literally have no mental models, conscious or unconscious, of computers.
If this were true, then what Kaj_Sotala said —
Literally not having a model about something would require knowing literally nothing about it
would be false.
Of course we could still think of a person as having an implicit mental model of a computer, even if they model it by simulation… but that is stretching the meaning, I think, and this is not the kind of model I referred to when I said most people have no mental models.
Simulations are models. They allow us to make predictions about how something behaves.
The “simulation” in this case is a black box. When you use your own mental hardware to simulate another person (assuming the simulation theory is correct), you do so unconsciously. You have no idea how the simulation works; you only have access to its output. You have no ability to consciously fiddle with the simulation’s settings or its structure.
A black box that takes input and produces predictive output while being totally impenetrable is not a “model” in any useful sense of the word.
The concept of mental models is very popular in usability design.
It’s quite useful to distinguish a website’s features from the features of the model of the website that the user has in his head.
If you want to predict what the user does, then it makes sense to speak of his model of the world, whether or not you can change that model. You have to work with the model that’s there. Whether or not the user is conscious of the features of his model doesn’t matter much.
How does this theory treat the observation that we are better at dealing with the kinds of people we have experience with? (E.g. I get along better with people of certain personality types because I’ve learned how they think.) Doesn’t that unavoidably imply the existence of some kinds of models?
I’m pretty sure this is correct.
Thanks, that’s a good point.
Fair enough. Pedantry accepted. :) I especially agree with the importance of recognizing vague implicit “folk models”.
However:
it implies that anyone who doesn’t have a store of specialized knowledge is stupid
Most such people are. (Actually, most people are, period.)
Believe you me, most people who ask questions like the one I quote are stupid.
Most people do have mental models in the sense in which the word gets defined in the decision theory literature.