When I’m in the presence of people who know more than me and I want to learn more, I never know how to ask questions that will inspire useful, specific answers. They just don’t occur to me. How do you ask the right questions?
Lawyer’s perspective:
People want to ask me about legal issues all the time. The best way to get a useful answer is to describe your current situation, the cause of your current situation, and what you want to change. Thus:
I have severe injuries, caused by that other person hitting me with their car. I want that person’s driver’s license taken away.
Then I can say something like: Your desired remedy is not available for REASONS, but instead, you could get REMEDY. Here are the facts and analysis that would affect whether REMEDY is available.
In short, try to define the problem. fubarobfusco has some good advice about how to refine your articulation of a problem. That said, if you have reason to believe a person knows something useful, you probably already know enough to articulate your question.
The point of my formulation is to avoid assumptions that distort the analysis. Suppose someone in the situation I described above said “I was maliciously and negligently injured by that person’s driving. I want them in prison.” At that point, my response needs to untangle a lot of confusion before I can say anything useful.
I see you beat me to it. Yes, define your problem and goals.
The really bad thing about asking questions is that people will answer them. You ask some expert “How do I do X with Y?”. He’ll tell you. He’ll likely wonder what the hell you’re up to in doing such a strange thing with Y, but he’ll answer. If he knew what your problem and goals were instead, he’d ask the right questions of himself on how to solve the problem, instead of the wrong question that you gave him.
Also, in the event you get an unusually helpful expert, he might point this out. Consider this your lucky day and feel free to ask follow-up questions. Don’t be discouraged if the pointing-out is phrased along the lines of “What kind of idiot would want to do X with Y?”
That’s helpful. Do you think it works as a general strategy, for example in academic discussions? Or should the question/what I want to change be more specific?
My advice is geared towards factual questions, so I’m not sure how helpful it would be for more purely intellectual questions. The most important point I was trying to make was that you should be careful not to pre-bake too much analysis into your question.
Thus, asking “what should I do now to get a high paying job to donate lots of money to charity?” is different from “what should I do now to make the most positive impact on the world?”
Many folks around here will give very similar answers to both of those questions (I probably wouldn’t, but that’s not important to this conversation). But the first question rules out answers like “go get a CompSci PhD and help invent FAI” or “go to medical school and join Doctors without Borders.”
In short, people will answer the question you ask, or the one they think you mean to ask. That’s not necessarily the same as giving you the information they have that you would find most helpful.
Don’t ask questions. Describe your problem and goal, and ask them to tell you what would be helpful. If they know more than you, let them figure out the questions you should ask, and then tell you the answers.
I don’t think an answer has to be specific to be useful. Often just understanding how an expert in a certain area thinks about the world is valuable, even if you come away with nothing specific.
When it comes to questions:
1) What was the greatest discovery in your field in the last 5 years?
2) Is there an insight in your field that is obvious to everyone in your field but that most people in society just don’t get?
My favorite question comes from The Golden Compass:
If you were me, what question would you ask of the Consul of the Witches?
I haven’t employed it on anyone yet, though, so a better way to approach the issue in the same spirit is to describe your situation (as suggested by many others).
I find “How do I proceed to find out more about X?” gives the best results. Note: it’s important to phrase it so that they understand you are asking for an efficient algorithm for finding out about X, not for them to tell you about X!
It works even if you’re completely green and talking to a prodigy in the field (which I find to be particularly hard). Otherwise you’ll get “RTFM”/”JFGI” at best or they will avoid you entirely at worst.
One approach: Think of two terms or ideas that are similar but want distinguishing. “How is a foo different from a bar?” For instance, if you’re looking to learn about data structures in Python, you might ask, “How is a dictionary different from a list?”
You can learn if your thought that they are similar is accurate, too: “How is a list different from a for loop?” might get some insightful discussion … if you’re lucky.
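To make that concrete, here is a minimal sketch of the kind of answer the dictionary/list question tends to surface (the variable names are purely illustrative):

```python
# A list is an ordered sequence indexed by integer position;
# a dictionary maps arbitrary hashable keys to values.
colors = ["red", "green", "blue"]   # access by position
print(colors[0])                    # -> red

ages = {"alice": 34, "bob": 27}     # access by key
print(ages["alice"])                # -> 34

# The failure modes differ too:
# colors[10]    raises IndexError (no such position)
# ages["carol"] raises KeyError   (no such key)
```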
Of course, if you know sufficiently little about the subject matter, you might instead end up asking a question like
“How is a browser different from a hard drive?”
which, instead, discourages the expert from speaking with you (and makes them think that you’re an idiot).
I think that would get me to talk with them out of sheer curiosity. (“Just what kind of a mental model could this person have in order to ask such a question?”)
Sadly, reacting in such a way generally amounts to grossly overestimating the questioner’s intelligence and informedness. Most people don’t have mental models. The contents of their minds are just a jumble; a question like the one I quoted is roughly equivalent to
“I have absolutely no idea what’s going on. Here’s something that sounds like a question, but understand that I probably won’t even remotely comprehend any answer you give me. If you want me to understand anything about this, at all, you’ll have to go way back to the beginning and take it real slow.”
(Source: years of working in computer retail and tech support.)
Even “it’s a mysterious black box that might work right if I keep smashing the buttons at random” is a model, just a poor and confused one. Literally not having a model about something would require knowing literally nothing about it, and today everyone knows at least a little about computers, even if that knowledge all came from movies.
This might sound like I’m just being pedantic, but it’s also that I find “most people are stupid and have literally no mental models of computers” to be a harmful idea in many ways—it equates a “model” with a clear explicit model while entirely ignoring vague implicit models (that most of human thought probably consists of), it implies that anyone who doesn’t have a store of specialized knowledge is stupid, and it ignores the value of experts familiarizing themselves with various folk models (e.g. folk models of security) that people hold about the domain.
Even someone who has no knowledge of computers will use a mental model if he has to interact with one. It’s likely that he will borrow a mental model from another field; he might try to treat the computer like a pet.
If people don’t have any mental model in which to fit information, they will ignore the information.
I think...this might actually be a possible mechanism behind really dumb computer users. I’ll have to keep it in mind when dealing with them in future.
Comparing to Achmiz above:
Most people don’t have mental models.
Both of these feel intuitively right to me, and lead me to suspect the following: A sufficiently bad model is indistinguishable from no model at all. It reminds me of the post on chaotic inversions.
Mental models are the basis of human thinking. Take the original cargo cultists: they had a really bad model of why cargo was dropped on their island, yet that model still drove their behavior. They used it to do really dumb things, but a bad model is not the same as no model.
A while ago I was reading a book about mental models. It investigates how people deal with the question: “You throw a steel ball against the floor and it bounces back. Where does the energy that moves the ball into the air come from?”
The “correct answer” is that the ball contracts when it hits the floor and then expands, and that energy brings the ball back into the air. In the book they called this the phenomenological primitive of springiness.
A lot of students had the idea that somehow the ball transfers energy into the ground and then the ground pushes the ball back. The idea that a steel ball contracts is really hard for them to accept because in their mental model of the world steel balls don’t contract.
If you simply tell such a person the correct solution, they won’t remember it. Teaching a new phenomenological primitive is really hard and takes a lot of repetition.
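As a rough illustration of the springiness primitive, here is a toy calculation treating the ball as an ideal spring (all numbers are made up for illustration; real contact mechanics is nonlinear):

```python
# Toy model: on impact, the ball's kinetic energy is stored as elastic
# energy in its own deformation (E = 1/2 * k * x^2), then released to
# push the ball back up.
m = 0.1   # ball mass in kg (illustrative)
v = 3.0   # impact speed in m/s (illustrative)
k = 1e6   # effective stiffness of the ball in N/m (illustrative)

kinetic = 0.5 * m * v**2       # energy arriving at the floor, in joules
x = (2 * kinetic / k) ** 0.5   # compression needed to store that energy
print(f"stored energy: {kinetic:.2f} J, compression: {x * 1000:.2f} mm")
# The predicted contraction is tiny (here about a millimetre), which is
# exactly why 'steel balls don't contract' feels so plausible.
```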
As a programmer, the phenomenological primitive of recursion is obvious to me. I had the experience of trying to teach it to a struggling student, and discovered how hard it is to teach from scratch. People always want to fit new information into their old models of the world.
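For reference, here is the sort of minimal example one might walk a struggling student through (factorial is just the conventional choice):

```python
def factorial(n: int) -> int:
    # Base case: the recursion has to stop somewhere.
    if n <= 1:
        return 1
    # Recursive case: define n! in terms of a smaller instance
    # of the same problem, and trust the smaller call to be right.
    return n * factorial(n - 1)

print(factorial(5))  # -> 120
```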
People black out information that doesn’t fit into their models of the world. This can lead to some interesting social-engineering results.
A lot of magic tricks are based on the audience’s faulty mental models.
Which book was that? Would you recommend it in general?
This reminds me of the debate in philosophy of mind between the “simulation theory” and the “theory theory” of folk psychology. The former (which I believe is more accepted currently — professional philosophers of mind correct me if I’m wrong) holds that people do not have mental models of other people, not even unconscious ones, and that we make folk-psychological predictions by “simulating” other people “in hardware”, as it were.
It seems possible that people model animals similarly, by simulation. The computer-as-pet hypothesis suggests the same for computers. If this is the case, then it could be true that (some) humans literally have no mental models, conscious or unconscious, of computers.
If this were true, then what Kaj_Sotala said —
Literally not having a model about something would require knowing literally nothing about it
— would be false.
Of course we could still think of a person as having an implicit mental model of a computer, even if they model it by simulation… but that is stretching the meaning, I think, and this is not the kind of model I referred to when I said most people have no mental models.
Simulations are models. They allow us to make predictions about how something behaves.
The “simulation” in this case is a black box. When you use your own mental hardware to simulate another person (assuming the simulation theory is correct), you do so unconsciously. You have no idea how the simulation works; you only have access to its output. You have no ability to consciously fiddle with the simulation’s settings or its structure.
A black box that takes input and produces predictive output while being totally impenetrable is not a “model” in any useful sense of the word.
The concept of mental models is very popular in usability design.
It’s quite useful to distinguish a website’s features from the features of the model of the website that the user has in his head.
If you want to predict what the user does, then it makes sense to speak of his model of the world, whether or not you can change that model. You have to work with the model that’s there. Whether or not the user is conscious of the features of his model doesn’t matter much.
and that we make folk-psychological predictions by “simulating” other people “in hardware”, as it were.
How does this theory treat the observation that we are better at dealing with the kinds of people we have experience of? (E.g. I get along better with people of certain personality types because I’ve learned how they think.) Doesn’t that unavoidably imply the existence of some kind of model?
I’m pretty sure this is correct.
Thanks, that’s a good point.
Fair enough. Pedantry accepted. :) I especially agree with the importance of recognizing vague implicit “folk models”.
However:
it implies that anyone who doesn’t have a store of specialized knowledge is stupid
Most such people are. (Actually, most people are, period.)
Believe you me, most people who ask questions like the one I quoted are stupid.
Most people don’t have mental models.
Most people do have mental models in the sense in which the word gets defined in the decision-theory literature.
For the narrow subset of technical questions, How to Ask Questions the Smart Way is useful.
But if you don’t have a problem to begin with—if your aim is “learn more in field X,” it gets more complicated. Given that you don’t know what questions are worth asking, the best question might be “where would I go to learn more about X” or “what learning material would you recommend on the subject of X?” Then in the process of following and learning from their pointer, generate questions to ask at a later date.
There may be an inherent contradiction between wanting nonspecific knowledge and getting useful, specific answers.
Start by asking the wrong ones. For me, it took a while to notice when I had even a stupid question to ask (possibly some combination of mild social anxiety and generally wanting to come across as smart & well-informed had stifled this impulse), so this might take a little bit of practice.
Sometimes your interlocutor will answer your suboptimal questions, and that will give you time to think of what you really want to know, and possibly a few extra hints for figuring it out. But at least as often your interlocutor will take your interest as a cue to just go ahead and tell you things about the subject at hand that you didn’t ask about.
What do you want to learn more about? If there isn’t an obvious answer, give yourself some time to see if an answer surfaces.
The good news is that this is the thread for vague questions which might not pan out.
Ask the smartest questions you can think of at the time and keep updating, but don’t waste time on that. After you have done a bit of this, ask them what you are missing, what questions you should be asking them.