This seems to be something of a fake explanation. The statement is: Sometimes the brain provides false information that actually answers a different question from the one asked.
That would be true no matter what answer was being given (unless it was completely random), because so long as the answer thrown up correlates with something, you can say: “Aha! The brain has substituted something for the question you thought was being asked!”
And since this explanation could be given for any answer the brain throws up, it doesn’t actually give us any new information about the cognitive algorithm being used.
That would be true no matter what answer was being given (unless it was completely random), because so long as the answer thrown up correlates with something, you can say: “Aha! The brain has substituted something for the question you thought was being asked!”
Well… in principle, yes. But that seems a little uncharitable—do you think I actually committed that mistake in any of the examples I gave? I.e., do you actually think that it seems like a wrong explanation for any of those cases? If yes, you should just point them out. If not, well, obviously any explanation can be rationalized and overextended beyond its domain. But it being possible to misapply something doesn’t make the thing itself bad.
You are right in the sense that the substitution principle provides, by itself, rather little information. It only says that an exact analysis is being replaced with an easier, heuristic analysis, but doesn’t say anything about what that heuristic is. Figuring that out requires separate study.
But the thing that distinguishes a fake explanation from a real explanation is that a fake explanation isn’t actually useful for anything, for it doesn’t constrain your anticipations. That isn’t the case here. The substitution principle provides us with the (obvious in retrospect) anticipation that “if your brain instantly returns an answer to a difficult problem, the answer is most likely a simplified one”, a rule of thumb that is useful for noticing when you might be mistaken. (This anticipation could be disproven if it turned out that all such answers were actually answers to the questions being asked.) While learning about specific biases/heuristics might be more effective in teaching you to notice them, such learning is also narrower in scope. The substitution principle, in contrast, requires more thought to apply but also makes it easier to spot heuristics you haven’t learned about before.
I’m claiming that the substitution principle isn’t so much a model as a (non-information-adding) rephrasing of the existence of heuristics, so there isn’t much of a “mistake” to be made—the rephrasing isn’t actually wrong, just unhelpful.
Unless of course you actually think the new question is explicitly represented in the brain in a way similar to how a question read or heard would be, in which case I think you’ve made that mistake in every single one of your examples, unless you have data to back up that assumption.
But it being possible to misapply something doesn’t make the thing itself bad.
An explanation being easy to misapply does make it bad, because that means it isn’t anticipation-constraining.
The substitution principle provides us with the (obvious in retrospect) anticipation that “if your brain instantly returns an answer to a difficult problem, the answer is most likely a simplified one”
This is exactly what I’d expect the moment after learning of the existence of natural heuristics. If the brain is answering a question in less time than I’d expect it to take to calculate/retrieve the answer, obviously it’s doing something else. What this post seems to be trying to add is that “doing something else” can be refined to “answering a different question”—but since the brain is providing output of type Answer, any output will be “answering a different question”, so it’s not actually a refinement.
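If it helps, the objection can be put in code. The following is a toy sketch (the function names and values are mine, purely for illustration, and come from no actual study): given any output of type Answer, one can always manufacture a question that it answers exactly, so the description “answered a different question” rules nothing out by itself unless the substituted question is constrained in advance.

```python
# Toy sketch: any output of type Answer admits a post-hoc "substituted
# question", so the bare claim of substitution is trivially satisfiable.

def heuristic_judgment(question: str) -> float:
    """Stand-in for whatever fast answer the brain throws up."""
    return 0.7  # an arbitrary output of type Answer

def post_hoc_question(answer: float) -> str:
    """Manufacture, after the fact, a question the answer fits exactly."""
    return f"What number equals {answer}?"

answer = heuristic_judgment("How happy are you with your life these days?")
print(post_hoc_question(answer))  # a "substituted question" exists for any output
```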
It’s possible you just wanted to explicitly state a principle that happened to be implicitly obvious to me, in which case we have no disagreement. But the length of the post and the fact that you bothered to cite Kahneman seem to me to indicate that you’re trying to say something more substantial, in which case I’ve missed it.
I disagree that it fails to be a model. It predicts what types of information will be used by the agent (i.e. answers to simpler questions). Though Kahneman’s book presents all this in a glossed-over, readable way, his actual research papers do combine this with anchoring effects to specifically control and test that certain answers to anchor questions are being substituted for answers to more difficult questions. It’s actually quite powerful as a model.
What definition does Kahneman use for “simpler”?

I’m no expert on this, but one main thing he uses as a proxy is pupil dilation. In arithmetical tasks, there is a strong correlation between pupil dilation, reported mental effort, and time taken to finish the task. So the same person, when asked to do an unfamiliar modular arithmetic problem, will show more dilated pupils, report having a harder time, and take longer than when solving a similar but more familiar numerical problem, like adding large numbers. Kahneman applied similar approaches to moral and ethical questions.
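A minimal sketch of the kind of proxy relationship described above, with fabricated numbers (the effect sizes and noise levels are my assumptions, not Kahneman’s data): if pupil dilation, reported effort, and time-to-answer all track a common difficulty factor, they should correlate strongly with one another.

```python
# Fabricated illustration: three noisy proxies of one latent "difficulty"
# factor should show strong pairwise correlations.
import numpy as np

rng = np.random.default_rng(0)
difficulty = rng.uniform(0, 1, 200)              # latent task difficulty
dilation = difficulty + rng.normal(0, 0.1, 200)  # pupil dilation proxy
effort = difficulty + rng.normal(0, 0.1, 200)    # self-reported effort
seconds = difficulty + rng.normal(0, 0.1, 200)   # time taken to answer

# Rows are variables; the off-diagonal entries come out near 0.9 here.
print(np.corrcoef([dilation, effort, seconds]))
```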
I don’t dispute that his findings are difficult to interpret and that many people (including Kahneman) probably overstretch to make them fit a compelling story (Kahneman even admits as much when discussing narrative bias). But the overall model, that when your System 1 hits a cognitive wall it demands that System 2 give it a compelling story about why this is so, seems to be well confirmed. If you’re willing to accept that things like pupil dilation are a proxy for how difficult a question feels to the answerer, then the anchoring-controlled experiments show that System 2 is lazy and wants the cheapest, quickest answer to a hard problem, and that it will often substitute the answer to a question it was already thinking about, or to a question of considerably less cognitive strain (as measured by pupil dilation, etc.).
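As a toy version of what such an anchoring-controlled design might look like (the weight and noise values below are invented for illustration, not taken from Kahneman’s papers): two groups answer the same hard question after seeing different anchors, and if an anchor-derived answer is being substituted, the group means shift toward the respective anchors.

```python
# Invented-numbers sketch of an anchoring-controlled substitution test.
import numpy as np

rng = np.random.default_rng(1)

def simulate_answers(anchor, n=100, true_value=50.0, w=0.4):
    # w is the assumed weight of the substituted anchor answer (hypothetical)
    return w * anchor + (1 - w) * true_value + rng.normal(0, 5, n)

low = simulate_answers(anchor=10)   # group shown a low anchor
high = simulate_answers(anchor=90)  # group shown a high anchor
print(low.mean(), high.mean())      # ~34 vs ~66: means shift toward the anchors
```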
It’s possible you just wanted to explicitly state a principle that happened to be implicitly obvious to me, in which case we have no disagreement.
I think this is the case. The principle seems to me obvious in retrospect, but it did not feel obvious before I’d read Kahneman.
Also, I was thinking about using the principle as a tool for teaching rationality, and this post was to some extent written as an early draft “how would I explain biases and heuristics to someone who’s never heard about them before” article, to be followed by concrete exercises of the Sunk Costs type, which I’m about to start designing.
The “substitution principle” isn’t as trivial as both of you conclude. The claim is that an easier-to-answer question is substituted for the actual question when System 1 can’t answer the harder one. That’s not the same as saying a simplifying heuristic is involved. The difference is that, to accord with the substitution principle, you simplify the question and then answer the simplified question by valid means. In the case of generic heuristic substitution, you use a heuristic that may not answer any plausible question, except as an approximation. The “substitution principle” thus constrains the candidate heuristics further than ordinary heuristics do (by limiting them to exact answers to substituted questions). The “substitution principle” is an elegant theory, although I don’t know whether it’s true. (I haven’t read Kahneman’s newest book.)
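One way to see the distinction (a rough sketch with toy stand-in functions of my own, not standard terminology): under the substitution principle the fast answer must factor through an easier question that is answered exactly, whereas a generic heuristic need only approximate the original answer somehow.

```python
# Toy contrast between the two models of fast answers (all mappings invented).

def simplify(question: str) -> str:
    # Substitution step: a hard global question becomes an easy local one.
    table = {"How happy are you with your life?": "What is my mood right now?"}
    return table.get(question, question)

def exact_answer(easy_question: str) -> str:
    # The easier question is then answered correctly, by valid means.
    return {"What is my mood right now?": "pretty good"}.get(easy_question, "unknown")

def substitution_principle(question: str) -> str:
    return exact_answer(simplify(question))  # factors through an easier question

def generic_heuristic(question: str) -> str:
    return "pretty good"  # any cheap approximation; no easier question implied

print(substitution_principle("How happy are you with your life?"))
```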
A better summary might be: if you intuit a quick answer to a complex question, it’s often instead an answer to a related question about your current mental/emotional state, and as such is subject to noticeable biases.