Well, it’s an empirical question whether current dis-rationality is caused more by cognitive bias or by bounded rationality with the bound set “too low.” If it’s the latter, then raising the baseline will improve the correlation between political decisions and truth.
And I know it’s seldom wise to bet against motivated cognition, but if there really were additional effective dark-arts techniques that the average lawyer could implement, I would expect them to be in use already. There’s already a lot at stake in the average lawyer’s job.
What is the difference between a cognitive bias and a bound on rationality? I thought those were two ways of framing the same phenomenon.
I like your theory of efficient dark arts. (I hope you call it the efficient-dark-arts hypothesis.) I think you’re right that lawyers are already strongly motivated to exploit all effective dark-arts techniques. I was not suggesting the existence of unexploited yet effective techniques. I was suggesting that changing the “baseline” (is this a specific application of raising the sanity waterline?) may increase the effectiveness of certain techniques, raising them from pointless to practicable.
Here it is again, more concretely. There would be no point in constructing a fallacious argument, in the language of Bayesian probability, to persuade someone who had no previous understanding of that language. In the present world, that’s almost everyone. So lawyers don’t spend much time concocting pseudo-Bayesian sophisms. But if enough people learn about probability theory, it might pay for lawyers to do just that.
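To make that concrete, here is a minimal sketch of the kind of pseudo-Bayesian sophism I have in mind, essentially the prosecutor’s fallacy dressed in Bayesian vocabulary (the numbers are my own illustration, not from any real case). Suppose a forensic match has a 1-in-1000 chance of occurring if the defendant is innocent, and the defendant was picked out of a pool of roughly 10,000 plausible suspects. The honest Bayesian calculation is

$$P(\text{guilty} \mid \text{match}) = \frac{P(\text{match} \mid \text{guilty})\,P(\text{guilty})}{P(\text{match} \mid \text{guilty})\,P(\text{guilty}) + P(\text{match} \mid \text{innocent})\,P(\text{innocent})} = \frac{1 \times 0.0001}{1 \times 0.0001 + 0.001 \times 0.9999} \approx 0.09.$$

The sophism quotes the likelihood, “only a 1-in-1000 chance the match is a coincidence,” and invites the audience to hear a 99.9% posterior probability of guilt. The trick only pays off with an audience that recognizes the Bayesian vocabulary but hasn’t internalized the base-rate step, which is why it isn’t worth a lawyer’s time today.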
Thus educating lots of people in probability could usher in new fallacies. This is what we should expect from giving motivated thinkers new ways to think—they’ll think in new ways, motivatedly.
As I understand the term, bounded rationality (a.k.a. rational ignorance) refers to the theory that a person might make the rational (perhaps not our definition of rational) decision not to learn more about some topic. Consider Alice. On balance, she has reason to trust the reliability of her education, and her education did not mention existential risk from AI going FOOM (which she has reason to expect would have been mentioned if it were a “major” risk). Therefore, she does not educate herself about AI development or advocate for sensible AI policies. If Alice were particularly self-aware, she’d probably agree that any decisions she made about AI would not be rational because of her lack of background knowledge of AI. But that wouldn’t bother her, because she doesn’t think that any AI-decisions exist in her life.
Note that the rationality of her ignorance depends on the correctness of her assumption that no AI-decisions exist in her life. As the Wiki says, “Rational ignorance occurs when the cost of educating oneself on an issue exceeds the potential benefit that the knowledge would provide.” Rational ignorance theory says that this type of ignorance is common across multiple topics.
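Put as a decision rule (my own formalization of that sentence, not a quote from the Wiki), ignorance of a topic $T$ is rational when

$$c(T) > \sum_i p_i\, v_i(T),$$

where $c(T)$ is the cost of learning about $T$, the $p_i$ are the probabilities of actually facing the decisions where that knowledge matters, and the $v_i(T)$ are the values of deciding those decisions better. Alice’s ignorance looks rational only because she sets every $p_i$ for AI-decisions to roughly zero; if that estimate is wrong, the inequality flips and the ignorance stops being rational.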
Compare that to Bob, who has taken AI classes but is not concerned about existential risk from AI because he does not want to believe in existential risk. That’s motivated cognition. I agree that changing the level of ignorance would change the words in the fallacies that get invoked, but I would expect the amount of belief in the fallacies to be controlled by the amount of motivated cognition, not by how much the audience knows. Consider how explicitly racist arguments are no longer acceptable, yet those with motivated cognition towards racism are willing to accept equally unsupported-by-evidence arguments with the same racist implications. They “know” more, but they don’t choose better.
I thought rational ignorance was a part of bounded rationality—people do not investigate every contingency because they do not have the computational power to do so, and thus their decision-making is bounded by their computational power.
You have distinguished this from motivated cognition, in which people succumb to confirmation bias, seeing only what they want to see. But isn’t a bias just a heuristic, misapplied? And isn’t a heuristic a device for coping with limited computational capacity? It seems that a bias is just a manifestation of bounded rationality, and that this includes confirmation bias and thus motivated cognition.
Yes, bounded rationality and rational ignorance are consequences of the limits of human computational power. But humans have more than enough computational power to do better than in-group bias, anchoring effects, deferring to authority simply because it is authority, or believing something because we want it to be true.
We’ve had that capacity since recorded history began, but ordinary people tend not to notice that they are not considering all the possibilities. By contrast, it’s not uncommon for people to realize that they lack some relevant knowledge. That isn’t to say such realization is common, or that it’s easy to get people to admit, but it seems possible to change, which is much less clear for cognitive bias.