As I understand the term, bounded rationality (a.k.a. rational ignorance) refers to the theory that a person might make the rational (perhaps not our definition of rational) decision not to learn more about some topic. Consider Alice. On balance, she has reason to trust the reliability of her education, and her education did not mention existential risk from AI going FOOM (which she has reason to expect would be mentioned if it was a “major” risk). Therefore, she does not educate herself about AI development or advocate for sensible AI policies. If Alice were particularly self-aware, she’d probably agree that any decisions she made about AI would not be rational because of her lack of background knowledge of AI. But that wouldn’t bother her because she doesn’t think that any AI-decisions exist in her life.
Note that the rationality of her ignorance depends on the correctness of her assertion that AI-decisions do not exist in her life. As the Wiki says, “Rational ignorance occurs when the cost of educating oneself on an issue exceeds the potential benefit that the knowledge would provide.” Rational ignorance theory says that this type of ignorance is common across multiple topics.
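The cost-benefit rule in that quote can be sketched as a one-line decision procedure. This is a toy illustration, not a real model: the function name and all the numbers are hypothetical, and Alice's error is captured by her assigning (incorrectly, perhaps) zero probability to an AI-decision ever arising in her life.

```python
# Toy sketch of the rational-ignorance rule: learn about a topic only if
# the expected benefit of the knowledge exceeds the cost of acquiring it.
# All names and numbers here are hypothetical.

def should_learn(cost_of_learning, p_decision_arises, benefit_if_it_does):
    """Return True when the expected benefit exceeds the learning cost."""
    expected_benefit = p_decision_arises * benefit_if_it_does
    return expected_benefit > cost_of_learning

# Alice assigns ~zero probability that an AI-decision will ever arise for
# her, so even an enormous benefit cannot justify the study time:
print(should_learn(cost_of_learning=100.0,
                   p_decision_arises=0.0,
                   benefit_if_it_does=1_000_000.0))  # False
```

The rule is only as good as its inputs: if Alice's probability estimate is wrong, her ignorance stops being rational, which is exactly the caveat above.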
Compare that to Bob, who has taken AI classes but is not concerned about existential risk from AI because he does not want to believe in existential risk. That’s motivated cognition. I agree that changing the level of ignorance would change the words in the fallacies that get invoked, but I would expect that the amount of belief in the fallacies was controlled by the amount of motivated cognition, not the amount the audience knew. Consider how explicitly racist arguments are no longer acceptable, but those with motivated cognition towards racism are willing to accept equally unsupported-by-evidence arguments that have the same racist implications. They “know” more, but they don’t choose better.
I thought rational ignorance was a part of bounded rationality—people do not investigate every contingency because they do not have the computational power to do so, and thus their decision-making is bounded by their computational power.
You have distinguished this from motivated cognition, in which people succumb to confirmation bias, seeing only what they want to see. But isn’t a bias just a heuristic, misapplied? And isn’t a heuristic a device for coping with limited computational capacity? It seems that a bias is just a manifestation of bounded rationality, and that this includes confirmation bias and thus motivated cognition.
Yes, bounded rationality and rational ignorance are consequences of the limits of human computational power. But humans have more than enough computational power to do better than in-group bias, anchoring effects, following authority simply because it is authority, or believing something because we want it to be true.
We’ve had that capacity since recorded history began, but ordinary people tend not to notice that they are not considering all the possibilities. By contrast, it’s not uncommon for people to realize that they lack some relevant knowledge. Which isn’t to say that realization is common or easy to get people to admit, but it seems possible to change, which is much less clear for cognitive bias.