None of that sounds to me like what was requested in the grandparent.
Sure, theoretically, biases are worse than perfect rationality. No problem there.
But in practice, is having a bunch of biases directing many of our actions significantly harmful on average, as compared to some other method of bounded rationality? I don’t think I’ve seen a study on this.
You either didn’t read the (relative to this) grandparent or you have an unhealthily bizarre definition of significantly harmful.

No, I read the grandparent, and I doubt I have such a definition.
Yes, people smoke cigarettes, and let’s assume for the sake of argument that it counts as “significantly harmful”. Now imagine (hypothetically) that the same mechanism of thought also causes them to pursue lifestyles that grant them an overall 20% increase in healthy lifespan as compared to nonsmokers. In that scenario, the bias that causes smoking cigarettes is not “significantly harmful on average”.
Now consider another hypothetical where people smoke cigarettes due to their biases, and other people without those biases have a significantly higher incidence of being run over by buses. Then, the biases that cause smoking cigarettes are not “significantly harmful on average” as compared to the alternative.
Of course some of our biases are going to be a hindrance in everyday life.
I see you assuming exactly what taw claims we’re assuming. I don’t see you citing any empirical studies showing that it is the case.
Yes, people smoke cigarettes, and let’s assume for the sake of argument that it counts as “significantly harmful”. Now imagine (hypothetically) that the same mechanism of thought also causes them to pursue lifestyles that grant them an overall 20% increase in healthy lifespan as compared to nonsmokers. In that scenario, the bias that causes smoking cigarettes is not “significantly harmful on average”.
If anything like this were the case, I expect insurance companies would have picked up on it by now. It’s possible to imagine a situation where the same bias balances out to not being harmful, but we have enough evidence to strongly suspect that doesn’t describe the world we live in.
I see you assuming exactly what taw claims we’re assuming. I don’t see you citing any empirical studies showing that it is the case.
I cited the known and well-justified behaviour of insurance companies. Compared to that, most ‘empirical studies’ are barely more than slightly-larger-than-average anecdotes.
Yes, people smoke cigarettes, and let’s assume for the sake of argument that it counts as “significantly harmful”. Now imagine (hypothetically) that the same mechanism of thought also causes them to pursue lifestyles that grant them an overall 20% increase in healthy lifespan as compared to nonsmokers. In that scenario, the bias that causes smoking cigarettes is not “significantly harmful on average”.
Yes, I could assume that any obvious failure mode of our biases not serving us well in the present day environment is actually balanced out by some deep underlying benefit of that bias that still applies now and that I haven’t thought of yet. But that would be an error somewhere between privileging the hypothesis and outright faith in the anthropomorphised benevolence of our genetic heritage.
Edit: DVNM (Down Vote (of parent as of present time) Not Me!)
Now consider another hypothetical where people smoke cigarettes due to their biases, and other people without those biases have a significantly higher incidence of being run over by buses. Then, the biases that cause smoking cigarettes are not “significantly harmful on average” as compared to the alternative.
If that were true, the real cognitive defect would be an inability to distinguish between two different problems: choosing whether to smoke cigarettes and avoiding being run over by buses. Both the “biased” and the “unbiased” people are throwing away enough information about at least one of these problems that the two come to seem isomorphic, with the effective strategy in one mapped onto the ineffective strategy in the other. The underlying problem behind the bias is throwing away that information.
Yes, people smoke cigarettes, and let’s assume for the sake of argument that it counts as “significantly harmful”. Now imagine (hypothetically) that the same mechanism of thought also causes them to pursue lifestyles that grant them an overall 20% increase in healthy lifespan as compared to nonsmokers. In that scenario, the bias that causes smoking cigarettes is not “significantly harmful on average”.
Now consider another hypothetical where people smoke cigarettes due to their biases, and other people without those biases have a significantly higher incidence of being run over by buses. Then, the biases that cause smoking cigarettes are not “significantly harmful on average” as compared to the alternative.
I can imagine a universe where Omega goes around and gives a hundred dollars to each person who is susceptible to some bias. This doesn’t mean that this example has any connection to the real world in arguing that the bias is somehow a good thing.
Now consider another hypothetical where people smoke cigarettes due to their biases, and other people without those biases have a significantly higher incidence of being run over by buses. Then, the biases that cause smoking cigarettes are not “significantly harmful on average” as compared to the alternative.
In addition to what others said (hypothetical examples are irrelevant to actual reality), it is not clear what you are comparing the biases to. What does “the same mechanism of thought” mean in this case? Thought mechanisms don’t have clear boundaries. If some pattern of thought is beneficial in some situations and harmful in others, we are free to call “bias” only its harmful applications.