I think there is a valid point here about our biases and irrational behaviors depending on each other. My typical example is the planning fallacy combined with loss aversion, the fact that we feel losses more strongly than gains (and that pain can quickly outweigh pleasure). Since it is much easier to feel pain than pleasure, trying to maximize pleasure should tempt us to avoid risks altogether. But if, in addition, we underestimate risks, then we end up taking some risks after all, so the two biases partly compensate each other.
More generally, I think it comes from evolution not being a perfect designer. It grows brains full of bugs. But then another bug that lowers the impact of a previous bug has a net advantage, and will spread. Once you have both bugs, though, a fix for just one of them has a net disadvantage, and will not spread.
Something similar happens in computer programs. Programmers among us will remember cases where removing a bug revealed another bug that had previously been compensating for the first one. For example, a bug that reversed a sort order, and another bug elsewhere in the program that reversed it again, yielding the correct order. Or an off-by-one error in one direction at one point, compensated by an off-by-one in the other direction at another.
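A deliberately contrived sketch of the pattern (hypothetical function names, not from any real codebase) might look like this: each bug alone breaks the result, but together they cancel out, so fixing only one of them makes the program worse.

```python
def sort_ascending(values):
    # Bug 1: accidentally sorts in descending order.
    return sorted(values, reverse=True)

def smallest(values):
    # Bug 2: takes the last element instead of the first,
    # which happens to compensate for the reversed sort above.
    return sort_ascending(values)[-1]

print(smallest([5, 1, 9, 3, 7]))  # 1 -- correct, because the two bugs cancel out
```

Fixing only `sort_ascending` (dropping `reverse=True`) would make `smallest` return 9, the worst possible answer; the same goes for fixing only `smallest`. Only fixing both restores correctness, which is exactly the unstable intermediate state described below.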
But it follows from all this that debiasing, or rationality, is not harmful in itself; it is the intermediate state, where you are only partly debiased, that is unstable. Of course, this creates lots of practical problems when trying to debias yourself (or someone else).