De-Centering Bias
Summary: A perspective that synthesises awareness of biases with other considerations, including game theory, virtue ethics and knowledge of your own limitations.
Intro: Adopting a way of thinking in which you are aware of your own biases is clearly important, and it is one of the areas of rationality that rests on the most solid evidential grounds. However, this needs to be reconciled with arguments from evolutionary psychology that our psychological functioning has evolved to maximise our reproductive fitness, a big part of which is survival. This post attempts to synthesise these two views. I would also like to suggest a new term, De-Centering Bias, to describe this technique, given how it marks a shift from bias being the central explanation to bias having to share the stage with other considerations. The best way to understand this is by example:
(This post was originally titled Post-Bias Thinking (or Post-Bias-ism). Unsurprisingly, this was controversial, so I decided to rename it.)
Example 1: One of the most famous experiments in psychology is the Stanford Marshmallow Experiment, which was originally interpreted as showing that children who could resist eating one marshmallow now for the promise of two later tended to have better life outcomes. However, later interpretations suggested that eating the marshmallow immediately could actually be rational for children in environments where such promises were not reliable, and that growing up in such an environment could explain the results by being a common cause of both the impatience and the worse life outcomes.
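To see why, here is a rough expected-value sketch (my own illustration, with a made-up promise-keeping probability): waiting yields 2p marshmallows in expectation against a guaranteed 1 for eating now, so eating immediately is the better bet whenever p < 0.5.

```python
def expected_value_of_waiting(p_promise_kept: float) -> float:
    """Expected number of marshmallows from waiting, if the adult
    keeps the promise of a second marshmallow with probability p."""
    return 2 * p_promise_kept

def should_wait(p_promise_kept: float) -> bool:
    """Waiting beats eating the single marshmallow now (value 1)
    only when the promise is reliable enough (p > 0.5)."""
    return expected_value_of_waiting(p_promise_kept) > 1

print(should_wait(0.9))  # True: in a reliable environment, waiting is rational
print(should_wait(0.3))  # False: in an unreliable one, eating now is rational
```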
Example 2: Revenge often makes people take actions that harm both themselves and the other person. If we model such an actor as a self-interested rational agent, we will conclude that they are being “irrational”. On the other hand, if an actor is willing to go to such extreme lengths to punish anyone who wrongs them, then there is a strong incentive not to wrong them in the first place (Scott has argued that it could even be considered charitable, because it also creates a disincentive within the wider community). In the Most Convenient World, the actor gains the benefits of such a threat existing whilst never having to carry it out.
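To make the game theory concrete, here is a toy sequential game (the payoff numbers are invented purely for illustration): a victim who would “rationally” swallow the wrong gets exploited, while one credibly committed to costly revenge never needs to carry the threat out.

```python
# Toy payoffs as (wrongdoer, victim); the numbers are invented for illustration.
PAYOFFS = {
    ("dont_wrong", None):      (0, 0),
    ("wrong", "let_it_go"):    (2, -2),
    ("wrong", "take_revenge"): (-3, -3),  # revenge is costly for both parties
}

def victim_response(committed_to_revenge: bool) -> str:
    """Once wronged, a narrowly self-interested victim lets it go
    (-2 beats -3); a committed avenger punishes regardless of cost."""
    return "take_revenge" if committed_to_revenge else "let_it_go"

def wrongdoer_choice(committed_to_revenge: bool) -> str:
    """The wrongdoer anticipates the victim's response and only
    wrongs them if doing so pays more than leaving them alone."""
    response = victim_response(committed_to_revenge)
    if PAYOFFS[("wrong", response)][0] > PAYOFFS[("dont_wrong", None)][0]:
        return "wrong"
    return "dont_wrong"

print(wrongdoer_choice(committed_to_revenge=False))  # "wrong": the 'rational' victim gets exploited
print(wrongdoer_choice(committed_to_revenge=True))   # "dont_wrong": the threat alone deters
```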
Example 3: The sunk cost fallacy is the tendency of humans to want to continue a project they have invested a lot of resources (time, money, effort) into, even if the project is not valuable enough to be worth the resources required to finish it. When discussing this fallacy, we need to be aware that humans are not rational agents: we will often be too lazy to engage in activities that would be worthwhile. Wanting to continue projects in which we have invested large amounts of time helps counter this tendency. So if we were able to press a button and remove sunk cost considerations from our brains, I would not be surprised if this made us less effective as agents (as Eliezer says, it is dangerous to be half a rationalist). But beyond that, taking a Virtue Ethics approach, every time you complete a project you become more like the kind of person who completes projects, so sometimes it might be worth completing a project just so that you have completed it, rather than for the actual value the project provides. In this case, the bias seems to make us more rational by mitigating a different way in which we are irrational.
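As a rough sketch of how one bias can offset another (a toy model with made-up parameters, not anything from the literature): suppose laziness inflates the felt cost of remaining work; a modest sunk-cost pull can then push the agent back towards the decision a fully rational agent would make.

```python
def finishes_project(value, remaining_cost, sunk, laziness=1.5, sunk_weight=0.0):
    """An agent finishes if perceived benefit exceeds perceived cost.
    Laziness inflates the felt cost of the remaining work; a sunk-cost
    'bias' adds felt value proportional to resources already invested."""
    perceived_benefit = value + sunk_weight * sunk
    perceived_cost = laziness * remaining_cost
    return perceived_benefit > perceived_cost

# A project genuinely worth finishing: value 10 for 8 units of work left.
# The lazy agent wrongly abandons it...
print(finishes_project(value=10, remaining_cost=8, sunk=20))                   # False
# ...but the same agent with a sunk-cost pull makes the right call.
print(finishes_project(value=10, remaining_cost=8, sunk=20, sunk_weight=0.2))  # True
```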
De-Centering Bias is not:
The belief that “Bias-Centered Thinking” is wrong all or even most of the time, as opposed to being a challenge that forces us to refine our thoughts further.
The belief that all biases have benefits attached. Sometimes attributes evolve merely as side effects. Sometimes they harm us, but not enough to affect our reproductive success (h/t Alwhite).
Limited to game-theoretic considerations. See the discussion of the sunk cost fallacy above.
In conclusion, De-Centering Bias is incredibly simple. All I’m asking you to do is pause for a second after you’ve been told that something is a bias and think about whether there are any countervailing considerations. I believe this is an important area to examine, as you could probably fill an entire sequence by extending this analysis to other biases.
Suggestions for Further Discussion: What is something that is generally considered a bias or fallacy, but where you believe that there are also other considerations?