When it comes to downside risk, there are often more unknown unknowns that produce harm than unknown unknowns that produce benefit. And for the known unknowns, people are usually biased to overestimate the positive effects and underestimate the negative ones.
This seems plausible to me. Would you like to expand on why you think this is the case?
The asymmetry between creation and destruction? (I.e., it’s harder to build than it is to destroy.)
There are multiple reasons. Let’s say you have nine different courses of action, all of which actually have utility −1. Your evaluation of the actions’ utilities is noisy, so your estimates come out as −5, −4, −3, −2, −1, 0, 1, 2, 3. The options with negative scores won’t be on your mind, and you will only think about doing the options that score highly.
Even if some of your options are actually beneficial, if your evaluation function has enough noise, the fact that you pay no attention to the options that score negatively means that your estimates for the options you do consider are biased upwards.
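(For concreteness, here’s a minimal simulation of that selection effect, in the spirit of the nine-options example above. The noise level and trial count are illustrative assumptions of my own, not figures from the discussion.)

```python
import random

# Minimal sketch of the selection effect described above: nine options that
# all truly have utility -1, evaluated with noisy estimates. We look at the
# single best-scoring option, since that's the one you'd actually consider.
# NOISE_SD and TRIALS are illustrative assumptions.

random.seed(0)
TRIALS = 100_000
N_OPTIONS = 9
TRUE_UTILITY = -1.0
NOISE_SD = 2.5  # standard deviation of the evaluator's estimation error

gap_sum = 0.0
looked_positive = 0
for _ in range(TRIALS):
    estimates = [TRUE_UTILITY + random.gauss(0, NOISE_SD) for _ in range(N_OPTIONS)]
    best = max(estimates)            # the option that ends up "on your mind"
    gap_sum += best - TRUE_UTILITY   # how far its estimate overshoots the truth
    looked_positive += best > 0      # does the best-looking option seem net-positive?

print(f"Average overestimate of the best-looking option: {gap_sum / TRIALS:.2f}")
print(f"Share of trials where it looked net-positive: {looked_positive / TRIALS:.1%}")
```

With these assumed numbers, the top-scoring option’s estimate overshoots its true utility by several points on average, and it almost always looks net-positive even though every option is actually harmful.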
Confirmation bias will then make you further want to believe that the options you pursue are positive.
Most systems in our modern world are not anti-fragile and suffer if you expose them to random noise.
I find the idea in those first two paragraphs quite interesting. It seems plausible, and isn’t something I’d thought of before. It sounds like it’s essentially applying the underlying idea of the optimiser’s/winner’s/unilateralist’s curse to one person evaluating a set of options, rather than to a set of people evaluating one option?
I also think confirmation bias or related things will tend to bias people towards thinking options they’ve picked, or are already leaning towards picking, are good. Though it’s less clear that confirmation bias will play a role when a person has only just begun evaluating the options.
Most systems in our modern world are not anti-fragile and suffer if you expose them to random noise.
This sounds more like a reason why many actions (or a “random action”) will make things worse (which seems quite plausible to me), rather than a reason why people would be biased to overestimate benefits and underestimate harms from actions. Though I guess that if people fail to recognise this real reason why many/random actions may make things worse, that failure could lead them to systematically overestimate how positive actions will be.
In any case, I can also think of biases that could push in the opposite direction. E.g., negativity bias and status quo bias. My guess would be there are some people and domains where, on net, there tends to be a bias towards overestimating the value of actions, and some people and domains where the opposite is true. And I doubt we could get a strong sense of how it all plays out just by theorising; we’d need some empirical work. (Incidentally, Convergence should also be releasing a somewhat related post soon, which will outline 5 potential causes of too little caution about information hazards, and 5 potential causes of too much caution.)
Finally, it seems worth noting that, if we do have reason to believe that, by default, people tend to overestimate the benefits and underestimate the harms that an action will cause, that wouldn’t necessarily mean we should abandon the pure EV perspective. Instead, we could just incorporate an adjustment to our naive EV assessments to account for that tendency/bias, in the same way we should adjust for the unilateralist’s curse in many situations. And the same would be true if it turned out that, by default, people had the opposite bias. (Though if there are these biases, that could mean it’d be unwise to promote the pure EV perspective without also highlighting the bias that needs adjusting for.)
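(As a hedged sketch of what such an adjustment could look like, assuming one standard modelling choice: shrink the naive EV estimate toward a pessimistic prior, as in the usual Bayesian correction for the optimiser’s/unilateralist’s curse. All parameter values here are illustrative assumptions, not anything from this discussion.)

```python
# Hypothetical adjustment to a naive EV estimate: posterior mean under a
# normal prior on true utility and normal estimation noise. The prior mean,
# prior variance, and noise variance are all assumed for illustration.

def adjusted_ev(naive_ev: float,
                prior_mean: float = -1.0,  # assumption: actions mildly negative by default
                prior_var: float = 1.0,
                noise_var: float = 6.25) -> float:
    """Shrink a noisy EV estimate toward the prior mean (normal-normal model)."""
    weight = prior_var / (prior_var + noise_var)  # how much to trust the estimate
    return prior_mean + weight * (naive_ev - prior_mean)

# An option that naively looks like +3 shrinks most of the way back:
print(adjusted_ev(3.0))  # ~= -0.45, i.e. still slightly negative after adjusting
```

The same machinery works in the other direction: a prior mean above the naive estimates would adjust for a bias towards excessive caution instead.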
As far as the third point goes: for most systems that aren’t anti-fragile, the effects of unknown unknowns are more likely to be harmful than beneficial.
Yes, this seems plausible to me. What I was saying is that that would be a reason why the EV of arbitrary actions might often be negative, rather than directly being a reason why people will overestimate the EV of arbitrary actions. The claim “People should take the pure EV perspective” is consistent with the claim “A large portion of actions have negative EV and shouldn’t be taken”. This is because taking the pure EV perspective would involve assessing both the benefits and risks (which could include adjusting for the chance of many unknown unknowns that would lead to harm), and then deciding against doing actions that appear negative.