Excellent point about the potential to create polarization by accusing one side of motivated reasoning.
This is tricky, though, because it also really distorts our epistemics if we think that's what's going on but won't say so publicly. My years of research on cognitive biases have made me think that motivated reasoning is ubiquitous, a large effect, and largely unrecognized.
One approach is to always mention motivated reasoning on the other side as well. Those of us most involved in the x-risk discussion have plenty of emotional reasons to believe in doom. These include in-group loyalty and the desire to be proven right once we've staked out a position. And laypeople who simply fear change may be motivated to see AI as a risk without any real reasoning.
But most doomers are also technophiles and AI enthusiasts. I try to monitor my biases, and I can feel a huge temptation to overrate arguments for safety and finally be able to say "let's build it!" We tend to believe that successful AGI would pretty rapidly usher in a better world, including potentially saving us and our loved ones from the pain and horror of disease and involuntary death. Yet we argue against building it.
It seems like motivated reasoning pushes harder on average in one direction on this issue.
Yet you're right that accusing people of motivated reasoning sounds like a hostile act if one doesn't already take it as likely that everyone engages in motivated reasoning. And the polarization that could cause would be a real additional problem. Avoiding it might be worth distorting our public epistemics a bit.
An alternative is to say that these are complex issues, and reasoning about likely future events with no real reference class is very difficult, so the arguments themselves must be evaluated very carefully. When they are, arguments for pretty severe risk seem to come out on top.
To be fair, I think that Knightian uncertainty plays a large role here; I think very high p(doom) estimates are just as implausible as very low ones.