We will develop better-than-human AGI that pursues goals autonomously
Those goals won’t match human goals closely enough
Doom of some sort
This is one of the better short arguments for AI doom I have heard so far. It does not obviously make AI doom seem either overly likely or overly unlikely.
In contrast, if one presents reasons for doom (or really for most anything) as a long list, the conclusion tends to seem either very likely or very unlikely, depending on whether it follows from the disjunction or the conjunction of the given reasons. That is, depending on whether we have a long list of statements each of which is sufficient for AI doom, or a long list of statements all of which are necessary for it.
It therefore seems that people who think AI risk is low and people who think it is high are much more likely to agree on presenting the AI doom case as a short argument rather than a long one. Then they merely disagree about the conclusion, not about the form of the argument itself, which could help a lot with identifying object-level disagreements.