There are good ways to argue that AI X-risk is not an extraordinary claim, but this is not one of them. Beyond the fact that “a derivation from these 5 axioms” does not make a claim “ordinary”, the axioms themselves are pretty suspect, or at least not simple.
“AI gets better, never worse” does not automatically imply to everyone that it gets better forever, or that it will soon surpass humans. “Intelligence always helps” is true, but non-obvious to many people. “No one knows how to align AI” is something that some would strongly disagree with, since they have never seen their personal idea disproved. “Resources are finite” jumps straight to conclusions that require justification, including assumptions about the AI’s goals. “AI cannot be stopped” is strongly counterintuitive to most people, especially since they’ve been watching movies about exactly that for their whole lives.
And none of these arguments are even necessary, because AI being risky is already the mainstream position in society. The average person believes that there are dangers, even if polls are inconsistent about whether an absolute majority specifically worries about AI wiping out humanity. It is the AI optimist’s position that is the “weird”, “extraordinary” one.
Contrast the post with the argument from stopai’s homepage: “OpenAI, DeepMind, Anthropic, and others are spending billions of dollars to build godlike AI. Their executives say they might succeed in the next few years. They don’t know how they will control their creation, and they admit humanity might go extinct. This needs to stop.” In that framing, it is hard to argue that it’s an extraordinary claim.