Something to consider: Most people already agree that AI risk is real and serious. If you’re discussing it in areas where it’s a fringe view, you’re dealing with very unusual people, and might need to put together very different types of arguments, depending on the group. That said...

stop.ai’s one-paragraph summary is:
OpenAI, DeepMind, Anthropic, and others are spending billions of dollars to build godlike AI. Their executives say they might succeed in the next few years. They don’t know how they will control their creation, and they admit humanity might go extinct. This needs to stop.
The rest of the website has a lot of well-written stuff.

Some might be receptive to things like Yudkowsky’s TED talk:
Nobody understands how modern AI systems do what they do. They are giant, inscrutable matrices of floating-point numbers that we nudge in the direction of better performance until they inexplicably start working. At some point, the companies rushing headlong to scale AI will cough out something that’s smarter than humanity. Nobody knows how to calculate when that will happen. My wild guess is that it will happen after zero to two more breakthroughs the size of transformers.
What happens if we build something smarter than us that we understand that poorly? Some people find it obvious that building something smarter than us that we don’t understand might go badly. Others come in with a very wide range of hopeful thoughts about how it might possibly go well. Even if I had 20 minutes for this talk and months to prepare it, I would not be able to refute all the ways people find to imagine that things might go well.
But I will say that there is no standard scientific consensus for how things will go well. There is no hope that has been widely persuasive and stood up to skeptical examination. There is nothing resembling a real engineering plan for us surviving that I could critique. This is not a good place in which to find ourselves.
And of course, you could appeal to authority by linking the CAIS letter, and maybe the Bletchley Declaration if statements from the international community will mean anything.
(None of those are strictly two-paragraph explanations, but I hope it helps anyway.)
I think people are concerned about things like job loss, garbage customer support, election manipulation, etc., not extinction?
AIPI (AI Policy Institute) Poll:
“86% of voters believe AI could accidentally cause a catastrophic event, and 70% agree that mitigating the risk of extinction from AI should be a global priority alongside other risks like pandemics and nuclear war”
“76% of voters believe artificial intelligence could eventually pose a threat to the existence of the human race, including 75% of Democrats and 78% of Republicans”
Also, this:
“Americans’ top priority is preventing dangerous and catastrophic outcomes from AI”—with relatively few prioritizing things like job loss, bias, etc.