What aspect of AI risk is deemed existential by these signatories? I doubt that they all agree on that point. Your publication “An Overview of Catastrophic AI Risks” lists quite a few but doesn’t differentiate between theoretical and actual.
Perhaps if you created a spreadsheet listing each risk mentioned in your paper, identified each as actual or theoretical, and asked each of those 300 luminaries to rate its probability, you’d have something a lot more useful.
The statement does not mention existential risk, but rather “the risk of extinction from AI”.
Which makes it an existential risk.
“An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population.”—FLI