> I don’t think I know of any person who’s demonstrated this who thinks risk is under, say, 10%
If you mean risk of extinction or existential catastrophe from AI at the time AI is developed, it seems really hard to say, as I think that's been estimated even less often than other aspects of AI risk (e.g. risk this century) or than x-risk as a whole.
I think the only people (maybe excluding commenters who don’t work on this professionally) who’ve clearly given a greater-than-10% estimate for this are:

- Buck Shlegeris (50%)
- Stuart Armstrong (33-50% chance humanity doesn’t survive AI)
- Toby Ord (10% existential risk from AI this century, but 20% when conditioning on the AI transition happening; see the rough arithmetic below)
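As a rough gloss on how the unconditional and conditional framings relate (this is my own simple decomposition, not something Ord spells out, and it ignores AI risk in worlds where the transition never happens):

$$
P(\text{x-catastrophe from AI this century}) \approx P(\text{AI transition this century}) \times P(\text{x-catastrophe from AI} \mid \text{transition})
$$

On Ord's numbers, that would imply very roughly $0.10 / 0.20 = 50\%$ for the AI transition happening this century.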
Meanwhile, people who I think have effectively given <10% estimates for that (judging from estimates that weren’t conditioned on when AI was developed; all from my database):

- Very likely MacAskill (well below 10% for extinction as a whole in the 21st century)
- Very likely Ben Garfinkel (0-1% x-catastrophe from AI this century)
- Probably the median FHI 2008 survey respondent (5% for AI extinction in the 21st century)
- Probably Pamlin & Armstrong in a report (0-10% for unrecoverable collapse or extinction from AI this century)
  - But then Armstrong separately gave a higher estimate
  - And I haven’t actually read the Pamlin & Armstrong report
- Maybe Rohin Shah (some estimates in a comment thread)
(Maybe Hanson would also give <10%, but I haven’t seen explicit estimates from him, and his reduced focus on AI, and lower “doominess” about it, may be because he thinks timelines are longer and that other things may happen first.)
I’d personally consider all the people I’ve listed to have demonstrated at least a fairly good willingness and ability to reason seriously about the future, though there’s perhaps room for reasonable disagreement here. (With the caveat that I don’t know Pamlin and don’t know precisely who was in the FHI survey.)