Depends on exactly what one means by “experts”, but at least historically expert opinion, defined to mean “experts in AI”, seems to me to have performed pretty terribly. They mostly dismissed AGI happening, had timelines that often seemed transparently absurd, and their predictions were extremely framing dependent (the central result from the AI Impacts expert surveys is IMO that experts give timelines that differ by 20 years if you just slightly change the wording of how you are eliciting their probabilities).
Like, 5 years ago you could construct compelling arguments that there was near expert consensus against risks from AI. So clearly arguments today can’t be that much more robust, unless you have a specific story for why expert beliefs are now a lot smarter.
Sure, but experts could have failed to agree that AI is quite risky, and yet they do. This is important evidence in favour, especially to the extent they aren’t your ingroup.
I’m not saying people should consider it a top argument, but I’m surprised at where it falls in the ranking.
Agree I could have been clearer here. I was taking the premise of the expert opinion section of the post as given, which is that expert opinion is an argument in favor of AI existential risk.