I’m surprised that the least compelling argument here is Expert opinion.
Anyone want to explain to me why they dislike that one? It looks obviously good to me?
Depends on exactly what one means by “experts”, but at least historically, expert opinion in the sense of “experts in AI” seems to me to have performed pretty terribly. They mostly dismissed AGI happening, had timelines that often seemed transparently absurd, and their predictions were extremely framing-dependent (the central result of the AI Impacts expert surveys is IMO that experts give timelines that differ by 20 years if you just slightly change the wording used to elicit their probabilities).
Like, 5 years ago you could construct compelling arguments from near expert consensus against risks from AI. So clearly arguments today can’t be that much more robust, unless you have a specific story for why expert beliefs are now a lot smarter.
Sure, but experts didn’t have to agree that AI is quite risky, and they do. This is important evidence in favour, especially to the extent that they aren’t your ingroup.
I’m not saying people should consider it a top argument, but I’m surprised by how low it falls in the ranking.
Agreed, I could have been clearer here. I was taking the premise of the expert opinion section of the post as given, which is that expert opinion is an argument in favor of AI existential risk.
AI isn’t dangerous because of what experts think, and the arguments that persuaded the experts themselves are not “experts think this”. It would have been a misleading argument for Eliezer in 2000, when he was among the first people to think about it in the modern way, or for people who weren’t already rationalists in maybe 2017, before GPT was in the news and when AI x-risk was very niche.
I also have objections to its usefulness as an argument; “experts think this” doesn’t give me any inside view of the problem by which I can come up with novel solutions that the experts haven’t thought of. I think this especially comes up if the solutions need to be precise or extreme: if I were an alignment researcher, “experts think this” would tell me nothing about what math I should be writing, and if I were a politician, “experts think this” would be less likely to get me to come up with solutions that I think would work, and more likely to leave me with compromises between the expert coalition and my other constituents.
So, while it is evidence (experts aren’t anticorrelated with the truth), there’s better reasoning available that’s more entangled with the truth and gives more precise answers.
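To put a toy number on “it is evidence”: here is a minimal Bayesian sketch (with entirely made-up prior and likelihoods, purely for illustration) of how “experts voice concern” can be a real but modest update, which is roughly the distinction I’m drawing.

```python
# Toy Bayes update: how much should "experts say X is risky" move you on X?
# All numbers below are made up purely for illustration.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(X | evidence) given a prior and the two likelihoods."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

prior = 0.10              # hypothetical prior on the claim in question
p_agree_if_true = 0.6     # chance experts would voice concern if the claim is true
p_agree_if_false = 0.3    # chance they would voice concern anyway (hype, framing effects, etc.)

print(posterior(prior, p_agree_if_true, p_agree_if_false))  # ~0.18: a real but modest update
```

With a likelihood ratio of only 2:1, the posterior barely moves; reasoning that is more entangled with the territory is what buys you much larger likelihood ratios.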
Most such ‘experts’ have knowledge and metis in how to do engineering with machine learning, not in predicting the outcomes of future scientific insights that may or may not happen, especially when asked about questions like ‘is this research going to cause an event whose measured impacts will be larger in scope than the industrial revolution’. I don’t believe that there are relevant experts, nor that I should straightforwardly defer to the body of people with a related-but-distinct topic of expertise.
Often there are many epistemic corruptions within large institutions of people; they can easily be borked in substantive ways that sometimes allow me to believe that they’re mistaken and untrustworthy on some question. I am not saying this is true for all questions, but my sense is that most ML people are operating in far mode when thinking about the future of AGI and that a lot of the strings and floats they output when prompted are not very related to reality.
Expert opinion is an argument for people who are not themselves particularly informed about the topic. For everyone else, it basically turns into an authority fallacy.