You’ve now linked to the same survey twice in different discussions of this topic, even though this survey, as far as I can tell, provides no evidence for the position you are trying to argue for. To copy Thomas Kwa’s response to your previous comment:
I don’t see anything in the linked survey about a consensus view on total existential risk probability from AGI. The survey asked researchers to compare between different existential catastrophe scenarios, not about their total x-risk probability, and surely not about the probability of x-risk if AGI were developed now without further alignment research.
We asked researchers to estimate the probability of five AI risk scenarios, conditional on an existential catastrophe due to AI having occurred. There was also a catch-all “other scenarios” option.
[...]
Most of this community’s discussion about existential risk from AI focuses on scenarios involving one or more powerful, misaligned AI systems that take control of the future. This kind of concern is articulated most prominently in “Superintelligence” and “What failure looks like”, corresponding to three scenarios in our survey (the “Superintelligence” scenario, part 1 and part 2 of “What failure looks like”). The median respondent’s total (conditional) probability on these three scenarios was 50%, suggesting that this kind of concern about AI risk is still prevalent, but far from the only kind of risk that researchers are concerned about today.
It also seems straightforwardly wrong that it’s just Eliezer and some MIRI people. While there is a wide variance in opinions on probability of doom among people working in AI Alignment, there are many people at Redwood, OpenAI and other organizations who assign very high probability here. I don’t think it’s at all accurate to say this fits neatly along organizational boundaries, nor is it at all accurate to say that this is “only” a small group of people. My current best guess is if we surveyed people working full-time on x-risk motivated AI Alignment, about 35% of people would assign a probability of doom above 80%.
Whoops, you’re right that I linked the wrong survey. I see others posted the link to Rob’s survey (done in response to some previous similar claims) and I edited my comment to fix the link.
I think you can identify a cluster of near-certain-doom views, e.g. ‘logistic success curve’ and odds of success being on the order of 1% (vs 10%, or 90%), based around MIRI/Eliezer, with a lot of epistemic deference involved (visible on LW). I would say it is largely attributable to that cluster, and without sufficient support.
“My current best guess is if we surveyed people working full-time on x-risk motivated AI Alignment, about 35% of people would assign a probability of doom above 80%.”

What do you make of Rob’s survey results (correct link this time)?
“My current best guess is if we surveyed people working full-time on x-risk motivated AI Alignment, about 35% of people would assign a probability of doom above 80%.”
Depending on how you choose the survey population, I would bet that it’s fewer than 35%, at 2:1 odds.
(Though perhaps you’ve already updated against this based on Rob’s survey results below; that survey happened because I offered to bet against a similar claim about doom probabilities from Rob, a bet I would have won if we had made it.)
Where would you put the numbers, roughly?

I’d just say the numbers from the survey below? Maybe slightly updated towards doom; I think some of the respondents have probably been influenced by the recent wave of doomism.
If you had a more rigorously defined population, such that I could see how it differs from the population surveyed below, I could predict how the numbers would differ.
“My current best guess is if we surveyed people working full-time on x-risk motivated AI Alignment, about 35% of people would assign a probability of doom above 80%.”
Not what you were asking for (time has passed, the question is different, and the survey population is different too), but in my early 2021 survey of people who “[research] long-term AI topics, or who [have] done a lot of past work on such topics” at a half-dozen orgs, 3⁄27 ≈ 11% of those who marked “I’m doing (or have done) a lot of technical AI safety research” gave an answer above 80% to at least one of my attempts to operationalize ‘x-risk from AI’. (And at least two of those three were MIRI people.)
The weaker claim “risk (on at least one of the operationalizations) is at least 80%” got agreement from 5⁄27 ≈ 19%, and “risk (on at least one of the operationalizations) is at least 66%” got agreement from 9⁄27 ≈ 33%.
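For concreteness, here is a minimal sketch of the threshold arithmetic behind those fractions. The per-respondent numbers below are made-up placeholders (the actual survey responses aren’t reproduced here); the point is just the distinction between “above 80%”, “at least 80%”, and “at least 66%” as counts over 27 respondents.

```python
# Illustrative only: these are NOT the real survey responses, just hypothetical
# per-respondent values (each respondent's highest probability across the
# different operationalizations of 'x-risk from AI'), chosen so the counts
# match the fractions quoted above.
max_risk_estimates = [
    0.05, 0.10, 0.10, 0.15, 0.20, 0.20, 0.25, 0.30, 0.30, 0.33,
    0.35, 0.40, 0.45, 0.50, 0.50, 0.55, 0.60, 0.60, 0.66, 0.70,
    0.70, 0.75, 0.80, 0.80, 0.85, 0.90, 0.95,
]  # 27 respondents, matching the survey's n

def count_clearing(estimates, threshold, strict):
    """Count respondents whose estimate clears the threshold (strictly or not)."""
    return sum(1 for p in estimates if (p > threshold if strict else p >= threshold))

n = len(max_risk_estimates)
for label, threshold, strict in [
    ("above 80%", 0.80, True),      # strict:     p >  0.80
    ("at least 80%", 0.80, False),  # non-strict: p >= 0.80
    ("at least 66%", 0.66, False),
]:
    c = count_clearing(max_risk_estimates, threshold, strict)
    print(f"{label}: {c}/{n} ≈ {c / n:.0%}")
# Prints 3/27 ≈ 11%, 5/27 ≈ 19%, 9/27 ≈ 33% for these placeholder values.
```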