My current best guess is that if we surveyed people working full-time on x-risk-motivated AI alignment, about 35% of them would assign a probability of doom above 80%.
Not quite what you were asking for (time has passed, the question is different, and the survey population is different too), but in my early 2021 survey of people who "[research] long-term AI topics, or who [have] done a lot of past work on such topics" at a half-dozen orgs, 3⁄27 ≈ 11% of those who marked "I'm doing (or have done) a lot of technical AI safety research" gave an answer above 80% to at least one of my attempts to operationalize 'x-risk from AI'. (And at least two of those three were MIRI people.)
The weaker claim “risk (on at least one of the operationalizations) is at least 80%” got agreement from 5⁄27 ≈ 19%, and “risk (on at least one of the operationalizations) is at least 66%” got agreement from 9⁄27 ≈ 33%.