The median respondent’s probability of x-risk from humans failing to control AI¹ was 10%, weirdly higher than the median chance of human extinction from AI in general², at 5%. This might just be because different respondents got these two questions, and the pooled median is quite near the divide between 5% and 10%.
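One way to see how easily this can happen: if answers cluster on round numbers with roughly half the mass at or below 5%, a random split of respondents into two question groups will often put one group’s median at 5% and the other’s at 10%. Here is a minimal simulation sketch of that effect (the answer distribution and group sizes are hypothetical, not the survey’s actual data):

```python
import random
import statistics

random.seed(0)

# Hypothetical answer distribution: probabilities cluster on round numbers,
# with a bit over half the mass at or below 5% -- so the pooled median
# sits right at the 5/10 divide. These weights are illustrative only.
round_values = [0, 1, 2, 5, 10, 20, 50]   # answers, in %
weights      = [10, 15, 15, 12, 20, 18, 10]

trials, splits = 10_000, 0
for _ in range(trials):
    # 402 i.i.d. respondents, randomly assigned to two question framings
    answers = random.choices(round_values, weights=weights, k=402)
    median_a = statistics.median(answers[:201])   # odd size: median is an
    median_b = statistics.median(answers[201:])   # actual answer value
    if {median_a, median_b} == {5, 10}:  # medians land on opposite sides
        splits += 1

print(f"{splits / trials:.1%} of random splits yield medians of 5% and 10%")
```

The point is not the exact frequency, just that a 5%-vs-10% split between the two groups is unremarkable when the pooled median sits near that divide.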
This absolutely reeks of a pretty common question-wording problem, namely that a fairly small proportion of AI workers have ever cognitively processed the concept that very smart AI would be difficult to control (or at least, have never processed it for the 10-20 seconds needed to slap a probability on it).
This lack of awareness of the obvious, if true, bodes ill for the future of humanity.