As far as I remember from the LW census data, the median predicted date for an AGI intelligence explosion didn't fall within this century, and more people considered bioengineered pandemics the most probable X-risk this century than UFAI.
Close. Bioengineered pandemics were the GCR (global catastrophic risk — not necessarily as bad as a full-blown X-risk) most often (23% of responses) considered most likely. (Unfriendly AI came in third at 14%.) The median singularity year estimate on the survey was 2089 after outliers were removed.