> I was surprised by this, as it does seem like a quite short timeline, even by LessWrong standards.
“it could” is short by LW standards? News to me (a LessWronger). I would have guessed that most of us put at least 8% of the outcome distribution before 10 years.
But note they are talking about ASI, not just AGI, and before 8 years, not 10 years. (Of course it is unclear what credence the “could” corresponds to.)
Still. It is widely understood by those who I consider experts that ASI will follow shortly after AGI. AGI will appear in the context of partial automation of AI R&D, and itself will enable full automation of AI R&D, leading to an intelligence explosion.
> The ‘four years’ they explicitly mention does seem very short to me for ASI unless they know something we don’t...
My median estimate has been 2028 (so 5 years from now). I first wrote down 2028 in 2016 (12 years out at the time), and in the 7 years since, I have barely moved the estimate. Things roughly happened when I expected them to.