From MIRI’s 2017 Updates and Strategy:
There’s no consensus among MIRI researchers on how long timelines are, and our aggregated estimate puts medium-to-high probability on scenarios in which the research community hasn’t developed AGI by, e.g., 2035. On average, however, research staff now assign moderately higher probability to AGI’s being developed before 2035 than we did a year or two ago.
I don’t think the individual estimates that made up the aggregate were ever published. Perhaps someone at MIRI can help us out; it would build a forecasting track record for those involved.
For Yudkowsky in particular, I have a small collection of sources to hand. In Biology-Inspired AGI Timelines (2021-12-01), he wrote:
But I suppose I cannot but acknowledge that my outward behavior seems to reveal a distribution whose median seems to fall well before 2050.
On Twitter (2022-12-02):
I could be wrong, but my guess is that we do not get AGI just by scaling ChatGPT, and that it takes surprisingly long from here. Parents conceiving today may have a fair chance of their child living to see kindergarten.
Also, in Shut it all down (March 2023):
When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
Yudkowsky also has a track record of betting on Manifold that AI will wipe out humanity by 2030, at probabilities of up to 40%.
Putting these together:
2021: median well before 2050
2022: “fair chance” when a 2023 baby goes to kindergarten (Sep 2028 or 2029)
2023: before a young child grows up (about 2035)
40% P(Doom by 2030)
So a median of roughly 2029, with very wide credible intervals on either side. This is just an estimate based on his outward behavior.
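For readers checking the bracketed years, here is a minimal sketch of the date arithmetic. The birth year, kindergarten age, and "grown up" age are my assumptions (standard US school calendar, adulthood at 18), not anything Yudkowsky stated:

```python
# Anchor from the Twitter quote: a child conceived in late 2022 is born in
# 2023 and starts kindergarten at age 5 or 6 (assumed US school calendar).
birth_year = 2023
kindergarten_years = [birth_year + age for age in (5, 6)]  # [2028, 2029]

# Anchor from the TIME piece: a daughter losing her first tooth in early
# 2023 is roughly 6 (assumption), so turning 18 lands around 2035.
first_tooth_year, first_tooth_age, adulthood_age = 2023, 6, 18
grown_up_year = first_tooth_year - first_tooth_age + adulthood_age  # 2035

print(kindergarten_years, grown_up_year)  # [2028, 2029] 2035
```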
Would Yudkowsky describe this as “Yudkowsky’s doctrine of AGI in 2029”?
There was a specific bet, which Yudkowsky is likely about to win. https://www.lesswrong.com/posts/sWLLdG6DWJEy3CH7n/imo-challenge-bet-with-eliezer
The IMO Challenge Bet was on a related topic, but not directly comparable to Bio Anchors.
Paul is not Ajeya, and also Eliezer only gets one bit from this win, which I think is insufficient grounds for behaving like such an asshole.
Buying at 12% and selling at 84% gets you 2.8 bits.
Edit: Hmm, that’s if he stakes all his cred; by Kelly he only stakes some of it, so you’re right, it probably comes out to about 1 bit.
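For concreteness, here is a minimal sketch of the arithmetic behind both figures, reading "bits" as the log2 of the bankroll multiplier. The 15% stake fraction in the second call is purely illustrative, not a claim about what Eliezer actually risked:

```python
import math

def log_wealth_bits(buy_price, sell_price, stake_fraction=1.0):
    """Bits of log-wealth gained by buying YES at buy_price, later valued
    at sell_price, with stake_fraction of the bankroll at risk."""
    multiplier = (1 - stake_fraction) + stake_fraction * sell_price / buy_price
    return math.log2(multiplier)

# Staking everything: buy at 12%, sell at 84% -> bankroll x7, ~2.8 bits.
print(round(log_wealth_bits(0.12, 0.84), 2))                       # 2.81

# Kelly staking risks only a fraction of the bankroll (for a binary contract
# at price m with belief p, the Kelly fraction is (p - m) / (1 - m)), so the
# realised gain is smaller; a 15% stake, for example, gives about 1 bit.
print(round(log_wealth_bits(0.12, 0.84, stake_fraction=0.15), 2))  # 0.93
```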