Who made these predictions about a decade ago?
The main forecasters who first made specific predictions like that are Moravec and later Kurzweil, who both predicted AGI in our lifetimes (Moravec: 2020s-ish, Kurzweil: 2029-ish) and more or less predicted it would be agentic, unified, and a BIG FUCKING DEAL. Those predictions were first made in the 90s.
Young EY seemed just a tad overexcited about the future and made various poorly calibrated predictions; perhaps this is why he seemed to go to some lengths to avoid any specific predictions later as an adult. (And even recently EY still seems to have unjustified nanotech excitement.)
Yud, Bostrom, myself, Gwern, … it was pretty much the standard view on LW?
Moravec and Kurzweil definitely deserve credit for their forecasts, even more than Yudkowsky I’d say.
Your first post seems to be only 4 years ago? (If you expressed these views a decade ago in comments, I don't see an easy way to find those currently.) I was posting about short timelines 8 years ago, but from what I recall, a decade ago timelines were longer, DL was not recognized as the path to AGI, etc. But yes, I guess most of the points you mention were covered in Bostrom's Superintelligence 9 years ago and were similar to MIRI/LW views around then.
Nonetheless, MIRI/LW did make implicit predictions and bets on the path to AGI that turned out to be mostly wrong, and this does suggest poor epistemics/models around AGI and alignment.
If you strongly hold a particular theory about how intelligence works and it turns out to be mostly wrong, this will necessarily undercut many dependent arguments/beliefs, in this case those concerning alignment risks and strategies.
Ah, I shouldn’t have said “myself,” sorry. Got carried away there. :( I first got involved in all this stuff about ten years ago when I read LW, Bostrom, etc. and found myself convinced. I said so to people around me, but I didn’t post about it online.
I disagree that the bets turned out to be mostly wrong, and that this suggests poor epistemics (though it does suggest poor models of AGI, and to a lesser extent, models of alignment). Thanks for the link though, I’ll ponder it for a bit.