Your first post seems to be only 4 years ago? (If you expressed these views a decade ago in comments, I don’t see an easy way to find those currently.) I was posting about short timelines 8 years ago, but from what I recall, a decade ago timelines were longer, DL was not recognized as the path to AGI, etc. But yes, I guess most of the points you mention were covered in Bostrom’s SI 9 years ago and were similar to MIRI/LW views around then.
Nonetheless MIRI/LW did make implicit predictions and bets on the path to AGI that turned out to be mostly wrong, and this does suggest poor epistemics/models around AGI and alignment.
If you strongly hold a particular theory about how intelligence works and it turns out to be mostly wrong, this will necessarily undercut many dependent arguments/beliefs—in this case those concerning alignment risks and strategies.
Ah, I shouldn’t have said “myself,” sorry. Got carried away there. :( I first got involved in all this stuff about ten years ago when I read LW, Bostrom, etc. and found myself convinced. I said so to people around me, but I didn’t post about it online.
I disagree that the bets turned out to be mostly wrong, and that this suggests poor epistemics (though it does suggest poor models of AGI, and to a lesser extent, models of alignment). Thanks for the link though, I’ll ponder it for a bit.
Yud, Bostrom, myself, Gwern, … it was pretty much the standard view on LW?
Moravec and Kurzweil definitely deserve credit for their forecasts, even more than Yudkowsky I’d say.