I’m pretty sure Ajeya’s report significantly overestimated the mean time to AGI. I think it did a nice job of coming up with reasonable upper bounds on the longest we might have, but not a good job of estimating the lower bound or the distribution of probability mass between the bounds. I believe that the lower bound is, from a compute & data perspective, already in the past. As in, we are only algorithm bound, not compute bound. More compute can substitute for missing algorithmic advances, so either more compute or better algorithms can lead to AGI.
And both at once lead to AGI even sooner.
Yes. Here’s my current view on the strategic landscape of AGI development: https://www.lesswrong.com/posts/GxzEnkSFL5DnQEAsZ/paulfchristiano-s-shortform?commentId=hEQL7rzDedGWhFQye