This is why I mostly buy the scaling thesis, and the only real crux is whether @Bogdan Ionut Cirstea or @jacob_cannell is right about timelines.
I do believe some algorithmic improvements matter, but I don’t think they will be nearly as much of a blocker as raw compute, and my pessimistic estimate is that the critical algorithms could be discovered within 24–36 months, assuming we don’t already have them.
@jacob_cannell’s timeline and model are here:
https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long
@Bogdan Ionut Cirstea’s timeline and models are here:
https://x.com/BogdanIonutCir2/status/1827707367154209044
https://x.com/BogdanIonutCir2/status/1826214776424251462
https://x.com/BogdanIonutCir2/status/1826032534863622315
(I’ll note that my timeline is both quite uncertain and potentially unstable, so I’m not sure how different it is from Jacob’s, all things considered; but yes, that’s roughly my model.)