Given that all the forecasts seem to be wrong in the “things happened faster than we expected” direction, we should probably expect HLAI to happen faster than expected as well.
I don’t think we should update too strongly on these few data points; e.g. a previous analysis of Metaculus’ AI predictions found “weak evidence to suggest the community expected more AI progress than actually occurred, but this was not conclusive”. MATH and MMLU feel more relevant than the average Metaculus AI prediction but not enough to strongly outweigh the previous findings.
It also seems like we should retreat more to outside views about general rates of technological progress, rather than forming a specific inside view (since the inside view seems to mostly end up being wrong).
I think a pure outside view would give a median of something like 35 years (based on my very sketchy attempt at forming a dataset of when technical grand challenges were solved), and then ML progress seems to be happening quite quickly, so you should probably adjust down from that.
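(For concreteness, here's a minimal sketch of that outside-view calculation, assuming a made-up dataset; the challenge names, dates, and the 0.8 adjustment factor below are all illustrative placeholders, not the actual dataset or adjustment referred to above.)

```python
# Minimal sketch of the outside-view estimate described above.
# The entries here are illustrative placeholders, NOT the actual
# dataset mentioned in the comment (which isn't published here).
from statistics import median

# (challenge, year posed, year solved) -- hypothetical examples
grand_challenges = [
    ("example challenge A", 1900, 1957),
    ("example challenge B", 1950, 1997),
    ("example challenge C", 1970, 1994),
    ("example challenge D", 1971, 2016),
]

# Outside view: median time from posing a grand challenge to solving it
years_to_solve = [solved - posed for _, posed, solved in grand_challenges]
outside_view_median = median(years_to_solve)
print(f"Outside-view median time to solution: {outside_view_median} years")

# Inside-view adjustment: ML progress looks fast, so shift the median
# down by some factor -- the 0.8 here is an arbitrary placeholder.
adjusted_median = outside_view_median * 0.8
print(f"Adjusted median: {adjusted_median:.0f} years")
```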
I’d be interested to check out that dataset! Hard for me to react too much to the strategy without more details, but outside-view-ish reasoning about predicting things far-ish in the future that we don’t know much about (and where, as you say, inside views have often been wrong) seems generally reasonable to me.
Actually, I’m pretty interested in how you get to a median of ~35 years; that seems longer than I’d predict without looking at any field-specific facts about ML, and then the field-specific facts mostly push towards shorter timelines.
I mentioned in the post that my median is now ~2050, which is 28 years out. As for how I formed my forecast: I originally started roughly with Ajeya’s report, added some uncertainty, and had previously shifted further out due to intuitions I had about data/environment bottlenecks, unknown unknowns, etc. I still have lots of uncertainty, but my median has moved sooner, to 2050, due to MATH forcing me to adjust my intuitions some, reflections on my hesitations against short-ish timelines, and Daniel Kokotajlo’s work.