Why do you have 15% for 2024 and only an additional 15% for 2025?
Do you really think there’s a 15% chance of AGI this year?
Yes, I really do. I’m afraid I can’t talk about all of the reasons for this (I work at OpenAI), but mostly it should be figure-outable from publicly available information. My timelines were already fairly short (2029 median) when I joined OpenAI in early 2022, and things have gone mostly as I expected. I’ve learned a bunch of stuff since then, some of which updated me upwards and some of which updated me downwards.
As for the 15%–15% thing: I don’t feel confident that those are the right numbers; rather, those numbers express my current state of uncertainty. I could see the case for making the 2024 number higher than the 2025 number (exponential-distribution vibes; ‘if it doesn’t work now, then that’s evidence it won’t work next year either’ vibes). I could also see the case for making the 2025 number higher (it seems like it’ll happen this year, but projects usually take twice as long as one expects due to the planning fallacy, so it’ll probably happen next year).
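To make the “exponential distribution vibes” argument concrete: under a constant-hazard (geometric) model, where each year has the same conditional chance of AGI given it hasn’t happened yet, the unconditional mass assigned to each successive year shrinks, so 2025 would get strictly less than 2024. A minimal sketch (an illustrative model, not the commenter’s actual one):

```python
# Constant yearly hazard h: each year, AGI arrives with probability h,
# conditional on it not having arrived in any earlier year.
h = 0.15  # assumed 15% first-year (2024) probability

mass_2024 = h             # unconditional mass on 2024
mass_2025 = (1 - h) * h   # 2025 is only reachable if 2024 "misses"

print(mass_2024, mass_2025)  # 0.15 vs. 0.1275
```

So a flat 15%/15% assignment already implies the 2025 conditional hazard is somewhat *higher* than 2024’s (0.15/0.85 ≈ 17.6%), which is the planning-fallacy side of the argument.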
Any increase in scale carries some chance of AGI at this point: unlike weaker models, GPT-4 is not stupid in any clear way, so it might be just below the threshold of scale that would let an LLM get its act together. This gives some 2024 probability.
More likely, a larger model “merely” makes job-level agency feasible for relatively routine human jobs, but that alone would suddenly make $50–$500 billion training runs financially reasonable. Given the premise of job-level agency at the <$5 billion scale, those larger runs would likely suffice for AGI. The Gemini report says training took place across multiple datacenters, which suggests that this sort of scaling might already be feasible, barring the risk that it produces something insufficiently commercially useful to justify the cost (and waiting improves the prospects). So this might all happen as early as 2025 or 2026.