How soon? I expect to need the money sometime in the next 3 years, because that’s about when we get to 50% chance of AGI.
In your 50% of worlds where we get AGI in the next 3y, do you have important uses for the money?
How does your remaining 50% smear across “soon but >3y” through “AI fizzle”?
In the worlds where we get AGI in the next 3y, the money can (and large chunks of it will) get donated, partly to GiveDirectly and suchlike, and partly to stuff that helps AGI go better.
The remaining 50% basically exponentially decays for a bit and then has a big fat tail. So off the top of my head I’m thinking something like this:
15% – 2024
15% – 2025
15% – 2026
10% – 2027
5% – 2028
5% – 2029
3% – 2030
2% – 2031
2% – 2032
2% – 2033
2% – 2034
2% – 2035
… you get the idea.
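To make that shape concrete, here's a quick sanity check of how those numbers add up (a minimal Python sketch; it just sums the list above, nothing fancier):

```python
# Rough sanity check of the year-by-year numbers above (not a forecasting model).
yearly = {
    2024: 0.15, 2025: 0.15, 2026: 0.15, 2027: 0.10,
    2028: 0.05, 2029: 0.05, 2030: 0.03, 2031: 0.02,
    2032: 0.02, 2033: 0.02, 2034: 0.02, 2035: 0.02,
}

mass_next_3y = sum(p for year, p in yearly.items() if year <= 2026)
mass_listed = sum(yearly.values())
fat_tail = 1.0 - mass_listed  # whatever is left for >2035, including "AI fizzle"

print(f"P(AGI by end of 2026): {mass_next_3y:.0%}")                      # 45%, i.e. roughly the 50% above
print(f"P(AGI in 2027 through 2035): {mass_listed - mass_next_3y:.0%}")  # 33%
print(f"Fat tail (2036 onward, or never): {fat_tail:.0%}")               # ~22%
```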
I’d put more probability on the scenario where good $5 billion, 1e27-FLOP runs give mediocre results, so that further scaling remains feasible but carries no strong expectation of success. Given how expensive the larger experiments would be, it could take many years before someone takes another draw from the apocalypse deck. That alone adds maybe 2% per year for the 10 years after 2026 or so, and there are other ways AGI could start working.
Why do you have 15% for 2024 and only an additional 15% for 2025?
Do you really think there’s a 15% chance of AGI this year?
Yes, I really do. I’m afraid I can’t talk about all of the reasons for this (I work at OpenAI), but mostly it should be figure-outable from publicly available information. My timelines were already fairly short (2029 median) when I joined OpenAI in early 2022, and things have gone mostly as I expected. I’ve learned a bunch of stuff, some of which updated me upwards and some of which updated me downwards.
As for the 15%–15% thing: I mean, I don’t feel confident that those are the right numbers; rather, those numbers express my current state of uncertainty. I could see the case for making the 2024 number higher than the 2025 number (exponential-distribution vibes, ‘if it doesn’t work now then that’s evidence it won’t work next year either’ vibes). I could also see the case for making the 2025 number higher (it seems like it’ll happen this year, but in general projects take twice as long as one expects due to the planning fallacy, therefore it’ll probably happen next year).
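To spell out the exponential-distribution intuition (a toy sketch with a made-up hazard rate, not my actual model): if there’s a roughly constant per-year chance of the remaining ingredients clicking into place, the unconditional mass assigned to each successive year has to shrink, so 2024 would get more than 2025.

```python
# Toy illustration of the "exponential distribution vibes" argument.
# Assume a constant per-year hazard rate h (made up for illustration).
h = 0.15

for k, year in enumerate([2024, 2025, 2026, 2027]):
    # P(AGI first arrives in this year) = P(not yet for k years) * P(happens now)
    p_first_here = (1 - h) ** k * h
    print(year, f"{p_first_here:.1%}")
# 2024 15.0%, 2025 12.8%, 2026 10.8%, 2027 9.2% -- the earliest year gets the
# most mass; the planning-fallacy consideration pushes in the opposite direction.
```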
Any increase in scale carries some chance of AGI at this point, since unlike weaker models, GPT-4 is not stupid in any obvious way; it might be just below the threshold of scale needed for an LLM to get its act together. This gives some 2024 probability.
More likely, a larger model “merely” makes job-level agency feasible for relatively routine human jobs, but that alone would suddenly make $50-$500 billion runs financially reasonable. Given the premise of job-level agency at <$5 billion scale, the larger runs likely suffice for AGI. The Gemini report says training took place in multiple datacenters, which suggests that this sort of scaling might already be feasible, except for the risk that it produces something insufficiently commercially useful to justify the cost (and waiting improves the prospects). So this might all happen as early as 2025 or 2026.
I mean, is your Vanguard targeted lifecycle index fund likely to invest in equities exposed to AGI growth (conditional on non-doom)?
If you think money still has meaning after AGI and there’s a meaningful chance of non-doom, it might actually be optimal to invest in your retirement fund.
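A crude way to see why (hypothetical numbers purely for illustration; the structure is the point, not the values):

```python
# Toy expected-value framing of "should I still fund my retirement account?"
# All probabilities and multipliers below are hypothetical placeholders.
scenarios = {
    # name: (probability, growth multiplier on $1 of equities by retirement)
    "doom (money worthless to you)":     (0.30, 0.0),
    "AGI boom, equities capture growth": (0.40, 10.0),
    "business as usual":                 (0.30, 1.5),
}

expected_value = sum(p * mult for p, mult in scenarios.values())
print(f"Expected value of $1 invested: ${expected_value:.2f}")
# With these made-up numbers the investment still looks good in expectation;
# the real answer hinges on how much you value money in each world.
```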