Rohin’s opinion: I enjoyed this post; it gave me a visceral sense for what hyperbolic models with noise look like (see the blog post for this; the summary doesn’t capture it). Overall, my takeaway is that the picture of explosive growth used in AI risk arguments is in fact plausible, despite how crazy it initially sounds.
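To give a concrete sense of what ‘hyperbolic growth with noise’ means, here is a toy simulation (my own sketch of the general idea, not the model actually fitted in the post, and all parameter values are made up): output grows as dY/dt = a·Y^(1+s) with s > 0, so the growth rate rises with the level of output and the deterministic path blows up in finite time, while multiplicative noise makes individual trajectories wander around that explosive median path.

```python
import numpy as np

def simulate_hyperbolic(y0=1.0, a=0.05, s=0.25, sigma=0.1, dt=0.1,
                        t_max=200.0, y_cap=1e9, seed=0):
    """Toy Euler-Maruyama simulation of dY = a * Y**(1+s) dt + sigma * Y dW.

    Superexponential drift plus multiplicative noise; the parameter values are
    illustrative only, not fitted to any GWP data.
    """
    rng = np.random.default_rng(seed)
    t, y = 0.0, y0
    ts, ys = [t], [y]
    while t < t_max and y < y_cap:            # stop at the (finite-time) blow-up
        drift = a * y ** (1 + s) * dt          # growth rate rises with the level of Y
        shock = sigma * y * np.sqrt(dt) * rng.standard_normal()
        y = max(y + drift + shock, 1e-6)       # keep the toy model from going negative
        t += dt
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

if __name__ == "__main__":
    for seed in range(3):
        ts, ys = simulate_hyperbolic(seed=seed)
        status = "blew up" if ys[-1] >= 1e9 else "still finite"
        print(f"seed {seed}: {status} at t = {ts[-1]:.1f}, Y = {ys[-1]:.3g}")
```

Different seeds blow up at noticeably different times, which is the point the post’s noisy-model plots make visually: the median trajectory is explosive, but the timing of the explosion is quite uncertain.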
One thing this post led me to consider is that when we bring together various fields, the evidence for ‘things will go insane in the next century’ is stronger than any specific claim about (for example) AI takeoff. What is the other evidence?
We’re probably alone in the universe, and anthropic arguments tend to imply we’re living at an incredibly unusual time in history. Isn’t that what you’d expect to see in the same world where there is a totally plausible mechanism that could carry us a long way up this line, in the form of AGI and ‘eternity in six hours’? All the pieces are already there, and they only need to be approximately right for our lifetimes to be far weirder than those of people who were, e.g., born in 1896 and lived to 1947. That was weird enough, but it should be your minimum expectation.
In general, there are three categories of evidence that things are likely to become very weird over the next century, or that we live at the hinge of history:
1) Specific mechanisms around AGI: the possibility of rapid capability gain, and arguments from exploratory engineering
2) Economic and technological trend-fitting predicting explosive growth in the next century
3) Anthropic and Fermi arguments suggesting that we live at some extremely unusual time
All of these are evidence for such a claim. 1) counts because a superintelligent AGI takeoff is just a specific example of how the hinge could occur, and 3) argues for it directly. But how does 2) fit in with 1) and 3)?
There is something a little strange about calling a fast takeoff from AGI and whatever drove superexponential growth throughout history the same trend. It would require a huge cosmic coincidence: just as population growth plus growth in wealth per capita (or whatever was driving the trend until now) runs out in the great stagnation, which shows up as a tiny blip on the right-hand side of the double-log plot, AGI takes over and pushes us up the same trend line. That kind of coincidence is clearly not plausible, so if AGI is what takes us up the rest of that trend line, there would have to be some factor responsible for both: a factor that was at work in the founding of Jericho but predestined that AGI would be invented and cause explosive growth in the 21st century, rather than the 19th or the 23rd.
For AGI to be the driver of the rest of that growth curve, there has to be a single causal mechanism that keeps us on the same trend and includes AGI as its final step—if we say we are agnostic about what that mechanism is, we can still call 2) evidence for us living at the hinge point, though we have to note that there is a huge blank spot in need of explanation. Is there anything that can fill it to complete the picture?
The mechanism proposed in the article seems like it could plausibly include AGI.
If technology is responsible for the growth rate, then reinvesting production in technology will cause the growth rate to be faster. I’d be curious to see data on what fraction of GWP gets reinvested in improved technology and how that lines up with the other trends.
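To make that reinvestment story concrete, here is a toy sketch of the feedback loop (my own construction, not a model from the post, with purely illustrative parameter values): if a constant fraction of output is reinvested in improving technology, and technology multiplies the productivity of the other inputs, then the growth rate itself keeps rising rather than staying constant.

```python
import numpy as np

def toy_reinvestment_growth(steps=300, reinvest_frac=0.05, efficiency=0.1,
                            labor_growth=0.01):
    """Toy endogenous-growth loop: a fixed fraction of each period's output is
    reinvested in technology, and technology multiplies output.
    Illustrative numbers only."""
    tech, labor = 1.0, 1.0
    outputs = []
    for _ in range(steps):
        output = tech * labor                         # production this period
        outputs.append(output)
        tech += efficiency * reinvest_frac * output   # reinvested output improves tech
        labor *= 1 + labor_growth                     # ordinary exponential input growth
    return np.array(outputs)

if __name__ == "__main__":
    out = toy_reinvestment_growth()
    growth_rates = np.diff(np.log(out))
    # Under pure exponential growth these would be flat; here they keep rising,
    # which is the superexponential signature the trend-fitting picks up.
    print("growth rate early:", growth_rates[:5].round(4))
    print("growth rate late: ", growth_rates[-5:].round(4))
```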
But even though the drivers seem superficially similar (both are about technology), the claim is that one very specific technology will generate explosive growth, not that technology in general will. It seems strange that AGI would follow the same growth curve that came from reinvesting more GWP in improving ordinary technology, which doesn’t improve your own ability to think in the way that AGI would.
As for precise timings, the great stagnation (roughly the last 30 years) seems like it would just stretch out the timeline a bit, so we shouldn’t take the 2050s date too seriously. However well the last 70 years fit an exponential trend line, there is really no way to make an exponential fit the overall data, as that post makes clear.