The problem is that accepting this argument involves ignoring how AI keeps blitzing past one supposed barrier after another. At some point, a rational observer needs to be willing to accept that their max likelihood model is wrong and consider other ways the world could be instead.
There are also many ways the max likelihood model could be consistent with very rapid near-term change.
One is that, as in past transitions, the faster growth isn't a clean exponential: it speeds up and then eventually peters out, like any s-curve. If you look at the world from 1700 to now, the industrial revolution is a sum of many such individual curves, but even so, the fastest years/decades of global growth were ~50x faster than the slowest. If you compress 1000x growth into a couple of decades and assume a similar distribution of growth rates, then it matters a whole lot whether 2024 is year 1, year 5, or something else. We could be 7 years into a two-decade transition that began with the transformer architecture, or two decades into a fifty-year transition that started with some earlier machine learning advance, and either would be consistent with both the OP and "Things are about to move ridiculously fast."
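As a toy illustration of how much the year matters on such a curve (the 1000x total, the 20-year length, and the steepness are numbers I picked for the sketch, not figures from the OP): if log-output follows a logistic s-curve, the growth rate in the middle years dwarfs the rate in the early ones.

```python
import math

# Toy s-curve: log of world output follows a logistic over an assumed 20-year
# transition with an assumed 1000x total growth. The only point is that the
# year-over-year growth rate varies enormously across the curve, so which year
# of the transition we are currently in matters a great deal.
K = 1000.0   # assumed total growth factor over the whole transition
T = 20       # assumed transition length in years
r = 0.7      # assumed steepness of the logistic (per year)
t0 = T / 2   # midpoint of the curve

def log_output(t):
    return math.log(K) / (1 + math.exp(-r * (t - t0)))

for year in range(T):
    growth = math.exp(log_output(year + 1) - log_output(year)) - 1
    print(f"year {year + 1:2d}: ~{growth * 100:5.1f}%/yr")
# Under these assumptions the earliest years come out below 1%/yr and the middle
# years above 100%/yr: the same total transition looks wildly different
# depending on where you happen to be standing in it.
```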
In other words: sustained faster-than-population economic growth didn't show up in Britain until a century or so after the industrial revolution began, peak global growth came a century or so after that, and in recent years the largest remaining countries have been catching up even faster than that, even as growth in the UK, US, and EU is slower than past peaks. If this were transitional year 7 of 20, peak growth in the industrial revolution was 5-10%/yr, and this transition is 10x faster, then it's plausible to expect one-year economic doubling times in each of several years between now and the early 2030s.
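A quick back-of-envelope check of that last step, using the 5-10%/yr peak and the 10x multiplier above (my arithmetic, not a calculation from the OP):

```python
import math

# Scale the industrial revolution's peak growth rate by the assumed 10x speed-up
# and convert the result into an economic doubling time.
for peak in (0.05, 0.10):            # 5%/yr and 10%/yr peak growth
    scaled = 10 * peak               # assumed 10x faster transition
    doubling = math.log(2) / math.log(1 + scaled)
    print(f"{peak:.0%}/yr peak -> {scaled:.0%}/yr -> doubles every {doubling:.1f} years")
```

At the upper end that is exactly a one-year doubling time, and even the lower end is under two years.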
The OP seems to assume we’re in year 1 or so out of 20-50, and that the most significant or fastest changes will happen near the end of that window. I’m not quite sure why I should agree with those assumptions.