As usual, it pays to extrapolate from past actuarial data, which should be available somewhere. My guess is that we are approaching saturation, barring extreme breakthroughs in longevity, which I personally find quite unlikely this century. I am also skeptical about AGI by 2075. If you had asked experts in 1970 about Moon bases, they would have expected one before 2000 with 90%+ confidence. And we already had the technology then. Instead, the real progress came in a completely unexpected area. I suspect there will be breakthroughs in the next 50 years, but they will come as a surprise.
The reason we didn’t build a moon base is that it’s not immediately useful and costs insane amounts of money.
Meanwhile, GPT-3 was trained for a few million dollars and could probably replace half the clickbait sites out there. AI has a much smoother incentive gradient than space tech: each increment of capability pays for itself along the way.
Except for the long AI winter, when most AI research produced very little value. Just because we’ve broken through one constraint and started up another S-curve doesn’t mean we won’t hit the next constraint.