“All these complications is why I don’t believe we can really do any sort of math that will predict quantitatively the trajectory of a hard takeoff. You can make up models, but real life is going to include all sorts of discrete jumps, bottlenecks, bonanzas, insights—and the “fold the curve in on itself” paradigm of recursion is going to amplify even small roughnesses in the trajectory.”
Wouldn’t that be a reason to say, “I don’t know what will happen”? And to prevent you from saying, “An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely”?
If you can’t make quantitative predictions, then you can’t say that the foom might take an hour or a day, but not six months.
A lower-bound analysis of the growth curve could be sufficient to argue that foom is inevitable.
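To make the lower-bound point concrete, here is a minimal sketch, not from the original discussion, assuming a crude toy model in which capability C grows as dC/dt = k·C^p (a stand-in for recursive self-improvement whose returns do not diminish too quickly). For p > 1 the solution diverges at a finite time t* = C0^(1−p) / (k·(p−1)), so even deliberately conservative (lower-bound) choices of k and p bound the time to foom from above; all parameter values below are hypothetical.

```python
# Toy lower-bound growth model (illustration only; parameters are hypothetical).
# Assumption: capability C obeys dC/dt = k * C**p with p > 1, i.e. recursive
# self-improvement with returns that do not diminish too fast.
# Solving by separation of variables gives a finite-time singularity at
#   t* = C0**(1 - p) / (k * (p - 1)),
# whereas p <= 1 yields only exponential or slower growth (no blowup).

def blowup_time(c0: float, k: float, p: float) -> float:
    """Finite blowup time of dC/dt = k * C**p; only defined for p > 1."""
    if p <= 1:
        raise ValueError("No finite-time blowup for p <= 1")
    return c0 ** (1 - p) / (k * (p - 1))

if __name__ == "__main__":
    # Conservative, made-up parameters: start at C0 = 1 with a slow rate k = 0.01.
    for p in (1.1, 1.5, 2.0):
        print(f"p = {p}: divergence at t* = {blowup_time(1.0, 0.01, p):.1f} time units")
```

Even with these conservative numbers, the exponent p, not the rate constant k, dominates whether the trajectory merely grows fast or actually runs away in bounded time, which is the sense in which a lower bound on the curve can still support an inevitability argument.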
I agree there’s a time coming when things will happen too fast for humans. But “hard takeoff”, to me, means foom without warning. If the foom doesn’t occur until the AI is smart enough to rewrite an AI textbook, that might give us years or decades of warning. If humans add and improve different cognitive skills in the AI one by one, that will start a more gently sloping recursive self-improvement (RSI) curve.