I don’t think “exponential” vs. “superlinear” or even “sublinear” matters much. Those are all terms for asymptotic behaviour in the far future, while all the problems are in the relatively short term after the first AGI.
For FOOM purposes, how long it takes to get from usefully human-level capabilities to as far above us as we are above chimpanzees (let’s call it 300 IQ for short, despite the technical absurdity of the label) is possibly the most relevant timescale.
Could a few hundred humans wipe out a world full of chimpanzees in the long term? I’m pretty sure the answer is yes. If there exists an AGI that is as far above us as a human is above a chimpanzee, how long does it take for there to be a few hundred of them? My median estimate is “a few years” if the first one takes Manhattan Project levels of investment, or less if it doesn’t.
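A minimal back-of-envelope sketch of that replication estimate, in Python. Every number in it (the initial build effort, the speedup per doubling, the target count) is a hypothetical assumption of mine for illustration, not something claimed above:

```python
# Back-of-envelope sketch (all parameters are hypothetical assumptions):
# if building each additional AGI gets faster over time, how long until
# a few hundred exist?
import math

def years_to_n_copies(n_target: int, initial_build_years: float,
                      speedup_per_doubling: float) -> float:
    """Crude model: the k-th doubling of the installed base takes
    initial_build_years / speedup_per_doubling**k years."""
    doublings = math.ceil(math.log2(n_target))
    return sum(initial_build_years / speedup_per_doubling**k
               for k in range(doublings))

# Assumed: the first copy took ~3 years of Manhattan-Project-scale effort,
# and each doubling of the population is 2x faster than the last.
print(years_to_n_copies(300, initial_build_years=3.0, speedup_per_doubling=2.0))
# ~6 years to reach a few hundred copies under these made-up numbers.
```

Under slower replication assumptions the total stretches out, but the order of magnitude stays at years rather than decades, which is the point of the estimate.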
After that point, our existence is no longer mostly in our own hands. If we’re detrimental to whatever goals the AGI(s) have, we have a serious risk of becoming extinct shortly thereafter. Whether they FOOM to Jupiter-brain levels in 1 year or merely populate the Earth with more copies of 300 IQ AGIs that never grow past that in the next million years is irrelevant.
Personally I think the “first AGI to 300 IQ” timescale is somewhere between “0 days, we were in an overhang and the same advance that made AGI also made superhuman AGI” on the short end and something like 20 years on the long end. I tend toward the overhang end because I’ve seen a lot of algorithm improvements make large discontinuous jumps in capability in many fields, including ML, and because I think the human capability range is a very narrow and twisty target to hit in absolute terms. By the time you get the weakest factor’s capabilities up to minimally human-level general intelligence, the rest of the factors are probably already superhuman.
So in my personal timescale estimates, that leaves “how long before we get the first AGI” as the most relevant one.