It also seems like a non-obvious point. For example, when I. J. Good coined the term “intelligence explosion”, it was conceived as the result of designing an ultraintelligent machine. So for the explosion to precede superintelligence flips the original concept on its head.
That’s not quite right. What Good (a mathematician) is actually arguing is an existence/upper-bound argument:
...2. Ultraintelligent Machines and Their Value: Let an “ultraintelligent machine” be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind (see for example Good 1951, [34], [44]). Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.
That is, he is giving an upper-bound condition which “unquestionably” leads to an intelligence explosion: assume an AI superior to “any man however clever”; designing the ultraintelligent machine, if it exists, must have been done by either a less intelligent machine (in which case the argument is already over) or the cleverest man or some less clever man; therefore, the ultraintelligent machine must by definition be able to design a better ultraintelligent machine (since it is superior to even the cleverest man). If you did not argue that way by invoking an ultraintelligent machine, you would have only a weaker argument: perhaps the AI is as clever as the second-most clever man, but it actually needs to be as clever as the most clever man—then it would be questionable whether the intelligence explosion is possible. (And people do in fact make this kind of argument routinely, often with a computational-complexity coating.) So he simply makes a stronger assumption to close the loophole and get on with more interesting things than debating the bare possibility.
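If it helps to see the shape of that case split, here is a minimal Lean 4 sketch of it; the predicates `smarter` and `canDesign`, and the “design transfer” hypothesis, are my own glosses on Good’s informal definitions, not anything from his paper:

```lean
-- A minimal sketch (my own formalization, not Good's notation) of the
-- argument above. `smarter a b`: agent `a` surpasses agent `b` at every
-- intellectual activity, machine design included. `canDesign a x`: agent
-- `a` is able to design agent `x`.
variable {Agent : Type}
variable (human : Agent → Prop)
variable (smarter canDesign : Agent → Agent → Prop)

theorem ultraintelligent_designs_better
    (U : Agent)
    -- U is ultraintelligent: it surpasses any man however clever.
    (ultra : ∀ h, human h → smarter U h)
    -- The branch where U was designed by some human h₀ (the other branch,
    -- a less intelligent machine designer, already concedes the conclusion).
    (h₀ : Agent) (hh : human h₀) (hd : canDesign h₀ U)
    -- Glossed assumption: whoever surpasses the designer of a machine can
    -- design a machine that surpasses it.
    (transfer : ∀ a b x, smarter a b → canDesign b x →
      ∃ y, canDesign a y ∧ smarter y x) :
    -- Conclusion: U can design a machine that surpasses U itself.
    ∃ V, canDesign U V ∧ smarter V U :=
  transfer U h₀ U (ultra h₀ hh) hd
```

The loophole mentioned above is exactly the `transfer` step: with an AI only as clever as the second-most clever man, you cannot instantiate it against the cleverest man, which is why Good assumes ultraintelligence outright.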
His immediately following discussion of economics and human-AI hybrid systems suggests that he doesn’t deny that merely-ordinary-human-level intelligences could potentially do an intelligence explosion, and indeed, probably strongly suspects it is possible, but he just thinks that the question is irrelevant because it is too unstable an equilibrium: the payoff from merely human-level AIs is too low, and the payoff from a true ultraintelligence is so absurdly high, that anyone with a human-level AI would simply spend the extra money to increase its compute power to ultraintelligence* and then kick off an intelligence explosion for sure, per his previous proof. There would be little waiting around for inefficiently-human-level AIs to dick around long enough improving themselves autonomously to reach ultraintelligence. See his earlier paper, where he also puts emphasis on the ‘unquestionability’ of the argument but phrases it a bit differently in terms of matching a ‘Newton’, or this part, where he makes the point explicitly and also makes a very familiar-sounding compute-overhang argument (I believe this is the first time this particular I. J. Good quote has been highlighted, given how hard it is to find these papers):
It seems probable that no mechanical brain will be really useful until it is somewhere near to the critical size. If so, there will be only a very short transition period between having no very good machine and having a great many exceedingly good ones. Therefore the work on simulation of artificial intelligence on general-purpose computers is especially important, because it will lengthen the transition period, and give human beings a chance to adapt to the future situation.
* interesting difference here: he seems to think that it would only require, say, 2-3x more computing power, and thus only 2-3x more budget, to go from human-level intelligence to ultraintelligence, noting that the human brain is about that much of a factor larger than a chimpanzee brain. This is a reasonable claim: if an AI company like OA could spend $300m instead of $100m to make a GPT-5-scale model strictly superhuman rather than merely GPT-4-level, don’t you think at least one of them would do so in a heartbeat? But people today would probably argue, on the basis of all the scaling power-laws, that it would require much more than that: all your budget goes into training (as opposed to the 2-3x greater computing power needed to simply run the model), so you would need more like 100x the budget. This is a disagreement worth pondering.
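To put rough numbers on the disagreement, a toy back-of-the-envelope in Python; the dollar figures and multipliers are purely illustrative values taken from this footnote (the 100x being the power-law side’s guess), not real training budgets:

```python
# Toy comparison of the two views in the footnote above. All numbers are
# illustrative assumptions from the text, not actual budgets.

human_level_training_cost = 100e6  # hypothetical $100m human-level model

# Good-style view: intelligence roughly tracks brain size, the human brain is
# ~2-3x a chimpanzee's, so ~2-3x the compute (and budget) buys ultraintelligence.
good_estimate = human_level_training_cost * 3

# Scaling-law view: the budget is dominated by training, where capability
# improves only as a power law in compute, so the jump costs more like 100x.
power_law_estimate = human_level_training_cost * 100

print(f"Good-style estimate:      ${good_estimate / 1e6:,.0f}m")      # $300m
print(f"Power-law-style estimate: ${power_law_estimate / 1e6:,.0f}m") # $10,000m
```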
“interesting difference here: he seems to think that it would only require, say, 2-3x more computing power, and thus only 2-3x more budget, to go from human-level intelligence to ultraintelligence, noting that the human brain is about that much of a factor larger than a chimpanzee brain”
That seems obviously unjustified. I expect you could shrink a human brain by 2x and still have something essentially human. The metric could be the maximum intelligence you can get with a chimpanzee’s number of neurons/synapses, which I expect is probably an IQ-80+ human. There was an “overhang” situation with chimp brains: they could have been optimized to be much better at abstract thought with the same brainpower, but it didn’t happen.