A number of people are objecting to Eliezer’s claim that the process he is discussing is unique in its FOOM potential, proposing other processes that are similar. Then Eliezer says they aren’t similar.
Whether they’re similar enough depends on the analysis you want to do. If you want to glance at them and come up with a yes-or-no answer regarding FOOM, then none of them are similar. A key difference is that these other things don’t have continual halving of the time per generation. You can account for this when comparing results, but I haven’t seen anyone do this.
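To make that accounting concrete, here is a rough sketch of the comparison I have in mind; the generation times, the 100-unit window, and the 200-generation cap are all invented, purely to show the shape of the difference:

```python
# Toy comparison: how many capability-improving "generations" fit inside a
# fixed window of time, when each generation takes (a) the same time as the
# last, versus (b) half the time of the last. All numbers are made up.

def generations_within(total_time, first_gen_time, halving, max_gens=200):
    """Count generations that complete before total_time runs out."""
    t, gen_time, gens = 0.0, first_gen_time, 0
    while gens < max_gens and t + gen_time <= total_time:
        t += gen_time
        gens += 1
        if halving:
            gen_time /= 2.0  # each generation takes half as long as the last
    return gens

# Constant generation time: 10 generations fit in 100 time units.
print(generations_within(100.0, 10.0, halving=False))  # 10
# Halving generation time: the series 10 + 5 + 2.5 + ... sums to 20, so
# arbitrarily many generations fit; we only stop because of the cap.
print(generations_within(100.0, 10.0, halving=True))   # 200 (hits the cap)
```

With constant generation times you get one more generation per fixed interval; with halving generation times the whole infinite series of generations fits inside a finite window. That is exactly the feature the proposed analogues lack, and it has to be corrected for before the comparison tells you anything.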
But some things are similar enough that you can gain some insights into the AI FOOM potential by looking at them. Consider the growth of human societies. A human culture/civilization/government produces ideas, values, and resources used to rewrite itself. This is similar to the AI FOOM dynamics, except with constant and long generation times.
To a tribesman contemplating the forthcoming culture FOOM, it would look pretty simple: Culture is about ways for your tribe to get more land than other tribes.
As culture progressed, we developed all sorts of new goals for it that the tribesman couldn’t have predicted.
Analogously, our discussion of the AI FOOM supposes that the AI will not discover new avenues to pursue other than intelligence, avenues that would soak up enough of the FOOM to slow the intelligence part of it considerably. (Further analysis of this is difficult since we haven’t agreed on what “intelligence” is.)
Another lesson to learn from culture has to do with complexity. The tribesman, given some idea of what technology and government would do, would suppose that they would solve all problems. But in fact, as cultures grow more capable, they are able to sustain more complexity; and so our problems get more and more complicated. The idea that human stupidity is holding us back, and that AIs will burst into exponential territory once they shake free of these shackles:
I suspect that human economic growth would naturally tend to be faster and somewhat more superexponential, if it were not for the negative feedback mechanism of governments and bureaucracies with poor incentives, that both expand and hinder whenever times are sufficiently good that no one is objecting strongly enough to stop it
is like that tribesman thinking good government will solve all problems. Systems—societies, governments, AIs—expand to the limits of complexity that they can support; at those limits, actions have unintended consequences and agents have not quite enough intelligence to predict them or agree on them, and inefficiency and “stupidity”—relative stupidity—live on.
I’ll respond to Eliezer’s response to my response later today. Short answer: 1. Diminishing returns exist and are powerful. 2. This isn’t something you can eyeball. If you want to say FOOM is probable, fine. If you want to say FOOM is almost inevitable, I want to see equations worked out with specific numbers. You won’t convince me with handwaving, especially when other smart people are waving their hands and reaching different conclusions.
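To be concrete about what I mean by equations with specific numbers, here is a minimal sketch of the kind of toy model in question; the growth law dI/dt = I^a and the exponents below are made up for illustration, not a claim about where the real exponent sits:

```python
# Toy self-improvement model with a diminishing-returns knob. The growth law
# dI/dt = I**a is a standard toy form; the exponents, step size, and cap are
# invented here only to show that the outcome flips on the exponent a.

def simulate(a, steps=2000, dt=0.01, cap=1e12):
    """Euler-integrate dI/dt = I**a from I = 1; return final value (capped)."""
    intelligence = 1.0
    for _ in range(steps):
        intelligence += (intelligence ** a) * dt
        if intelligence > cap:
            return cap  # effectively a finite-time blowup ("FOOM")
    return intelligence

print(simulate(a=0.5))  # diminishing returns: quadratic growth, ~121 by t = 20
print(simulate(a=1.0))  # linear returns: ordinary exponential, ~4e8 by t = 20
print(simulate(a=1.5))  # increasing returns: blows past the cap long before t = 20
```

The whole argument turns on which regime the real exponent falls in, and that is not something you can settle by eyeballing.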