You could use all of the world’s energy output to run a few billion human-speed AGIs, or millions that think 1000x faster, etc.
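For concreteness, here is a rough back-of-envelope of that tradeoff (the power figures below are illustrative assumptions, nothing more):

```python
# Back-of-envelope for the "world energy budget" tradeoff above.
# All numbers are assumptions for illustration, not figures from the thread.

WORLD_POWER_WATTS = 18e12        # assumed: ~18 TW of total world primary power
WATTS_PER_HUMAN_SPEED_AGI = 5e3  # assumed: ~5 kW per human-speed AGI instance on near-term hardware

human_speed_count = WORLD_POWER_WATTS / WATTS_PER_HUMAN_SPEED_AGI
print(f"human-speed AGIs supportable: {human_speed_count:.1e}")  # ~3.6e9, i.e. a few billion

# If running 1000x faster costs roughly 1000x the power (a crude linearity assumption),
# the same budget instead supports ~1000x fewer, much faster minds.
SPEEDUP = 1000
fast_count = human_speed_count / SPEEDUP
print(f"{SPEEDUP}x-speed AGIs supportable: {fast_count:.1e}")    # ~3.6e6, i.e. millions
```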
Isn’t it insanely transformative to have millions of human-level AIs which think 1000x faster?? The difference between top scientists and average humans seems to be something like “software” (Einstein isn’t using 2x the watts or neurons). So it should be totally possible for each of the “millions of human-level AIs” to be equivalent to Einstein. Couldn’t a million Einstein-level scientists running at 1000x speed beat all human scientists combined?

And, taking this further, it seems that some humans are at least 100x more productive at science than others, despite the same brain constraints. Then why shouldn’t it be possible to go further in that direction, and have someone 100x more productive than Einstein at the same flops? And if this is possible, it seems to me like whatever efficiency the brain is achieving cannot be a barrier to foom, just as the energy efficiency (and supposed learning optimality?) of the average human brain does not rule out Einstein more than 100x-ing them with the same flops.
Of course, my argument doesn’t pin down the nature or rate of software-driven takeoff, or whether there is some ceiling. Just that the “efficiency” arguments don’t seem to rule it out, and that there’s no reason to believe that science-per-flop has a ceiling near the level of top humans.
Yes, it will be transformative.
GPT models already think 1000x to 10000x faster—but only for the learning stage (absorbing knowledge), not for inference (thinking new thoughts).
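One concrete way to see that contrast: a transformer absorbs its training text by processing whole sequences in parallel, while generating new text is serial, one token at a time. A toy comparison (every throughput number below is an illustrative assumption):

```python
# Toy comparison of learning-stage vs generation-stage speed relative to a human.
# All throughput numbers are illustrative assumptions, not measurements.

HUMAN_READING_TOKENS_PER_S = 5        # assumed: ~300 words/minute of reading
TRAIN_TOKENS_PER_S_PER_REPLICA = 5e4  # assumed: large-batch training throughput of one model replica
GEN_TOKENS_PER_S = 50                 # assumed: autoregressive generation speed of one instance

learning_speedup = TRAIN_TOKENS_PER_S_PER_REPLICA / HUMAN_READING_TOKENS_PER_S
generation_speedup = GEN_TOKENS_PER_S / HUMAN_READING_TOKENS_PER_S

print(f"absorbing text: ~{learning_speedup:,.0f}x human reading speed")  # ~10,000x
print(f"producing new text: ~{generation_speedup:,.0f}x human speed")    # ~10x
```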