I agree with the point about how any intelligence that constructs a supercomputer is already superhuman, even if it’s just humans working in concert. I think this provides a major margin of safety. I am not quite as skeptical of takeoff overall as you seem to be. But a big-science-style effort is likely to mitigate a lot of risks, while a small one is not likely to succeed at all.
Brain emulation is definitely hard, but no other route even approaches plausibility currently. We’re 5 breakthroughs away from brain emulation, and 8 away from anything else. So using brain emulation as one possible scenario isn’t totally unreasonable imo.
Why do you expect “foom” from brain emulation, though?
My theory is that such expectations are driven by brain emulation being so far away that it is hard to picture us getting there gradually; instead, you picture skipping straight to some mind upload that can run a thousand copies of itself, or the like...
What I expect from the first “mind upload” is a simulated epileptic seizure, gradually refined into some minor functionality. It would not be an actual upload, either: samples from several different human brains would be used to infer general network topology and the like, and that would be simulated, learning things while running below realtime, on a computer consuming many megawatts of power and costing more per day than the most expensive movie star. For the price of that computer you could hire a hundred qualified engineers, each thinking perhaps ten times faster than the machine. It would be refined, with immense difficulty, into human-level performance. There would be nothing like easy ways to make it smarter left over; those would already have been used to make it work earlier.
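To make that cost comparison concrete, here is a back-of-envelope sketch. Every number in it (power draw, electricity price, hardware amortization, engineer cost, speed factor) is a hypothetical placeholder I am inventing purely for illustration, not an estimate from anywhere; only the shape of the comparison matters.

```python
# Back-of-envelope: early brain-emulation machine vs. hiring engineers.
# All figures below are hypothetical placeholders for illustration only.

MW = 1_000_000  # watts per megawatt

# Hypothetical emulation machine
power_watts = 10 * MW             # "many megawatts" of power draw (assumed)
electricity_usd_per_kwh = 0.10    # assumed electricity price
hardware_usd_per_day = 500_000    # assumed amortized supercomputer cost
speed_vs_human = 0.1              # assumed: runs at 1/10 of realtime

machine_energy_usd = power_watts / 1000 * 24 * electricity_usd_per_kwh
machine_usd_per_day = machine_energy_usd + hardware_usd_per_day

# Hypothetical team of engineers
engineers = 100
engineer_usd_per_day = 1_000      # assumed fully loaded cost per engineer

team_usd_per_day = engineers * engineer_usd_per_day

# "Subjective thinking hours" delivered per day by each option
machine_thought_hours = 24 * speed_vs_human
team_thought_hours = engineers * 8   # 8 working hours each

print(f"machine: ${machine_usd_per_day:,.0f}/day for {machine_thought_hours:.1f} thought-hours")
print(f"team:    ${team_usd_per_day:,.0f}/day for {team_thought_hours:.0f} thought-hours")
ratio = (machine_usd_per_day / machine_thought_hours) / (team_usd_per_day / team_thought_hours)
print(f"cost per thought-hour, machine vs. team: {ratio:,.0f}x worse")
```

Under these made-up numbers the emulation delivers thinking at roughly three orders of magnitude worse cost per hour than the engineers, which is the shape of the point above; the exact figures don’t matter.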
This would be contemporary to (and making use of) software that, by simulation and refinement of parameters, can do utterly amazing things: a more advanced variation of the software that can design ultra-efficient turbine blades and the like today. (Non-AI, non-autonomous software, which could also be used to design DNA for some cancer-curing virus, or, by deliberately malicious ordinary humans, an everyone-killing virus, or the like, rendering the upload itself fairly irrelevant as a threat.)
What I suspect many futurists imagine is an upload of a full working human mind, appearing in the computer, talking, and so on, as the starting point; their mental model got there by magic, not by imagining actual progress. Then there are some easy tweaks, which are again magicked into the mental model, with no reduction to anything. The imaginary upload strains one’s mental simulator quite a bit, and in the futurist’s mental model it is not contemporary to any particularly impressive technology. So the mind upload enjoys advantages akin to those of a modern army sent back in time to 1000 BC (with nothing needing any fuel to operate or runways to take off from). And so the imaginary mind upload easily takes over the imaginary world.
I think your points are valid. I don’t necessarily expect FOOM from anything; I just find it plausible (based on Eliezer’s arguments about all the possible methods of scaling that might be available to an AI).
I am pitching my arguments towards people who expect FOOM, but the possibility of non-FOOM for a longish while is very real.
And it is probably unwarranted to say anything about architecture, you’re right.
But suppose we have human-level AIs, and then decide to consciously build a substantially superhuman AI. Or we have superhuman AIs that can’t FOOM, and we actively seek to make one that can. The same points apply.
It seems to me that this argument (and arguments which rely on unspecified methods and the like) boils down to breaking the world model by adding things with an unclear creation history and no clear decomposition into components, and then having the resulting non-reductionist, magic-infested mental world model misbehave, just as it always has in human history, yielding gods and the like.
You postulate that unspecified magic can create superhuman intelligence: it arises without any mental model of the necessary work, of the problems being solved, of returns diminishing, or of available optimizations being exhausted. Is it a surprise that in this broken mental model (broken because we don’t know how the AI would be built), precisely because the work is absent, the superhuman intelligence in question creates a still greater intelligence in days, merely continuing the trend of its unspecified creation? If that is not at all surprising, then it is not informative that the mental model goes in this direction.