Security is both built into software (design) and bolted onto it (passwords, anti-virus software). It is built into cars (structural integrity, headlights, rules of the road) and bolted onto them (seatbelts, airbags). Safety will be architecture-dependent. Provable safety in the sense that MIRI researches might be awkward to incorporate into many architectures, if it is possible at all.
If an intelligence explosion is possible, it is probably possible with any architecture, but much more efficient with some. But we won’t really know until we experiment enough to at least understand the properties of these architectures under naive scaling of computational resources.
I mention brain emulation specifically because it’s the clearest path we have to artificial intelligence (in the same sense that fusion is a clear path to supplying global energy needs: the theory is known and sound, but the engineering obstacles could put it off indefinitely). And presumably, once you can make one brain in silico, you could make it smarter than a person’s by a number of methods.
I’m presuming that at some point, we will want an AI that can program other AIs or self-modify in unexpected ways to improve itself.
But you’re right: external safety could be a stopgap, not until we can make FOOM-capable AI provably safe, but until we can make FOOM impossible and keep humans in the driver’s seat.
The bolted-on security, though, is never bolted onto some idealized notion originating from fiction. Security designed for such a notion has every potential of being even more distant from what’s actually needed than hypothetical teleport-gate safety is from airbags.
As for brain emulation, the necessary computational power is truly immense, and the path toward it is anything but clear.
With regard to foom, it seems to me that belief in foom is related to a certain ignorance of the intelligence already present, and of its role in the “takeoff”. The combined human (and software) intelligence working on the relevant technologies is already massively superhuman, in the sense of being superior to any individual human. The end result is that the takeoff starts earlier and proceeds more slowly, much as when you try to bring chunks of plutonium together: because of the substantial spontaneous fission already present, the chain reaction reaches a massive power level before the multiplication factor gets larger than 1.
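(To make the analogy concrete, here is a rough sketch with illustrative numbers of my own choosing, not a claim about actual reactor physics: a constant spontaneous source is amplified by roughly a factor of 1/(1 − k) in a subcritical assembly, so output climbs steeply as k approaches 1, well before the assembly ever goes critical.)

    # Illustrative sketch of subcritical multiplication.
    # Numbers are hypothetical, chosen only to show the shape of the curve:
    # a constant "spontaneous" source S is amplified by about 1/(1 - k),
    # so output grows sharply as k approaches 1, before criticality (k > 1).
    S = 1.0  # baseline spontaneous-fission rate (arbitrary units)
    for k in (0.5, 0.9, 0.99, 0.999):
        amplified = S / (1.0 - k)
        print(f"k = {k:>5}: output is about {amplified:,.0f}x the bare source")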
I agree with the point that any intelligence that constructs a supercomputer is already superhuman, even if it’s just humans working in concert. I think this provides a major margin of safety. I am not quite as skeptical of takeoff overall as you seem to be. But a big-science-style effort is likely to minimize a lot of risks, while a small one is not likely to succeed at all.
Brain emulation is definitely hard, but no other route even approaches plausibility currently. We’re five breakthroughs away from brain emulation and eight away from anything else. So using brain emulation as one possible scenario isn’t totally unreasonable, imo.
Why do you expect “foom” from brain emulation, though?
My theory is that such expectations are driven by it being so far away that it is hard to picture us getting there gradually; instead, you picture skipping straight to some mind upload that can run a thousand copies of itself, or the like...
What I expect from the first “mind upload” is a simulated epileptic seizure, refined gradually into some minor functionality. It is not an actual upload, either: samples of different human brains were used to infer the general network topology and the like, that has been simulated, and it learns things, running at below realtime on a computer that consumes many megawatts of power and costs more per day than the most expensive movie star. A computer for the price of which you could hire a hundred qualified engineers, each thinking perhaps 10 times faster than this machine. Gradually refined, with immense difficulty, into human-level performance. Nothing like some easy ways to make it smarter: those were already used to make it work earlier.
This would be contemporary with (and make use of) software that can and did, by simulation and refinement of parameters, do utterly amazing things: a more advanced variation of the software that designs ultra-efficient turbine blades and the like today. (Non-AI, non-autonomous software that could also be used to design DNA for some cancer-curing virus, or, by deliberately malicious ordinary humans, for an everyone-killing virus, rendering the upload itself fairly irrelevant as a threat.)
What I suspect many futurists imagine is an upload of a full working human mind, appearing in the computer, talking and the like. As a starting point, their mental model got there by magic, not by imagining actual progress. Then there are some easy tweaks, which are again magicked into the mental model, with no reduction to anything. The imaginary upload strains one’s mental simulator quite a bit, and in the futurist’s mental model it is not contemporary with any particularly cool technology. So the mind upload enjoys advantages akin to those of a modern army sent back in time to 1000 BC (with nothing needing fuel to operate or runways to take off from). And so the imaginary mind upload easily takes over the imaginary world.
I think your points are valid. I don’t expect FOOM from anything, necessarily; I just find it plausible (based on Eliezer’s arguments about all the possible methods of scaling that might be available to an AI).
I am pitching my arguments towards people who expect FOOM, but the possibility of non-FOOM for a longish while is very real.
And it is probably unwarranted to say anything about architecture; you’re right.
But suppose we have human-level AIs, and we then decide to consciously build a substantially superhuman AI. Or we have superhuman AIs that can’t FOOM, and we actively seek to make one that can. The same points apply.
It seems to me that this argument (and arguments that rely on unspecified methods and the like) boils down to breaking the world model by adding things with an unclear creation history and an unclear decomposition into components, and then having the resulting non-reductionist, magic-infested mental model misbehave, just as it always has in human history, yielding gods and the like.
You postulate that unspecified magic can create superhuman intelligence: it arises without any mental model of the necessary work, of the problems being solved, of returns diminishing, or of available optimizations being exhausted. Is it a surprise that in this broken mental model (broken because we don’t know how the AI would be built), because the work is absent, the superhuman intelligence in question creates a still greater intelligence in days, merely continuing the trend of its unspecified creation? If that is not at all surprising, then it is not informative that the mental model goes in this direction.