Question: Why don’t people talk about Ems / Uploads as just as disastrous as uncontrolled AGI? Has there been work done or discussion about the friendliness of Ems / Uploads?
Details: Robin Hanson seems to describe the Em age as a new industrial revolution. Eliezer seems wary of Ems but doesn’t seem to treat them as an existential threat, though Nick Bostrom does. A lot of people on LessWrong seem to talk about the Em age as the next great journey for humanity, not just a different name for uFAI. For my part, I can’t imagine uploads ending up well. I literally can’t imagine it. Every scenario I’ve tried to imagine ends in a bad end.
As soon as the first upload is successful, patient zero will realize he’s got unimaginable (brain)power, start talking in ALL CAPS, and go FOOM on the world. Bad end. For the sake of argument, let’s say we get lucky and the first upload is incredibly nice and just wants to help people. Eventually the second, or the third, or the twenty-fifth upload decides to FOOM over everybody. Still a bad end. We need some way to restrain Ems from FOOM-ing, and we need to figure it out before we start uploading. Okay, let’s pretend we could even invent a restraint that works against a determined transhuman who is unimaginably more intelligent than we are...
Maybe we’ll get as far as, say, Hanson’s Em society. Ems make copies of themselves, tailored to particular situations, to complete work. Some of these copies will choose to / be able to replicate more than others; those copies will inherit the propensity to replicate; eventually, processor time / RAM / hard-disk space will become scarce, copying will get harder, and Ems will have to fight to keep their processes from being terminated. Welp… that sounds like the three ingredients required to invoke the evolution fairy: variation, heritability, and selection. Except instead of the Darwinian evolution we’re used to, this new breed will employ a terrifying mix of uFAI self-modification and Lamarckian super-evolution. Bad end. Okay, but let’s say we find some way to stop THAT...
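To make the worry concrete, here is a minimal toy simulation of that dynamic. Everything in it (the capacity, the mutation size, the random-culling rule) is an illustrative assumption, not anything from Hanson’s model; it just shows that once copies inherit their replication propensity and compete for a fixed pool of hardware, the average propensity ratchets upward.

```python
import random

# Toy model: each Em has a heritable "replication propensity" in [0, 1].
# All constants below are made-up illustrative assumptions.
CAPACITY = 1000        # fixed number of processor slots (the scarce resource)
GENERATIONS = 300
MUTATION_SD = 0.02     # small heritable variation introduced at each copy

# Start with a mildly varied population of fairly cautious Ems.
population = [random.uniform(0.0, 0.2) for _ in range(200)]
print(f"initial mean propensity: {sum(population) / len(population):.2f}")

for _ in range(GENERATIONS):
    offspring = []
    for propensity in population:
        # Variation + heredity: copies inherit the parent's propensity, plus noise.
        if random.random() < propensity:
            child = min(1.0, max(0.0, propensity + random.gauss(0, MUTATION_SD)))
            offspring.append(child)
    population += offspring
    # Selection: when hardware runs out, processes are culled at random.
    if len(population) > CAPACITY:
        population = random.sample(population, CAPACITY)

print(f"final mean propensity:   {sum(population) / len(population):.2f}")
# Lineages that copy themselves more often outgrow the rest, so the mean
# propensity climbs steadily even though the culling itself is unbiased.
```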
What about other threats? Ems can still talk to one another and convince one another of things. How do we know they won’t all be hijacked by meme-viruses and transformed, Agent Smith-style? That’s a bad end. Or hell, how do we know they won’t be hijacked by virus-viruses? Bad end there too. Or one of the trillions of Ems could build a uFAI, and it goes FOOM into a bad end. Or… the potential for bad ends is enormous, and you only need one for the end of humanity.
It’s not like flesh-based humans can monitor the system. Once Ems are in the 1,000,000x era, they’ll be effectively decoupled from humanity. A revolution could start at 10 pm after the evening shift goes home, and by the time the morning shift gets in, 1,000 years have passed in Em subjective time. Hell, in the time it takes to swing an axe and cut the network/power cable, they’ve had about a month to manage their migration and dissemination to every electronic device in the world. Any regulation has to be built inside the Em system and, as mentioned before, it has to be built before we make the first successful upload.
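For what it’s worth, the arithmetic behind those two figures checks out, taking the post’s own 1,000,000x speedup as the assumption:

```python
# Subjective time elapsed for Ems running at a 1,000,000x speedup (the post's assumption).
SPEEDUP = 1_000_000                      # subjective seconds per wall-clock second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def subjective_years(wall_clock_seconds: float) -> float:
    """Subjective Em-years elapsed during a given wall-clock interval."""
    return wall_clock_seconds * SPEEDUP / SECONDS_PER_YEAR

overnight = 10 * 3600   # 10 pm to 8 am: ten wall-clock hours
axe_swing = 2           # roughly two seconds to swing an axe through a cable

print(f"overnight shift gap: {subjective_years(overnight):,.0f} subjective years")          # ~1,100 years
print(f"one axe swing:       {subjective_years(axe_swing) * 365.25:,.1f} subjective days")  # ~23 days
```

So “about 1,000 years overnight” and “about a month per axe swing” are the right orders of magnitude.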
Maybe we can build an invincible regulator or regulatory institution to control it all. But we can’t let it self-replicate, or we’re right back at the evolution problem. And we can’t let it be modified by the outside world, or it’s the hijacking problem again. And we can’t let it self-modify, or it’ll evolve in ways we can’t predict (and we’ve already established that it’ll be outside of everything else’s control). So now we have an invulnerable regulator/regulation system that needs to control a world of trillions. And once our Ems start living at 1,000,000x speed, it needs to keep order for literally millions of subjective years without ever making a single mistake. So we need to design a system perfect enough to never make a single error while handling trillions of agents for millions of years?
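To put a number on “never making a single mistake”: with independent errors, the chance of at least one regulatory failure over N decisions is 1 − (1 − p)^N. The per-decision error rate and decision counts below are made-up illustrative assumptions, but they show how brutally the requirement compounds:

```python
import math

def p_any_failure(per_decision_error: float, decisions: float) -> float:
    """Probability of at least one failure across many independent regulated decisions."""
    # 1 - (1 - p)^N, computed via expm1/log1p for numerical stability.
    return -math.expm1(decisions * math.log1p(-per_decision_error))

# Illustrative assumptions only: a trillion Ems, a million subjective years,
# and just one regulated interaction per Em per subjective year.
decisions = 1e12 * 1e6 * 1.0   # ~1e18 decisions

for p in (1e-12, 1e-15, 1e-18):
    print(f"per-decision error {p:.0e} -> P(at least one failure) = {p_any_failure(p, decisions):.3f}")
# Even a one-in-a-quadrillion error rate makes at least one failure essentially
# certain over ~1e18 decisions; at one-in-a-quintillion it is still ~63%.
```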
That strikes me as a problem that’s just as hard as FAI. There seems to be no way to solve it that doesn’t involve a friendly AGI controlling the upload world.
Can anyone explain to me why Ems are looked at as a competing technology to FAI instead of as an existential risk with probability 1.0?
http://lesswrong.com/lw/66n/resetting_gandhieinstein/
http://lesswrong.com/r/discussion/lw/5jb/link_whole_brain_emulation_and_the_evolution_of/
Why can’t the first upload FOOM, but in a nice way?
Some people suggest uploads only as a stepping stone to FAI. But if you read Carl’s paper (linked above), there are also ideas for how to create stable superorganisms out of uploads, which could potentially solve your regulation problem.
Thank you for the links, they were exactly what I was looking for.
As for friendly upload FOOMs, I consider the chance of them happening at random about equivalent to FIA happening at random.
(I guess “FIA” is a typo for “FAI”?) Why talk about “at random” if we are considering which technology to pursue as the best way to achieve a positive Singularity? From what I can tell, the dangers involved in an upload-based FOOM are limited and foreseeable, and we at least have ideas to solve all of them:
unfriendly values in scanned subject (pick the subject carefully)
inaccurate scanning/modeling (do a lot of testing before running upload at human/superhuman speeds)
value change as a function of subjective time (periodic reset)
value change due to competitive evolution (take over the world and form a singleton)
value change due to self-modification (after forming a singleton, research self-modification and other potentially dangerous technologies such as FAI thoroughly before attempting to apply them)
Whereas FAI could fail in a dangerous way as a result of incorrectly solving one of many philosophical and technical problems (a large portion of which we are still thoroughly confused about) or due to some seemingly innocuous but erroneous design assumption whose danger is hard to foresee.
Wei, do you assume uploading capability would stay local for long stretches of subjective time? If yes, why? (WBE seems to require large-scale technological development, which I’d expect to be driven by many institutions buying the tech and thereby funding further progress, as with genome sequencing, so I’d expect multiple places to have the same currently-most-advanced systems at any point in time, or at least to be close to the bleeding edge.) If no, why expect the uploads that go FOOM first to be the ones that work hard to improve the chances of friendliness, rather than the ones primarily working hard to be first to FOOM?
No, but there are ways for this to happen that seem more plausible to me than what’s needed for FAI to be successful, such as a Manhattan-style project by a major government that recognizes the benefits of obtaining a large lead in uploading technology.
Ok, thanks for clarifying!
The meme-virus worry is a little silly. Humans get hijacked by meme-viruses as well, all the time; it does cause problems, but mostly other humans manage to keep them in line.
But as for the rest, yes, I agree with you that an upload scenario would have huge risks as well. Not to mention the fact that there might be a considerable pressure towards uploads merging together and ceasing to be individuals in any meaningful sense of the term. Humanity’s future seems pretty hopeless to me.
Human uploads have been discussed as dangerous. But a friendly AI is viewed as an easier goal than a friendly upload, because an AI can be designed.
Now, I have to admit I’m not too familiar with the local discourse re: uploading, but if a functional upload requires emulation down to individual ion channels (PSICS-level) and the chemical environment, I find it hard to believe we’ll have the computing power to do that, a million times faster, and in a volume of space small enough that we don’t have to put it under a constant waterfall of liquid helium.
I don’t expect femtotechnology or rod logic any time soon; the former may not even be possible at all, and the latter is based on some dubious math from Nanosystems. So where does that leave us in terms of computing power? (Assuming, of course, that Clarke’s law is a wish-fulfilling fantasy.) I understand the reach of Bremermann’s Limit, but it may not be possible to reach it, or there may be regions between zero and the Limit that are unreachable for lack of a physical substrate.
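A rough sense of the gap being pointed at here, with order-of-magnitude placeholders only (the 10^22 FLOPS figure is in the spirit of published ion-channel-level WBE guesses, not an authoritative number, and the hardware figure is roughly a top supercomputer of this era):

```python
import math

# Back-of-the-envelope: ion-channel-level emulation at 1,000,000x vs. plausible hardware.
# Both constants are order-of-magnitude placeholder assumptions.
REALTIME_FLOPS = 1e22        # assumed cost of electrophysiology-level emulation in real time
SPEEDUP = 1e6                # the 1,000,000x subjective speedup discussed above
SUPERCOMPUTER_FLOPS = 1e16   # roughly a top supercomputer around the time of this discussion

required = REALTIME_FLOPS * SPEEDUP
shortfall = required / SUPERCOMPUTER_FLOPS

print(f"required: {required:.0e} FLOPS")
print(f"shortfall vs. available hardware: {shortfall:.0e}x (~{math.log2(shortfall):.0f} doublings)")
```

Under those assumptions the shortfall is about 10^12, i.e. roughly forty doublings of available compute, which is the intuition behind doubting a fast, compact, million-fold-sped-up emulation.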
Ems have essentially human psychology, with some add-ons. I presume they can’t escalate as well as AIs can, even in cases where they coalesce.
The possible dangers are much the same as today’s, but with more structural change: artificial agents in their realm, some cheap nanotech, and so on. Conflicts have costs too.