As soon as the first upload is successful, patient zero will realize he’s got unimaginable (brain)power, start talking in ALL CAPS, and go FOOM on the world. Bad end. For the sake of argument, let’s say we get lucky and the first upload is incredibly nice and just wants to help people. Eventually the second, or the third, or the twenty-fifth upload decides to FOOM over everybody. Still a bad end.
Why can’t the first upload FOOM, but in a nice way?
That strikes me as a problem that’s just as hard as FAI. There seems to be no way to solve it that doesn’t involve a friendly AGI controlling the upload world.
Some people suggest uploads only as a stepping stone to FAI. But if you read Carl’s paper (linked above) there are also ideas for how to create stable superorganisms out of uploads that can potentially solve your regulation problem.
As for friendly upload FOOMs, I consider the chance of them happening at random about equivalent to FIA happening at random.
(I guess “FIA” is a typo for “FAI”?) Why talk about “at random” if we are considering which technology to pursue as the best way to achieve a positive Singularity? From what I can tell, the dangers involved in an upload-based FOOM are limited and foreseeable, and we at least have ideas to solve all of them:
unfriendly values in scanned subject (pick the subject carefully)
inaccurate scanning/modeling (do a lot of testing before running upload at human/superhuman speeds)
value change as a function of subjective time (periodic reset)
value change due to competitive evolution (take over the world and form a singleton)
value change due to self-modification (after forming a singleton, research self-modification and other potentially dangerous technologies such as FAI thoroughly before attempting to apply them)
FAI, by contrast, could fail in a dangerous way as a result of incorrectly solving one of many philosophical and technical problems (a large portion of which we are still thoroughly confused about), or due to some seemingly innocuous but erroneous design assumption whose danger is hard to foresee.
Wei, do you assume uploading capability would stay local for long stretches of subjective time? If yes, why? (WBE seems to require large-scale technological development, which I’d expect to be driven by many institutions buying the tech and thereby fueling progress, as with genome sequencing, so I’d expect multiple places to have the same currently-most-advanced systems at any point in time, or at least to be close to the bleeding edge.) If no, why expect the uploads that go FOOM first to be ones that work hard to improve the chances of friendliness, rather than ones primarily working hard to be the first to FOOM?
Wei, do you assume uploading capability would stay local for long stretches of subjective time?
No, but there are ways for this to happen that seem more plausible to me than what’s needed for FAI to be successful, such as a Manhattan-style project by a major government that recognizes the benefits of obtaining a large lead in uploading technology.
http://lesswrong.com/lw/66n/resetting_gandhieinstein/
http://lesswrong.com/r/discussion/lw/5jb/link_whole_brain_emulation_and_the_evolution_of/
Thank you for the links; they were exactly what I was looking for.
Ok, thanks for clarifying!