But this could turn out worse as well: ems could well launch the first arbitrary-goal AGI.
That’s the main thing that’s worried me about the possibility of ems coming first. But it depends on who is able to upload and who wants to, I suppose. If an average FAI researcher is more likely to upload, increase their speed, and possibly make copies of themselves than an average non-FAI AGI researcher, then it seems like that would be a reduction in risk.
I’m not sure whether that would be the case — a person working on FAI is likely to consider their work to be a matter of life and death, and would want all the speed increases they could get, but an AGI researcher may feel the same way about the threat to their career and status posed by the possibility of someone else getting to AGI first. And if uploading is very expensive at first, it’ll only be the most well-funded AGI researchers (i.e. not SIAI and friends) who will have access to it early on and will be likely to attempt it (if it provides enough of a speed increase that they’d consider it to be worth it).
(I originally thought that uploading would be of little to no help in increasing one’s own intelligence (in ways aside from thinking the same way but faster), since an emulation of a brain isn’t automatically any more comprehensible than an actual brain, but now I can see a few ways it could help — the equivalent of any kind of brain surgery could be attempted quickly, freely, and reversibly, and the same could be said for experimenting with nootropic-type effects within the emulation. So it’s possible that uploaded people would get somewhat smarter and not just faster. Of course, that’s only soft self-improvement, nowhere near the ability to systematically change one’s cognition at the algorithmic level, so I’m not worried about an upload bootstrapping itself to superintelligence (as some people apparently are). Which is good, since humans are not Friendly.)
There’s a lot to respond to here. Some quick points:
It should be borne in mind that greatly increased speed and memory may by themselves strongly affect a thinking entity. I imagine that if I could think a million times as fast, I would think a lot more carefully about my interactions with the outside world than I do now.
I don’t see any reason to think that SIAI will continue to be the only group thinking about safety considerations. If nothing else, SIAI or FHI can raise awareness of the dangers of AI within the community of AI researchers.
Assuming that brain uploads precede superhuman artificial intelligence, it would obviously be very desirable to have the right sort of human uploaded first.
I presently have a very dim view of the prospects for modern-day humans developing Friendly AI. This skepticism is the main reason why I think that pursuing whole-brain emulations first is more promising. See the comment by Carl that I mentioned in response to Vladimir Nesov’s question. Of course, my attitude on this point is subject to change with incoming evidence.