At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.
Do you mean that the role of ems is in developing FAI faster (as opposed to biological-human-built FAI), or are you thinking of something else? If ems merely speed time up, they don't change the shape of the FAI challenge much, unless (and to the extent that) we leverage them in a way we can't leverage ordinary human society to reduce existential risk before FAI is complete (but this could turn out worse as well: ems could well launch the first arbitrary-goal AGI).
but this could turn out worse as well: ems could well launch the first arbitrary-goal AGI
That's the main thing that's worried me about the possibility of ems coming first. But it depends on who is able to upload and who wants to, I suppose. If an average FAI researcher is more likely than an average non-FAI AGI researcher to upload, increase their speed, and possibly make copies of themselves, then that seems like it would be a reduction in risk.
I'm not sure whether that would be the case: a person working on FAI is likely to consider their work a matter of life and death and would want all the speed increases they could get, but an AGI researcher may feel the same way about the threat to their career and status posed by the possibility of someone else getting to AGI first. And if uploading is very expensive at first, only the most well-funded AGI researchers (i.e., not SIAI and friends) will have access to it early on and be likely to attempt it (if it provides enough of a speed increase that they'd consider it worth it).
(I originally thought that uploading would be of little to no help in increasing one's own intelligence (aside from thinking the same way, only faster), since an emulation of a brain isn't automatically any more comprehensible than an actual brain, but now I can see a few ways it could help: the equivalent of any kind of brain surgery could be attempted quickly, freely, and reversibly, and the same goes for experimenting with nootropic-type effects within the emulation. So it's possible that uploaded people would get somewhat smarter and not just faster. Of course, that's only soft self-improvement, nowhere near the ability to systematically change one's cognition at the algorithmic level, so I'm not worried about an upload bootstrapping itself to superintelligence (as some people apparently are). Which is good, since humans are not Friendly.)
There’s a lot to respond to here. Some quick points:
It should be borne in mind that greatly increased speed and memory may by themselves strongly affect a thinking entity. I imagine that if I could think a million times as fast, I would think a lot more carefully about my interactions with the outside world than I do now.
I don’t see any reason to think that SIAI will continue to be the only group thinking about safety considerations. If nothing else, SIAI or FHI can raise awareness of the dangers of AI within the community of AI researchers.
Assuming that brain uploads precede superhuman artificial intelligence, it would obviously be very desirable to have the right sort of human uploaded first.
I presently have a very dim view of the prospects for modern-day humans developing Friendly AI. This skepticism is the main reason why I think that pursuing whole-brain emulation first is more promising. See the comment by Carl that I mentioned in response to Vladimir Nesov's question. Of course, my attitude on this point is subject to change with incoming evidence.
Sped-up ems have slower computers relative to their thinking speed. If Moore’s Law of Mad Science means that increasing computing power allows researchers to build AI with less understanding (and thus more risk of UFAI), then a speedup of researchers relative to computing speed makes it more likely that the first non-WBE AIs will be the result of a theory-intensive approach with high understanding. Anders Sandberg of FHI and I are working on a paper exploring some of these issues.
This argument lowers the estimate of danger, but AIs developed on relatively slow computers are not necessarily theory-intensive; they could also be coding-intensive, which leads to UFAI. And a theory-intensive approach doesn't necessarily imply adequate concern about the AI's preferences.
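As a rough illustration of the speedup argument above (my own sketch, not anything from this thread): if ems think at a large multiple of human speed while hardware improves only in calendar time, then many more subjective research-years elapse per hardware doubling. The doubling time and speedup figures below are purely illustrative assumptions.

```python
# Toy model (illustrative only, not from the thread) of the point that
# sped-up ems face relatively stagnant hardware: the faster researchers
# think, the more subjective years pass while compute doubles once.

def subjective_years_per_hardware_doubling(speedup, doubling_time_years=1.5):
    """Subjective researcher-years that elapse during one hardware doubling.

    speedup             -- em thinking speed relative to a biological human
    doubling_time_years -- assumed calendar time for compute to double
                           (a Moore's-law-style figure; purely illustrative)
    """
    return speedup * doubling_time_years

for speedup in (1, 100, 10_000, 1_000_000):
    years = subjective_years_per_hardware_doubling(speedup)
    print(f"speedup {speedup:>9,}x -> {years:>12,.0f} subjective years per doubling")
```

On these assumptions, a millionfold speedup leaves ems with effectively frozen hardware for over a million subjective years, which is the sense in which a theory-intensive route becomes relatively more likely than a compute-intensive one.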
My idea here is the same as the one that Carl Shulman mentioned in a response to one of your comments from nine months ago.