I want to talk about human intelligence amplification (IA), including things like brain-machine interfaces, brain/CNS mods, and perhaps eventually brute-force uploading/simulation. There are parallels between the dangers of AI and IA.
IA powerful enough to be or create an x-risk might be created before AGI. (E.g., successful IA might jump-start AGI development.) IA is likely to be created without a complete understanding of the human brain, because the task is just to modify existing brains, not to design one from scratch. We will then need FIA—the IA equivalent of Friendliness theory. When a human self-modifies using IA, how do we ensure value stability?
Are there organizations, forums, etc. dedicated to building FIA the way SIAI etc. are dedicated to building FAI?
Reposted from here hoping more people will read and respond.
IA is likely to advance in big steps, but an IA FOOM makes even less sense than an AI FOOM because of the human in the loop. Also, IA would probably give humans many other improvements before solving the problem of our low number of independent simultaneous attention threads. So it is not clear that any direction of IA research would produce a single unstoppable entity.
If IA simply increases the thinking power of a thousand people, each by a different amount, I would not be sure that the medium-term existential threat from this field is greater than the overall short-term existential threat created by something that exists here and now, like Sony...
My sense is that explicit technological modification of humans is already heavily concerned with the question "Will we still be human after this modification?", which at least gestures at the problems you identify. It is exactly the lack of this type of concern in the AI field that motivates SIAI's Friendliness activism. But the sorts of technological advances you are pointing toward seem more likely to arise, at least in part, from medical research methodologies, which seem more concerned with potential negative psychological and sociological effects than some other forms of technological research.
In short, if every AI researcher were already worried about safety to the extent that medical researchers seem to be, there would be no need for SIAI to exist; all AI researchers worrying about the Friendliness problem is what winning looks like for SIAI. Since medical researchers are already worried about these kinds of problems, an SIAI equivalent is not necessary. Consider all the various medical ethics councils, which are far more powerful than their institutional equivalents in AI research.