Thanks for collecting these things! I have been looking into these arguments recently myself, and here are some more relevant things:
EA forum post “A New X-Risk Factor: Brain-Computer Interfaces” (August 2020) argues for BCI as a risk factor for totalitarian lock-in.
In a comment on that post, Kaj Sotala excerpts a section of Sotala and Yampolskiy (2015), “Responses to catastrophic AGI risk: a survey”. This excerpt contains links to many other relevant discussions:
“De Garis [82] argues that a computer could have far more processing power than a human brain, making it pointless to merge computers and humans. The biological component of the resulting hybrid would be insignificant compared to the electronic component, creating a mind that was negligibly different from a ‘pure’ AGI. Kurzweil [168] makes the same argument, saying that although he supports intelligence enhancement by directly connecting brains and computers, this would only keep pace with AGIs for a couple of additional decades.
“The truth of this claim seems to depend on exactly how human brains are augmented. In principle, it seems possible to create a prosthetic extension of a human brain that uses the same basic architecture as the original brain and gradually integrates with it [254]. A human extending their intelligence using such a method might remain roughly human-like and maintain their original values. However, it could also be possible to connect brains with computer programs that are very unlike human brains and which would substantially change the way the original brain worked. Even smaller differences could conceivably lead to the adoption of ‘cyborg values’ distinct from ordinary human values [290].
“Bostrom [49] speculates that humans might outsource many of their skills to non-conscious external modules and would cease to experience anything as a result. The value-altering modules would provide substantial advantages to their users, to the point that they could outcompete uploaded minds who did not adopt the modules. [...]
“Moravec [194] notes that the human mind has evolved to function in an environment which is drastically different from a purely digital environment and that the only way to remain competitive with AGIs would be to transform into something that was very different from a human.”
The sources cited in the excerpt above are:
de Garis H 2005 The Artilect War: Cosmists vs Terrans (Palm Springs, CA: ETC Publications)
Kurzweil R 2001 Response to Stephen Hawking Kurzweil Accelerating Intelligence, September 5
Sotala K and Valpola H 2012 Coalescing minds Int. J. Machine Consciousness 4 293–312
Warwick K 2003 Cyborg morals, cyborg values, cyborg ethics Ethics Inf. Technol. 5 131–7
Bostrom N 2004 The future of human evolution Two Hundred Years After Kant, Fifty Years After Turing (Death and Anti-Death vol 2) ed C Tandy pp 339–71
Moravec H P 1992 Pigs in cyberspace www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1992/CyberPigs.html
Here’s a relevant comment on that post from Carl Shulman, who notes that FHI has periodically looked into BCI in unpublished work: “I agree the idea of creating aligned AGI through BCI is quite dubious (it basically requires having aligned AGI to link with, and so is superfluous; and could in any case be provided by the aligned AGI if desired long term)”
That’s a pretty impressive list of resources! I hadn’t done a lot of research on the topic beforehand, but I’ll definitely look into these when expanding on the arguments in this post.
The Shulman comment in 4. seems especially relevant; I wonder why FHI hasn’t published their work on it (too much of a hassle to format & prepare?)