Potential solutions to foreseeable problems with biological superintelligence include: a) only upgrading particularly moral and trustworthy humans or b) ensuring that upgrading is widely accessible, so that lots of people can do it.
b) does not solve the problem without a great deal of successful work on multipolar safety; it is almost equivalent to making nuclear weapons widely accessible to lots of people (and, for that matter, handing out gain-of-function lab equipment too).
a) is indeed very reasonable, but we should keep in mind that an upgrade is potentially a stronger intervention than any psychoactive drug, stronger even than the most radical psychedelic experiences. Here the usual “AI alignment problem” one normally deals with is replaced by the problem of preserving one’s values and character.
In fact, these problems are closely related. The most intractable part of AI safety is what happens when an AI ecosystem starts to rapidly and recursively self-improve, perhaps with significant acceleration. Current members of the AI ecosystem might behave in a reasonably safe and beneficial way, but would future members (or the same members after they self-improve) continue to behave safely, or would a “sharp left turn” happen?
The same problem arises for a rapidly improving and changing “enhanced human”: would that person maintain their original character and values while undergoing radical changes and enhancements, or would drastic new realizations (potentially more radical than any psychedelic revelations) lead to unpredictable revisions of that character and those values?
It might be easier to smooth these changes for a human (compared to an AI), but success is by no means automatic.