In seeking to prevent such outcomes, you should focus much more on the technology than on the psychology, because the technology is the essential ingredient in these end-of-the-world scenarios, while the specific psychology you describe is not. Suppose there is a type of nanoreplicator which could destroy all life on Earth. Yes, it might be created and released by a suitably empowered angry person; but it might also be released for some other reason, or even just by accident.
Sometimes this scenario comes up because someone has been imagining a world where everyone has their own desktop nanofactory, and then they suddenly think, what about the sociopaths? If anyone can make anything, that means anyone can make a WMD, which means a small minority will make and use WMDs—etc. But this just means that the scenario of “first everyone gets a nanofactory, then we worry about someone choosing to end the world” is never going to happen. The possibility of human extinction has been part of the nanotech concept from almost the beginning. This was diluted once you had people hoping to get rich by marketing nanotech, and further still once “nanotech” just became a sexy new name for “chemistry”, but the feeling of peril has always hovered over the specific concept of replicating nanomachines, especially free-living ones, and any person or organization who begins to seriously make progress in that direction will surely know they are playing with fire.
There simply never will be a society of free wild-type humans with lasting open access to advanced nanotechnology. It’s like giving a box of matches to every child in a kindergarten: the place would burn down very quickly. And maybe that is where we’re headed anyway, not because some insane idiot really will give everyone on earth a desktop WMD-factory, but because the knowledge is springing up in too many places at once.
Ordinary monitoring and intervention (as carried out by the state) can’t be more than a temporary tactic. It might work for a period of years, but it’s not a solution that can define a civilization’s long-term response to the challenge of nanotechnology, because in the long run there are just too many ways in which the deadly threat might materialize: designed in secret by a distributed process, and manufactured in a similarly distributed way.
As with Friendly AI, the core of the long-term solution is to have people (and other intelligent agents) who want to not end the world in this way (so “psychology” matters after all), but we are then talking about a seriously posthuman world order, with a neurotechnocracy which studies your brain deeply before you are given access to civilization’s higher powers, or a ubiquitous AI environment which invasively studies and monitors the value systems and real-time plans of every intelligent being. You’re a transhumanist, so perhaps you can accept such scenarios, but all of them lie on the other side of a singularity and cannot possibly define a practical political or technical pre-singularity strategy for overcoming this challenge. They are not designed for a world in which people are still people, in which they possess the cognitive privacy, autonomy, and idiosyncrasy that they naturally have, and in which there are no other types of intelligent actor on the scene. Any halfway-successful approach to forestalling nanotechnological (and related) doomsdays in that world will have to be a tactical one (again, “tactical” means we don’t expect it to be a very long-term solution; it is crisis management, a holding pattern), one which focuses first on the specificities of the technology (what exactly would make it so dangerous, and how that can be neutralized), and only secondarily on the social and psychological factors behind its potential misuse.