I don’t know, in terms of dystopia, I think that an AGI might decide to “phase us out” prior to the singularity, if it were really malevolent. Make a bunch of attractive but sterile female robots, and a bunch of attractive but sterile male robots. Keep people busy with sex until they die of old age. A “gentle good night” abolition of humanity that isn’t much worse (and maybe way better) than what they had experienced for 50M years.
Releasing sterile, attractive mates into a population is a good “low ecological impact” way of reducing it. Although, why would a superintelligence be opposed to _all_ humans? I find this somewhat unlikely, given a self-improving design.
This is probably not the most efficient use of the AGI’s time and resources...
Probably true, but I agree with Peter Voss: I don’t think malevolence is the most efficient use of the AGI’s time and resources, and the AGI has nothing to gain from it. I don’t think the dystopia I posited is the most likely outcome of superintelligence. However, while we are on the subject of the forms a malevolent AGI might take, I do think this is the type of malevolence most likely to allow the malevolent AGI to retain a positive self-image.
(Much the way environmentalists can feel better about introducing sterile males into crop-pest populations, “solving the problem” without polluting the environment.)
Ted Kaczynski worried about this scenario a lot. …I’m not much like him in my views.
The most efficient use of time and resources will be whatever best accomplishes the AI’s goals. If those goals are malevolent or lethally indifferent, so will the AI’s actions be. Unless those goals include maintaining a particular self-image, the AI will have no need to maintain any erroneous self-image.