Imagine that reptiles developed the first AI. Imagine that they successfully designed it to maximize reptilian utility functions, faithfully, forever and ever, throughout the universe.
Now imagine that humans develop the first AI. Imagine that they successfully design it to maximize human utility functions, faithfully, forever and ever, throughout the universe.
In the long view, these two scenarios are nearly indistinguishable. The difference between them is smaller than the difference between bacteria and protozoa seems to us.
The human-centric sentiment in this post, which I’ve heard from many others thinking about the Singularity, reminds me why I sometimes think we would be safer rushing pell-mell into the Singularity than stopping to think about it. The production of untamed AI could lead to many horrible scenarios; but if you want to be sure to screw things up, have a human think hard about it. (To be doubly sure, take a popular vote on it.)
Your initial reaction is probably that this is ridiculous; but that’s because you’re thinking of humans designing simple things, like cars or digital watches. Empirically, however, with large complex systems such as ecosystems and economies, humans have usually made things worse when they thought hard about the problem and tried to make it better—especially when they made decisions through a political process. Religion, communism, monoculture crops, rent control, agricultural subsidies, foreign aid, the Biodome—I could go on. Only after centuries of painful experience do we humans learn to intervene in complex systems even as well as chance would. And we get only one shot at the Singularity.
(The complex system I speak of is not the AI, but the evolution of phenomena such as qualia, emotions, consciousness, and values.)