The core of http://singinst.org/riskintro/ that talks about risk appears to be:
“Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal. For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans. Such unsafe AIs might actively mimic safe benevolence until they became powerful, since being destroyed would prevent them from working toward their goals. Thus, a broad range of AI designs may initially appear safe, but if developed to the point of a Singularity could cause human extinction in the course of optimizing the Earth for their goals.”
Personally, I think that presents a very weak case for there being risk. It argues that there could be risk if we built these machines wrong and the bad machines somehow became powerful. That is true, but the reader is inclined to respond “so what?” A dam can be dangerous if you build it wrong, too. Such observations don’t say very much about the actual level of risk.