Not only does there have to be UFAI risk; FAI development must also reduce that risk, which to me looks like the shakiest of the propositions. A buggy FAI that doesn't somehow break itself is certainly unfriendly (e.g. it may want to euthanize you to end your suffering, or to split your brain into its two hemispheres to satisfy each hemisphere's different desires, or something far more bizarre), while a random AI out of the design space might, for example, typically wirehead everything except curiosity, and then it would just keep us in a sort of wildlife preserve.
Note: try to avoid the just-world fallacy. Working harder to make an AI friendlier doesn't necessarily make the result friendlier. The universe doesn't grade for effort.
We humans do a lot of wireheaded stuff. Fiction, art, MSG in food, porn, cosmetic implants… we aren't doing it literally with a wire into the head, but we find creative ways to satisfy goals… maybe even the desire to build AI is itself the result of a broken goal system. And that wireheadedness makes us tolerate other forms of life instead of setting out to exterminate everything, as we already would have if we still purely pursued reproductive goals.
Hell, we even wirehead our curiosity and quest for knowledge (see religions), entirely internally, by breaking the goal system with self-suggestion.
re: Benoit Mandelbrot and fractals, fractals are way, way older (the Koch snowflake dates to 1904, Julia sets to 1918); the actual study of them had to wait for computers.