For the AI to be dangerously effective, it still needs to be able to optimize its own processes and have sufficient self-understanding. It also needs to understand that its actions are produced from its sensory input, in order to value that sensory input correctly. You need a lot of reflectivity to be good at maximizing paperclips. The third alternative, neither friendly nor unfriendly, is an AI that solves formally defined problems. Hook it up to a simulator with a god's-eye view, give it the full specs of the simulator, define what a paperclip is, and it will maximize simulated paperclips there. I have the impression that people mistake this (which doesn't require solving any philosophical problems) for a real-world paperclip maximizer, which is much, much trickier.
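To make the distinction concrete, here is a minimal sketch (all names and the toy simulator are hypothetical, invented for illustration) of such a formal problem solver: the optimizer gets the exact transition function, the full action set, and a formal definition of "paperclip" as a counter it can read directly, so none of the perception or self-modeling problems above arise.

```python
# Hypothetical sketch: a "formal problem solver" maximizing paperclips
# inside a fully specified simulator. The optimizer has a god's-eye view:
# it can copy the state, enumerate all actions, and read the paperclip
# count directly, so no philosophical problems need solving.

from itertools import product

ACTIONS = ["gather_wire", "bend_wire", "idle"]  # full action spec, given up front

def step(state, action):
    """Exact, fully known transition function of the toy simulator."""
    wire, clips = state
    if action == "gather_wire":
        return (wire + 1, clips)
    if action == "bend_wire" and wire > 0:
        return (wire - 1, clips + 1)
    return (wire, clips)

def paperclips(state):
    """'What's a paperclip' is formally defined: it is just this counter."""
    return state[1]

def best_plan(state, horizon):
    """Brute-force search over every action sequence up to the horizon.
    Tractable only because the simulator is tiny and fully transparent."""
    best, best_score = None, -1
    for plan in product(ACTIONS, repeat=horizon):
        s = state
        for a in plan:
            s = step(s, a)
        if paperclips(s) > best_score:
            best, best_score = plan, paperclips(s)
    return best, best_score

plan, score = best_plan(state=(0, 0), horizon=6)
print(plan, "->", score, "paperclips")  # alternating gather/bend yields 3 clips
```

Nothing here transfers to the real world: the moment the transition function, the action set, or the paperclip counter is no longer handed to the optimizer as a formal spec, you are back to the hard problem in the paragraph above.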