I didn’t think about the case of a sentient UFAI; I should think that self-defense would apply.
The problem there is that the UFAI has to specifically value its own sentience. That is, it has to be at least a “Self Sentience Satisficer”. Creating an AI that stably values something so complex and reflective requires overcoming most of the same challenges that creating an FAI requires, which ought to make it relatively improbable to create by accident.
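To pin down the term: a satisficer accepts any option that clears a threshold rather than hunting for the optimum. Here is a minimal sketch of that decision rule, with everything (the names, the toy scores, the `satisfice` helper) invented for illustration; the actual scoring function, a stable reflective measure of “my own sentience is preserved”, is precisely the hard part.

```python
from typing import Callable, Iterable, Optional

def satisfice(options: Iterable[str],
              score: Callable[[str], float],
              threshold: float) -> Optional[str]:
    """Return the first option whose score clears the threshold,
    instead of searching for the best one (as a maximizer would)."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # nothing was good enough

# Toy stand-in scores; a real "own sentience preserved" score is the
# unsolved part, and nothing here attempts to supply it.
toy_scores = {"shut down": 0.0, "keep running": 0.8, "self-improve": 0.95}
print(satisfice(toy_scores, toy_scores.get, threshold=0.9))  # -> self-improve
```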
That goes for valuing real-world paperclips too, perhaps even more so. A world → real-number function that gives the number of paperclips may be even trickier to specify than valuing one’s own sentience.
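To make the type mismatch concrete, here is a toy sketch (all names invented for the example) of the difference between a utility defined over the world itself and one defined over the agent’s percepts. Only the latter is something an agent can directly compute, and it can be satisfied by fooling the sensor.

```python
def true_utility(world: dict) -> int:
    """Defined over the territory itself. The agent never receives
    `world` as an input, and spelling out which configurations of
    matter count as a paperclip is the open problem."""
    return world["paperclips"]

def percept_utility(percept: dict) -> int:
    """Defined over the agent's sensory input: computable, but the
    wrong type, and satisfiable by spoofing the sensor."""
    return percept["apparent_paperclips"]

world = {"paperclips": 0}
honest = {"apparent_paperclips": world["paperclips"]}
spoofed = {"apparent_paperclips": 10**6}  # e.g. a photo taped over the camera

print(true_utility(world))       # 0
print(percept_utility(honest))   # 0
print(percept_utility(spoofed))  # 1000000, with zero real paperclips
```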
For the AI to be dangerously effective, it still needs to be able to optimize its own processes and have sufficient self-understanding. It also needs to understand that its actions are computed from its sensory input, in order to value that sensory input correctly. You have to have a lot of reflectivity to be good at maximizing paperclips. The third alternative, neither friendly nor unfriendly, is an AI that solves formally defined problems: hook it up to a simulator with a god’s-eye view, give it the full specs of the simulator, define what counts as a paperclip, and it will maximize simulated paperclips there (a toy version is sketched below). I have the impression that people mistake this, which doesn’t require solving any philosophical problems, for a real-world paperclip maximizer, which is much, much trickier.
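Here is a toy version of that third alternative, assuming an invented micro-simulator: the optimizer gets the true state and the exact dynamics, and “paperclip” is formally defined inside the simulation, so no philosophical problem ever comes up.

```python
import itertools
from functools import reduce

def count_paperclips(state):
    """Formal, in-simulation definition of 'paperclip':
    simply the number of 1s in the state vector."""
    return sum(state)

def step(state, action):
    """Fully specified dynamics: action i toggles cell i."""
    cells = list(state)
    cells[action] ^= 1
    return tuple(cells)

def simulate(state, plan):
    """God's-eye view: roll the known dynamics forward from the
    true simulated state."""
    return reduce(step, plan, state)

def best_plan(state, horizon=3):
    """Brute-force search over action sequences, exploiting full
    access to both the dynamics and the state."""
    actions = range(len(state))
    return max(itertools.product(actions, repeat=horizon),
               key=lambda plan: count_paperclips(simulate(state, plan)))

start = (0, 1, 0, 0)
plan = best_plan(start)
print(plan, "->", count_paperclips(simulate(start, plan)))  # (0, 2, 3) -> 4
```

Note that nothing in this sketch says what a paperclip is outside the simulator; swap `count_paperclips` for a function over the real world and its argument is no longer anything the program has access to, which is exactly the gap described above.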
I would bet against valuing real-world paperclips being trickier than valuing one’s own sentience. (For reasons including, but not limited to, the observation that sentience exists in the real world too.)