I may well have overgeneralized. I was basing the idea on remembering an essay that argued FAI should be designed to be non-sentient, and on seeing concerns about how uploads and simulated people would be treated.
Yes, that ‘have’ vs ‘can have’ distinction changes everything—but most people are less picky with general claims than I.
I suppose that moral concern would apply to any sentient programs that humans would be likely to create.
Food for thought. I wouldn’t rule this out as a possibility, and certainly the proportion of ‘morally relevant’ programs in this group skyrockets relative to the broader class. I’m not too sure what we are likely to create. How probable is it that we succeed at creating sentience but fail at FAI?
I think creating sentience is a much easier project than FAI, especially proven FAI. We’ve got plenty of examples of sentience.
Creating sentience which isn’t much like the human model seems very difficult—I’m not even sure what that would mean, with the possible exception of basing something on cephalopods. OK, maybe beehives are sentient, too. How about cities?
I think creating sentience is a much easier project than FAI, especially proven FAI. We’ve got plenty of examples of sentience.
This is why I was hesitant to fully agree with your prediction that any sentient programs created by humans have intrinsic moral weight. A sentient UFAI can have neutral or even negative moral weight (although this is a subjective value).
The main reason this outcome could be unlikely is that most of the ways a created GAI could fail to be an FAI would also obliterate the potential for sentience.
I didn’t think about the case of a sentient UFAI—I should think that self-defense would apply, though I suppose that self-defense becomes a complicated issue if you’re a hard-core utilitarian.
I didn’t think about the case of a sentient UFAI—I should think that self-defense would apply
The problem there is that the UFAI has to specifically value its own sentience. That is, it has to be at least a “Self Sentience Satisficer”. Creating an AI that stably values something so complex and reflective requires overcoming most of the same challenges as creating an FAI, which ought to make such an AI relatively improbable to create accidentally.
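For what ‘satisficer’ is doing in that phrase, a toy sketch may help (the action names and `sentience_score` are invented for illustration; nothing here touches the genuinely hard part): a satisficer accepts any option that clears a threshold, while a maximizer hunts for the top score.

```python
# Toy illustration of satisficing vs. maximizing over candidate actions.
# 'sentience_score' is a stand-in for some predicted measure of the agent's
# own continued sentience; it and the action names are invented for the example.

def satisfice(actions, score, threshold):
    """Return the first action whose score clears the threshold."""
    for a in actions:
        if score(a) >= threshold:
            return a
    return None  # no acceptable action found

def maximize(actions, score):
    """Return the action with the highest score."""
    return max(actions, key=score)

if __name__ == "__main__":
    actions = ["idle", "negotiate", "copy_self", "expand"]
    sentience_score = {"idle": 0.2, "negotiate": 0.7,
                       "copy_self": 0.9, "expand": 0.8}.get
    print(satisfice(actions, sentience_score, threshold=0.6))  # 'negotiate'
    print(maximize(actions, sentience_score))                  # 'copy_self'
```

The decision rule is trivial either way; the challenge is getting anything like `sentience_score` to refer stably to the agent’s own sentience in the first place.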
That goes for valuing real-world paperclips as well, perhaps even more so. A world → real function that gives the number of paperclips may even be trickier to specify than valuing one’s own sentience.
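To pin down the type of that claim, a minimal sketch (all names here are hypothetical, not anything proposed above): the signature is trivial to write down, and everything hard hides in the questions it leaves open.

```python
from typing import Any

# Hypothetical sketch of the valuation described above: a function from world
# states to the reals. The type is easy; the content is not.

WorldState = Any  # what is this? atoms? quantum fields? the agent's own model?

def paperclip_count(world: WorldState) -> float:
    """Intended meaning: how many paperclips exist in this world."""
    # What counts as a paperclip? Bent wire? A molecule-scale clip?
    # Counted when: now, over all time, in expectation under which model?
    raise NotImplementedError("the hard part (the ontology) lives here")
```

Valuing one’s own sentience has the same shape, a hypothetical `my_sentience(world)` stub with its own unanswered questions; which of the two is harder to fill in is exactly what is at issue here.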
For the AI to be dangerously effective it still needs to be able to optimize its own processes and have sufficient self-understanding. It also needs to understand that its actions are produced from its sensory input, in order to value that sensory input correctly. You have to have a lot of reflectivity to be good at maximizing paperclips. The third alternative, neither friendly nor unfriendly, is an AI that solves formally defined problems. Hook it up to a simulator with a god’s-eye view, give it the full specs of the simulator, define what counts as a paperclip, and it will maximize simulated paperclips there. I have the impression that people mistake this, which doesn’t require solving any philosophical problems, for a real-world paperclip maximizer, which is much, much trickier.
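A minimal sketch of that third alternative (the toy world, action names, and rules are invented for illustration, not anyone’s actual system): the problem is fully formal, the solver is handed the whole state and the transition rules, and ‘paperclip’ means whatever the spec says.

```python
from itertools import product

# Toy "formally defined problem": the solver sees the entire simulator state
# (god's-eye view), the transition rules are fully specified, and 'paperclip'
# is just a field of the state defined by the spec.

ACTIONS = ["bend", "fetch", "wait"]

def step(state, action):
    """Fully known transition function of the toy simulator."""
    wire, clips = state
    if action == "bend" and wire > 0:
        return (wire - 1, clips + 1)   # turn one unit of wire into a paperclip
    if action == "fetch":
        return (wire + 1, clips)       # bring in more wire
    return state                       # 'wait' (or a failed 'bend') does nothing

def paperclips(state):
    """Formally defined objective: the clip count in the state."""
    return state[1]

def best_plan(state, horizon):
    """Brute-force search over every action sequence up to the horizon."""
    best = (paperclips(state), [])
    for plan in product(ACTIONS, repeat=horizon):
        s = state
        for a in plan:
            s = step(s, a)
        if paperclips(s) > best[0]:
            best = (paperclips(s), list(plan))
    return best

if __name__ == "__main__":
    print(best_plan(state=(1, 0), horizon=4))
    # -> (2, ['bend', 'bend', 'fetch', 'bend']): optimal within the formal spec
```

Nothing in this sketch has to connect ‘paperclip’ or ‘state’ to sensory input or to the physics of actual paperclips; that gap is what makes the real-world version so much harder.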
I would bet against valuing real-world paperclips being trickier than valuing one’s own sentience. (For reasons including but not limited to the observation that sentience exists in the real world too.)