I think that’s all rather unnecessary. The only reason we don’t like people to die is the continuous experience they enjoy; it’s a consistent causal network we don’t want dying on us. I gather from this that the AI would be producing models with enough causal complexity to match actual sentience (not just saying “I am conscious” because the AI hears that phrase a lot). If the AI is only calling a given person-model to discover answers to questions, the thing isn’t really feeling for long enough stretches of time to mind whether it goes away. Also, for the predicate to be tested, I imagine the model would have to be created first, and at that point it’s too late!
You don’t want the AI to use a sentient model to find out whether a certain action leads to a thousand years of pain and misery. Or even a couple of hours. Or minutes.