I don’t follow… Can you give an example where beliefs would matter?
I am not sure if I am answering your question, but:
a) If an AI is trying to maximize X and has the option of doing Y, then it matters whether the AI believes that Y counts as X. For example: an asteroid is about to hit the Earth, and it is impossible to avoid human deaths entirely, which the AI is trying to prevent. But the AI could scan all people and recreate them on another planet. Is this the best solution (all human lives saved) or the worst one (all humans killed, with copies created later)?
b) It's not only about what the AI believes; human beliefs matter too, because they contribute to human happiness, and the AI cares about human happiness. Should the AI avoid doing things that, by its own understanding, are harmless (and even have some positive side effects), but that people believe are wrongs done to them, making them unhappy? In the example above, will the recreated people have nightmares about being copies (and about being unprotected from another murder-and-copy by the AI if a second asteroid comes)? Both points are illustrated in the toy sketch below.
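As a minimal sketch of how the two belief-dependencies interact, here is a hypothetical toy model (all function names and numbers are my own illustrations, not anyone's actual proposal): the same action, scan-and-recreate, gets opposite evaluations depending on the AI's belief about personal identity (point a), and human happiness drops when the humans themselves believe the originals were killed (point b).

```python
# Toy model only: illustrative names and utilities, not a real design.

def ai_utility(action, ai_believes_copy_is_same_person):
    """Point a: the AI's evaluation of 'scan and recreate' flips
    with its belief about whether a copy is the same person."""
    if action == "scan_and_recreate":
        if ai_believes_copy_is_same_person:
            return +1.0   # "all human lives saved"
        return -1.0       # "all humans killed, copies created later"
    return 0.0

def human_happiness(action, humans_believe_copy_is_same_person):
    """Point b: even an action the AI deems harmless lowers happiness
    if people believe a wrong was done to them (nightmares about
    being copies)."""
    if action == "scan_and_recreate" and not humans_believe_copy_is_same_person:
        return -0.5   # distress from believing the originals were killed
    return 0.0

def total_value(action, ai_belief, human_belief):
    """The AI cares about human happiness, so both terms enter its total."""
    return ai_utility(action, ai_belief) + human_happiness(action, human_belief)

if __name__ == "__main__":
    for ai_b in (True, False):
        for hum_b in (True, False):
            print(f"AI believes same person: {ai_b!s:5}  "
                  f"humans believe same person: {hum_b!s:5}  "
                  f"value: {total_value('scan_and_recreate', ai_b, hum_b):+.1f}")
```

The point of the sketch is just that the sign of the outcome depends on two separate belief variables: how the AI classifies the act, and what the humans think happened to them.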
I sort of see your point now.
My guess would be that some people would shrug and go on with their (recreated) lives, some would grumble a bit first, and a tiny minority would be so traumatized by the thought that they would be unable to cope, perhaps even committing suicide. But on the whole, if the new life is not vastly different from the old one, it would be a non-event.
I agree with point b), more or less. Note that the AI (let's call it by its old name, God, shall we?) also has the option of not revealing what happened, if knowing it would be detrimental to the humans' happiness.