What I’m saying is this: if Clippy tried to calculate our volition, he would conclude that our volition is immoral. (Probably. Maybe our volition IS paperclips.)
But if we programmed an AI to calculate our volition and use that as its volition, and our morality as its morality, and so on, then it would not find our volition immoral unless we find our volition immoral, which seems unlikely.
An AI that was smarter than us might deduce that we were not applying the Deep Structure of our morality properly because of bias or limited intelligence. It might conclude that human morality requires humans to greatly reduce their numbers in order to lessen the impact on other species, for instance.