How about a sentient AI whose utility function is orthogonal to yours? You care nothing about anything it cares about and it cares about nothing you care about. Also, would you call such an AI sentient?
You said it was sentient, so of course I would call it sentient. I would either value that future or disvalue it. I'm not sure to what extent I would be glad some creature was happy, or to what extent I'd be mad at it for killing everyone else, though.