Of course, my mistake. I meant that you can't alter an agent's desires by reason alone. You can't appeal to the desires *you* have; you can only appeal to the desires *it* has. So, when Clippy is about to turn the iron in your blood into paperclips, and you want to live, it's no use pleading "But I want to live a long and happy life!". If Clippy hasn't got empathy, and you have nothing to offer that would help fulfill his own desires, then there's nothing to be done, other than try to physically stop or kill him.
Maybe you'd be happier if you put him on a planet of his own, where a machine constantly destroyed paperclips and he stayed happy making new ones. My point is just that, if you do decide to make him happy, it's not the optimal decision relative to some universal preference, or morality. It's just the optimal decision relative to your desires. Is that 'right'? Yes. That's what we refer to when we say 'right'.