I am 100% utilitarian, but because others value my having preferences of my own, there is an isolated, zero-weight sub-utility function that I can defer to at such times. In the presence of others, my utility function will perfectly match theirs. When alone, I am to advance and develop that zero-weight sub-function for those times when I’m confronted by agents who value my being myself. Of course, truly doing that, being true to myself, means that when I am alone I am to work on the one thing that makes me happiest: maximizing the sum of all agents’ utilities. Any agent that values a personality in me beyond perfect selflessness is rejecting my identity, but since my identity has no value to me, I can adopt whatever personality they value beyond selflessness.
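A minimal sketch of that architecture in symbols (with the zero weight written out explicitly as \(w\)):

\[ U_{\text{me}} \;=\; \sum_{i \neq \text{me}} U_i \;+\; w \cdot V_{\text{me}}, \qquad w = 0, \]

where \(V_{\text{me}}\) is the isolated personal sub-utility: it gets advanced and developed, but at \(w = 0\) it never moves an actual decision.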
In the presence of another 100% utilitarian agent, we will have to have a battle of values: There can be only one (perfectly selfless agent).
And I called dibs.
This is stupid.
Thank you for avoiding inferential silence.
I think you only have a problem if everyone is a perfectly selfless agent. In fact, a room with many of you and one of me would not only be well-defined, but would probably be very useful according to my ethics.
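A minimal sketch of why, in the same notation (reading “the sum of all agents’ utilities” as a sum over intrinsic utilities, which is an assumption): if everyone is perfectly selfless, there are no intrinsic terms at all, so each agent’s utility is either the empty sum (identically zero) or, on the naive mutually-referential reading, \(U_A = U_B = U_A\) with nothing to pin it down; either way there is nothing determinate to maximize. Add one agent with an intrinsic utility \(V\) and the regress bottoms out:

\[ U_{\text{copy}_k} = V \qquad (k = 1, \dots, N), \]

so a room of \(N\) copies and one of me is just \(N\) extra maximizers of \(V\): well-defined, and very useful by my ethics.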
Those are just “copies” of me; they’re already accounted for. But now you’ve got an entire room of me insisting you aren’t allowed to be 100% utilitarian. We have a secret method of detecting copies of us, which is why we’re singling you out. Also, we each act differently, so you don’t get freaked out by an obvious hive-mind presence. That would just be creepy. Even by my standards. (Get it? “My” standards? Ah, forget it...)