How would being moved by this argument help me achieve my values? I don’t see how it helps me to maximize an aggregate utility function for all possible agents. I don’t care intrinsically about Felix, nor is Felix capable of cooperating with me in any meaningful way.
How does your aggregate utility function weight agents? That seems to be what the argument is about.