There are so many problems with this.
1. Relativity: there is no observer-independent ordering of events, so people's experiences are not objectively before or after each other.
2. On another planet, a happy civilization with our values existed billions of years ago. Over the course of its existence, there were quadrillions of people. Everything we do now is almost worthless by comparison, because with quadrillions of people already in the average, nothing we do can move it noticeably (a rough estimate of this dilution follows the list).
3. Oog, a member of the first sentient tribe, which has just come into existence, is going to be hit very hard with a club. Thag, a future member of the same tribe who does not yet exist, is going to be hit very hard with a club three times, 100 years from now. If you can prevent only one of these, your choice depends on the number of people who live in the tribe between now and then, even though they would no longer exist at that point.
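To put a rough number on problem 2 (my arithmetic, not from the original discussion): suppose the past civilization contributes $N$ people at average utility $\bar{u}$, and everything we ever do adds $n \ll N$ people at average utility $u$. The all-time average becomes

\[ \frac{N\bar{u} + nu}{N + n} = \bar{u} + \frac{n(u - \bar{u})}{N + n} \approx \bar{u} + \frac{n}{N}(u - \bar{u}), \]

so with $N \sim 10^{15}$ and $n \sim 10^{10}$, nothing we do shifts the average by more than about one part in $10^{5}$ of the gap $u - \bar{u}$.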
2 is irrelevant: it doesn't matter whether our current utility is in some absolute sense low, because we will still make the same decisions! U(A)=1, U(B)=100 gives the same outcome as U(A)=0.001 and U(B)=0.1.
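To spell out why the rescaling changes nothing (my notation, not the commenter's): for any constant $c > 0$,

\[ \arg\max_{x} \, c\,U(x) = \arg\max_{x} \, U(x), \]

and the example above is exactly such a rescaling with $c = 1/1000$. More generally, choices under expected utility are invariant under any positive affine transformation $U \mapsto aU + b$ with $a > 0$, so the absolute level of utility carries no decision-relevant information.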
1 and 3 can be solved by taking average utility over all agents, past and future. But that’s irrelevant to me because… I’m not an average utilitarian :-)
I’m waiting until I can capture my moral values in some (complicated) utility function. Until then, I’m refining my position.
I was thinking more of people who needed to choose between benefiting our society and benefiting the ancient society, for whom this distinction would be relevant. I guess this is mathematically equivalent to 3.
This brings back the original problem of killing people to bring up the average.
That can be dealt with in the usual way: set the utility of someone dead to zero but keep them in the average.
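A quick check that this blocks the move (my arithmetic, under that rule): with $n$ people at utilities $u_1, \dots, u_n$, killing person $k$ changes the average from $\frac{1}{n}\sum_{i} u_i$ to $\frac{1}{n}\sum_{i \neq k} u_i$, because the dead person stays in the denominator at utility zero. The average drops by exactly $u_k / n$, so killing anyone with positive utility always lowers it, however far below average they were.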
It feels odd to take an average over a possibly infinite future in this manner. It might work, but I feel like how well it matches our preferences will depend on the specifics of physics.
EDIT: This also implies that a world with 10 happy immortal people and 10 happy people who die is much worse than one with just 10 immortal people. Would you agree with that and all similar statements of that type implied by this solution?
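To make the EDIT concrete (my arithmetic, on one reading of the proposal where the dead are counted at zero): if everyone alive has utility $u$, then once the ten mortals have died, the first world averages $\frac{10u + 10 \cdot 0}{20} = \frac{u}{2}$, while the second averages $u$. The first world is rated half as good forever after, despite containing ten extra happy (if finite) lives.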
I agree with that to some extent (as in, I disagree as stated, but replace both 10s with 10 trillion and I'd agree). But I'm still firming up my intuition at the moment.