Why do we think it’s reasonable to say that we should maximize average utility across all our possible future selves?
Because that’s what we want, even if our future selves don’t. If I know I have a 50/50 chance of becoming a werewolf (permanently, to make things simple) and eating a bunch of tasty campers on the next full moon, then I can increase loqi’s expected utility by passing out silver bullets at the campsite ahead of time, at the expense of wereloqi’s utility.
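The werewolf case can be made concrete as a small expected-utility calculation. All the utility numbers below are illustrative assumptions of mine, not anything stated in the thread; the point is only the structure: the action is scored by loqi’s current utility function across both branches, so it can be worth hurting the wereloqi branch.

```python
# Sketch of the precommitment point: loqi's *current* utility function scores
# both possible futures, so handing out silver bullets can win even though it
# is bad for wereloqi. All numbers are made-up illustrations.

P_WEREWOLF = 0.5  # stipulated 50/50 chance in the thread

# Utility to loqi's current function, by (which self materializes, action taken).
U_HUMAN = {"no_bullets": 10, "bullets": 10}     # stays human: bullets never used
U_WEREWOLF = {"no_bullets": -100,               # campers get eaten
              "bullets": -20}                   # wereloqi is stopped

def expected_utility(action):
    """Expectation of loqi's current utility over the two possible selves."""
    return (1 - P_WEREWOLF) * U_HUMAN[action] + P_WEREWOLF * U_WEREWOLF[action]

# expected_utility("no_bullets") = -45.0, expected_utility("bullets") = -5.0,
# so distributing bullets maximizes current expected utility at wereloqi's expense.
```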
In other words, one can attempt to improve one’s expected utility, as defined by one’s current utility function, by anticipating situations in which one no longer implements that function.
I’m not asking questions about identity. I’m pointing out that almost everyone considers equitable distributions of utility better than inequitable distributions. So why do we not consider equitable distributions of utility among our future selves to be better than inequitable distributions?
I’m pointing out that almost everyone considers equitable distributions of utility better than inequitable distributions.
If that is true, then that means their utility is a function of the distribution of others’ utility, and they will maximize their expected utility by maximizing the expected equity of others’ utility.
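This claim has a simple structural reading: if someone’s utility is a function of how equitably utility is distributed among others, then maximizing their expected utility just is maximizing expected equity. A minimal sketch, where I assume variance as the inequality measure (any other inequality metric would give the same structure):

```python
# Sketch: an agent whose utility decreases in the inequality of others'
# utilities. Variance is my assumed inequality measure, chosen for simplicity.
from statistics import pvariance

def agent_utility(others_utilities):
    # Lower inequality among others -> higher utility for this agent.
    return -pvariance(others_utilities)

equal_dist = [5, 5, 5, 5]      # perfectly equitable
unequal_dist = [0, 2, 8, 10]   # same total utility, unevenly spread

# agent_utility(equal_dist) > agent_utility(unequal_dist): for such an agent,
# expected-utility maximization and expected-equity maximization coincide.
```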
So why do we not consider equitable distributions of utility among our future selves to be better than inequitable distributions?
Is this the case? I don’t know how you reached this conclusion. Even if it is the case, I also don’t see how this is necessarily inconsistent unless one also claims to make no value distinction between future selves and other people.
I don’t consider equitable distributions of utility better than inequitable distributions. I consider fair distributions better than unfair ones, which is not quite the same thing.
Put that way, the answer to the original question is simple: if my future selves are me, then I am entitled to be unfair to some of myself whenever, in my sole judgment, I have sufficient reason.
That’s a different question. That’s the sort of thing that a utility function incorporates; e.g., whether the system of distribution of rewards will encourage productivity.
If you say you don’t consider equitable distributions of utility better than inequitable distributions, you don’t get to specify which inequitable distributions can occur. You mean all inequitable distributions, including the ones in which the productive people get nothing and the parasites get everything.
What definition of “fair” are you using such that that isn’t a tautology?
Example: my belief that my neighbor’s money would yield more utility in my hands than in his doesn’t entitle me to steal it.
Do they? I don’t see this.