That’s why I threw in the disclaimer about needing some theory of self/identity. Possible future Phils must bear a special relationship to the current Phil, one not shared by all other future people; otherwise you lose egoism altogether when speaking about the future.
There are certainly some well-thought-out arguments that when thinking about your possible future, you’re thinking about an entirely different person, or a variety of different possible people. But the further you go down that road, the less clear it is that classical decision theory has any rational claim on what you ought to do. The Ramsey/von Neumann-Morgenstern framework tacitly requires that when a person acts so as to maximize his expected utility, he does so on the assumption that he is maximizing HIS expected utility, not someone else’s.
This framework only makes sense if each possible person over whom the utility function is defined is the agent’s future self, not another agent altogether. There needs to be some logical or physical relationship between the current agent and the class of future possible agents such that their self/identity is maintained.
The less clear it is that identity is maintained, the less clear it is that there is a rational maxim requiring the agent to maximize the future agent’s utility...which, among other things, is a philosopher’s explanation for why we discount future value when choosing actions, beyond what you get from the simple time value of money.
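To make the "extra discounting" idea concrete, here is a minimal sketch. The financial rate is the ordinary time value of money; the `identity_decay` parameter is a hypothetical extra factor (my invention, not anything from the discussion above) for how much the agent at time t still counts as "me". All numbers are illustrative.

```python
# Sketch: two sources of discounting on a future payoff.
# r is the ordinary financial discount rate; identity_decay is a
# hypothetical per-year rate at which the future agent stops counting
# as "me" (0.0 = perfect persistence of identity).

def discounted_value(value, years, r=0.03, identity_decay=0.0):
    financial = (1 + r) ** -years
    identity = (1 - identity_decay) ** years
    return value * financial * identity

# $100 in 10 years, money-only discounting:
print(round(discounted_value(100, 10), 2))                      # 74.41
# Same payoff, if identity also fades 5% a year:
print(round(discounted_value(100, 10, identity_decay=0.05), 2)) # 44.55
```

The point of the sketch is only that identity decay multiplies into the discount factor: the shakier the claim that the future person is you, the less his payoff weighs today, over and above the interest rate.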
So you still have the problem that the utility function is, for instance, defined over all possible future Phils’ utilities, not over all possible future people’s. Possible Phils are among the class of possible people (I presume), but not vice versa. So there is no logical guarantee that a principle which holds for possible Phils also holds for possible future people generally.
You’re right that insofar as the utility function of my future self is the same as my current utility function, I should want to maximize the utility of my future self. But my point with that statement is precisely that one’s future self can have very different interests than one’s current self, as you said (hence the heroin addict example. EDIT: Just realized I deleted that from the prior post! Put back in at the bottom of this one!).
Many (or arguably most) actions we perform can be explained (rationally) only in terms of future benefits. Insofar as my future self just is me, there’s no problem at all: it is MY present actions that are maximizing MY utility (where the actions are present, and the utility is not necessarily indexed by time; and if it is indexed by time, not by reference to present and future selves, just to ME). I take something like that to be the everyday view of things. There is only one utility function, though it might evolve over time
(the evolution brings its own complexities. If a 15-year-old who dislikes wine is offered a $50,000 bottle of wine for $10, to be given to him when he is 30, should he buy the wine? Taking a shortsighted look, he should turn it down. But if he knows that by age 30 he’s going to be a wine connoisseur, maybe he should buy it after all, because it’s a great deal).
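The wine case can be put as a tiny decision rule. This is a sketch under the single-evolving-utility view described above; the function name and the zero value the 15-year-old places on wine are my assumptions for illustration.

```python
# Sketch of the wine example: evaluate the $10 purchase under the buyer's
# utility function at the time the good is consumed, not only under
# today's tastes. All values are hypothetical.

def should_buy(price, value_to_future_self, value_now=0):
    shortsighted = value_now > price          # judge by today's tastes
    farsighted = value_to_future_self > price # judge by the tastes he'll have at 30
    return shortsighted, farsighted

print(should_buy(price=10, value_to_future_self=50_000))  # (False, True)
```

On today's tastes he turns it down; treating the age-30 tastes as a later stage of one evolving utility function, he buys. The disagreement between the two rules is exactly the complexity the parenthetical points at.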
However, on the view brought up by Phil, that an expected utility function is defined over many different future selves, who just are many different people, you have to make things more complicated (or at the very least, we’re on the edge of having to complicate things). Some people will argue that John at age 18, John at age 30, and John at age 50 are three completely different people. On this view, it is not clear that John at 18 rationally ought to perform actions that will make the lives of the Johns at 30 and 50 better (at little detriment to his present self). On the extreme view, John’s desire to have a good job at age 30 does not provide a reason to go to college, because John at 18 will never be 30; some other guy will reap the benefits (admittedly, John likely receives some utility from the deceived view that he is progressing toward his goal; but then the progression, not the goal itself, is the end that rationalizes his actions). Unless you establish a utilitarian or altruistic rational norm, etc., the principles of reason do not straightforwardly tell us to maximize other people’s utilities.
The logic naturally breaks apart even more when we talk about many possible Johns at age 30, all of whom live quite different lives and none of whom is the same agent as John at 18. It really breaks down if John at age 18 plus one second is not the same as John at age 18. (On a short time scale, very few actions, if any, deliver immediate utility; e.g., I flip the light switch to turn on the light, but there is at least a millisecond between performing the basic action and the desired effect occurring.)
Which is why, if many of our actions are to make rational sense, an agent’s identity has to be maintained through time...at least in some manner. And that’s all I really wanted to establish, so as to show that the utilities in an expected utility calculation are still indexed to an individual, not to a collection of people who have nothing to do with each other (maybe John1, John2, etc. are slightly different, but not as different as John1 and Michael are). However, if someone wants to take the view that John at 18 and John at 18-plus-one-second are as different as John and Michael, I admittedly can’t prove that person wrong.
EDIT: Heroin example (sorry for any confusion):
You are having surgery tomorrow. There’s a 50% chance that (a) you will wake up with no regard for your former interests and relationships, and hopelessly addicted to heroin. There’s a 50% chance that (b) you will wake up with no major change to your personality. You know that in (a) you’ll be really happy if you come home from surgery to a pile of heroin, and in (b) if you come home and remember that you wasted your life savings on heroin, you will be only mildly upset.
In order to maximize the expected utility of the guy who’s going to come out of surgery, you should go out and buy all the heroin you can (and maybe pay someone to prevent you from ODing). But it’s by no means clear that you rationally ought to do this. You are trying to maximize your utility. Insofar as you question whether the heroin addict in (a) counts as yourself, you should minimize the weight of his fate in your expected utility calculation. Standing here today, I don’t care what that guy’s life is like, even if it is my physical body. I would rather make the utility of myself in (b) slightly higher, even at the risk of making the utility of the person in (a) significantly lower.
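The heroin calculation above can be run through a minimal sketch. The utility numbers and the `identity_weight` value (how much the post-surgery addict counts as "me") are hypothetical assumptions of mine, chosen only to show how the recommendation flips.

```python
# Sketch of the heroin example: expected utility of "buy heroin now"
# vs. "don't buy", with an optional identity weight on the addict
# outcome. All utility numbers are illustrative, not from the post.

def expected_utility(u_addict, u_normal, p_addict=0.5, identity_weight=1.0):
    """Weight the addict-outcome utility by how much the agent counts
    that future person as himself (1.0 = fully me, 0.0 = a stranger)."""
    return p_addict * identity_weight * u_addict + (1 - p_addict) * u_normal

# Full identity weight: the addict's fate counts as your own.
buy = expected_utility(u_addict=100, u_normal=-10)   # addict thrilled, normal self mildly upset
dont = expected_utility(u_addict=-100, u_normal=0)   # addict miserable, normal self indifferent
print(buy, dont)        # 45.0 -50.0: buying wins

# Barely counting the addict as yourself (weight 0.02):
buy_w = expected_utility(u_addict=100, u_normal=-10, identity_weight=0.02)
dont_w = expected_utility(u_addict=-100, u_normal=0, identity_weight=0.02)
print(buy_w, dont_w)    # now NOT buying wins
```

The flip is the whole point: with full identity weight, expected utility says to stockpile heroin; once the addict's claim to being you is heavily discounted, the mild upset of the self in (b) dominates, and you don't buy.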