This is back to the original argument, and not about the definition of expected utility functions or the status of utilitarianism in general.
PhilGoetz’s argument appears to contain a contradiction similar to the one Moore discusses in Principia Ethica, where he argues that the principle of egoism does not entail utilitarianism.
Egoism: X ought to do what maximizes X’s happiness. Utilitarianism: X ought to do what maximizes EVERYONE’s happiness
(or put X_0 for X, and X_sub_x for Everyone).
X’s happiness is not logically equivalent to Everyone’s happiness. The important takeaway here is that because happiness is indexed to an individual person (at least as defined in the egoistic principle), each person’s happiness is an independent logical term.
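To make the non-equivalence concrete, here is a toy sketch (the people, actions, and numbers are all invented purely for illustration; nothing hangs on them):

```python
# Toy illustration (made-up numbers): the action that maximizes X's happiness
# need not be the action that maximizes everyone's happiness.

# happiness[person][action] = how happy that person is if X performs that action
happiness = {
    "X": {"keep": 10, "share": 6},
    "Y": {"keep": 1, "share": 8},
    "Z": {"keep": 1, "share": 8},
}
actions = ["keep", "share"]

egoist_choice = max(actions, key=lambda a: happiness["X"][a])
utilitarian_choice = max(actions, key=lambda a: sum(person[a] for person in happiness.values()))

print(egoist_choice)       # "keep"  -- maximizes X's happiness (10 vs 6)
print(utilitarian_choice)  # "share" -- maximizes everyone's happiness (22 vs 12)
```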
We have to broaden the scope of egoism slightly to cover whatever concept of a utility function you use, and to accommodate the discussion of possible selves. However, unless you have a pretty weird concept of self/identity, I don’t see why it wouldn’t work. In that situation, X’s future self in all possible worlds bears a relationship to X at time 0, such that future X’s happiness is independent of future Everyone’s happiness.
Anyway, appealing to von Neumann-Morgenstern doesn’t work here. There is no logical reason to believe that averaging possible states with regard to an individual’s utility has any implications for averaging happiness over many different individuals.
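To spell out the two operations being contrasted (a sketch with hypothetical numbers): expected utility averages one agent’s utility over possible states, while average utilitarianism averages many agents’ utilities within a state. They are sums over different index sets.

```python
# Two different averaging operations (illustrative numbers only).

# Expected utility: ONE agent's utility, averaged over possible STATES.
p = {"rain": 0.3, "sun": 0.7}        # probabilities of states
u_X = {"rain": 2.0, "sun": 10.0}     # X's utility in each state
expected_utility_X = sum(p[s] * u_X[s] for s in p)                # 0.3*2 + 0.7*10 = 7.6

# Average utilitarianism: MANY agents' utilities, averaged within ONE state.
u_in_sun = {"X": 10.0, "Y": 4.0, "Z": 1.0}                        # each person's utility if "sun" obtains
average_utility_in_sun = sum(u_in_sun.values()) / len(u_in_sun)   # 5.0

# The first calculation ranges over states for a fixed person; the second
# ranges over persons for a fixed state. Neither licenses conclusions about the other.
print(expected_utility_X, average_utility_in_sun)
```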
As an addendum, neither average nor total utility provides a solution to the fairness, or justice, issue (i.e. how utility is distributed among people, which at least has some common-sense gravity to it). Individual utility maximization more or less does not have to deal with that issue at all (there might be some issues with time-ordering of preferences, etc., but that’s not close to the same thing). That’s another sign von Neumann-Morgenstern just doesn’t give an answer as to which ethical system is more rational.
There is no logical reason to believe that averaging possible states with regard to an individual’s utility has any implications for averaging happiness over many different individuals.
How is it different? Aren’t all of the different possible future yous different people? In both cases you are averaging utility over many different individuals. It’s just that in one case, all of them are copies of you.
That’s why I threw in the disclaimer about needing some theory of self/identity. Possible future Phils must bear a special relationship to the current Phil, which is not shared by all other future people—or else you lose egoism altogether when speaking about the future.
There are certainly some well-thought-out arguments that when thinking about your possible future, you’re thinking about an entirely different person, or a variety of different possible people. But the more you go down that road, the less clear it is that classical decision theory has any rational claim on what you ought to do. The Ramsey/Von Neumann-Morgenstern framework tacitly requires that when a person acts so as to maximize his expected utility, he does so with the assumption that he is actually maximizing HIS expected utility, not someone else’s.
This framework only makes sense if each possible person over which the utility function is defined is the agent’s future self, not another agent altogether. There needs to be some logical or physical relationship between the current agent and the class of future possible agents such that their self/identity is maintained.
The less clear that the identity is maintained, the less clear that there is a rational maxim that the agent should maximize the future agent’s utility...which, among other things, is a philosopher’s explanation for why we discount future value when performing actions, beyond what you get from the simple time value of money.
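One way to render that thought numerically (just a sketch; the “continuity” weight is a parameter I am inventing for illustration, not something the VNM framework supplies): the effective discount on a future benefit could be the ordinary financial discount multiplied by a factor for how strongly the future agent still counts as you.

```python
# Sketch: discounting future value by the time value of money AND by identity continuity.
# The "continuity" weight (0..1) is a hypothetical parameter for illustration only.

def effective_present_value(future_benefit, years, annual_rate, continuity):
    """Discount a future benefit financially, then weight it by how much the
    future beneficiary is identified with the present agent."""
    financial_discount = 1.0 / ((1.0 + annual_rate) ** years)
    return future_benefit * financial_discount * continuity

# Pure time value of money:
print(effective_present_value(100.0, years=10, annual_rate=0.05, continuity=1.0))  # ~61.4
# Weaker identification with the ten-years-later self discounts it further:
print(effective_present_value(100.0, years=10, annual_rate=0.05, continuity=0.5))  # ~30.7
```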
So you still have the problem that the expected utility is, for instance, defined over all possible future Phils’ utilities, not over all possible future people’s. Possible Phils are among the class of possible people (I presume), but not vice versa. So there is no logical guarantee that a process that holds for possible Phils also holds for possible future people.
The Ramsey/Von Neumann-Morgenstern framework tacitly requires that when a person acts so as to maximize his expected utility, he does so with the assumption that he is actually maximizing HIS expected utility, not someone else’s.
Sure, and when you actually do the expected utility calculation, you hold the utility function constant, regardless of who specifically is theoretically acting. For example, I can maximize my expected utility by sabotaging a future evil self. To do this, I have to make an expected utility calculation involving a future self, but my speculative calculation does not incorporate his utility function (except possibly as useful information).
The less clear that the identity is maintained, the less clear that there is a rational maxim that the agent should maximize the future agent’s utility
This maxim isn’t at all clear to me to begin with. Maximizing your future self’s utility is not the same as maximizing your current self’s utility. The only time these are necessarily the same is when there is no difference in utility function between current and future self, but at that point you might as well just speak of your utility, period. If you and all your future selves possess the same utility function, you all by definition want exactly the same thing, so it makes no sense to talk about providing “more utility” to one future self than another. The decision you make carries exactly the same utility for all of you.
You’re right that insofar as the utility function of my future self is the same as my current utility function, I should want to maximize the utility of my future self. But my point with that statement is precisely that one’s future self can have very different interests than one’s current self, as you said (hence, the heroin addict example. EDIT: Just realized I deleted that from the prior post! Put back in at the bottom of this one!).
Many (or arguably most) actions we perform can be explained (rationally) only in terms of future benefits. Insofar as my future self just is me, there’s no problem at all. It is MY present actions that are maximizing MY utility (where actions are present, and utility not necessarily indexed by time, and if it is indexed by time, not by reference to present and future selves, just to ME). I take something like that to be the everyday view of things. There is only one utility function, though it might evolve over time
(the evolution brings about its own complexities. If a 15-year-old who dislikes wine is offered a $50,000 bottle of wine for $10, to be given to him when he is 30, should he buy the wine? Taking a shortsighted look, he should turn it down. But if he knows that by age 30 he’s going to be a wine connoisseur, maybe he should buy it after all, because it’s a great deal).
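A rough way to run that calculation (the utilities here are stipulated; the only point is that the verdict flips depending on which self’s utility function you consult):

```python
# Wine example, with made-up utilities.

price = 10             # what the 15-year-old pays now
u_wine_at_15 = 0       # the 15-year-old gets nothing out of the bottle
u_wine_at_30 = 500     # the 30-year-old connoisseur values it highly
# treat $1 of cost as 1 utilon, purely for simplicity

# Shortsighted view: evaluate the deal with the current utility function only.
net_now = u_wine_at_15 - price       # -10: turn it down

# Anticipating view: evaluate it with the utility function he knows he'll have at 30.
net_later = u_wine_at_30 - price     # +490: buy it

print(net_now, net_later)
```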
However, on the view brought up by Phil, that an expected utility function is defined over many different future selves, who just are many different people, you have to make things more complicated (or at the very least, we’re on the edge of having to complicate things). Some people will argue that John age 18, John age 30, and John age 50 are three completely different people. On this view, it is not clear that John age 18 rationally ought to perform actions that will make the lives of Johns age 30/50 better (at little detriment to his present day). On the extreme view, John’s desire to have a good job at age 30 does not provide a reason to go to college—because John at 18 will never be 30, some other guy will reap the benefits (admittedly, John likely receives some utility from the deceived view that he is progressing toward his goal; but then progression, not the goal itself, is the end that rationalizes his actions). Unless you establish a utilitarian or altruistic rational norm, etc., the principles of reason do not straightforwardly tell us to maximize other people’s utilities.
The logic naturally breaks apart even more when we talk about many possible John age 30s, all of whom live quite different lives and are not the same agent at all as John age 18. It really breaks down if John age 18 + 1 second is not the same as John age 18. (On a short time scale, very few actions, if any, derive immediate utility. e.g. I flip the light switch to turn on the light, but there is at least a millisecond between performing the basic action and the desired effect occurring).
Which is why, if many of our actions are to make rational sense, an agent’s identity has to be maintained through time...at least in some manner. And that’s all I really wanted to establish, so as to show that the utilities in an expected utility calculation are still indexed to an individual, not a collection of people that have nothing to do with each other (maybe John1, John2, etc. are slightly different—but not so much as John1 and Michael are). However, if someone wants to take the view that John age 18 and John age 18 + 1s are as different as John and Michael, I admittedly can’t prove that someone wrong.
EDIT: Heroin example (sorry for any confusion):
You are having surgery tomorrow. There’s a 50% chance that (a) you will wake up with no regard for former interests and relationships, and hopelessly addicted to heroin. There’s a 50% chance that (b) you will wake up with no major change to your personality. You know that in (a) you’ll be really happy if you come home from surgery to a pile of heroin. And in (b), if you come home and remember that you wasted your life savings on heroin, you will only be mildly upset.
In order to maximize the expected utility of the guy who’s going to come out of surgery, you should go out and buy all of the heroin you can (and maybe pay someone to prevent you from ODing). But it’s by no means clear that you rationally ought to do this. You are trying to maximize your utility. Insofar as you question whether or not the heroin addict in (a) counts as yourself, you should minimize the importance of his fate in your expected utility calculation. Standing here today, I don’t care what that guy’s life is like, even if it is my physical body. I would rather make the utility of myself in (b) slightly higher, even at the risk of making the utility of the person in (a) significantly lower.
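Concretely, the calculation I have in mind looks something like this (the utilities and the “counts as me” weights are stipulated for illustration, not derived from anything):

```python
# Heroin example: expected utility with an explicit "counts as me" weight.
# All numbers are stipulated for illustration.

p_addict, p_normal = 0.5, 0.5

# Utilities for the person who wakes up in each branch, given each choice:
u_addict_if_buy = 100    # the addict is really happy to find the pile
u_normal_if_buy = -10    # the unchanged self is mildly upset about the wasted savings
u_addict_if_not = -100   # the addict is miserable without it
u_normal_if_not = 0      # the unchanged self is fine

def expected_utility(buy, w_addict, w_normal):
    """Weight each branch by its probability AND by how much that person counts as me."""
    if buy:
        return p_addict * w_addict * u_addict_if_buy + p_normal * w_normal * u_normal_if_buy
    return p_addict * w_addict * u_addict_if_not + p_normal * w_normal * u_normal_if_not

# If both future people count fully as me, buying the heroin wins:
print(expected_utility(True, 1.0, 1.0), expected_utility(False, 1.0, 1.0))   # 45.0 -50.0
# If I don't count the addict as myself at all, not buying wins:
print(expected_utility(True, 0.0, 1.0), expected_utility(False, 0.0, 1.0))   # -5.0 0.0
```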
Many (or arguably most) actions we perform can be explained (rationally) only in terms of future benefits.
Mostly true, but Newcomb-like problems can muddy this distinction.
There is only one utility function, though it might evolve over time
No, it can’t. If the same utility function can “evolve over time”, it’s got type (Time → Outcome → Utilons), but a utility function just has type (Outcome → Utilons).
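In Python-ish terms, the type distinction being pointed at (Outcome and Utilons are stand-in types here, chosen only for illustration):

```python
# A utility function maps outcomes to utilons; something that "evolves over time"
# takes a time argument as well, which makes it a different kind of object.
from typing import Callable

Outcome = str
Utilons = float

UtilityFunction = Callable[[Outcome], Utilons]            # Outcome -> Utilons
TimeIndexedUtility = Callable[[float], UtilityFunction]   # Time -> (Outcome -> Utilons)

u: UtilityFunction = lambda outcome: 1.0 if outcome == "wine" else 0.0

def u_at(t: float) -> UtilityFunction:
    # a family of utility functions indexed by time, e.g. the wine drinker's tastes by age
    return lambda outcome: (1.0 if t >= 30 else 0.0) if outcome == "wine" else 0.0
```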
Unless you establish a utilitarian or altruistic rational norm, etc., the principles of reason do not straightforwardly tell us to maximize other people’s utilities.
Agreed. The same principle applies to the utility of future selves.
It really breaks down if John age 18 + 1 second is not the same as John age 18.
No, it really doesn’t. John age 18 has a utility function that involves John age 18 + 1 second, who probably has a similar utility function. Flipping the light grants both of them utility.
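A trivial sketch of that reply (names and values invented): the outcome John-at-18 cares about can lie a moment after his action, and his one-second-later successor assigns it roughly the same value.

```python
# Sketch: John-at-t's utility function is defined over outcomes, including ones
# that occur after t, so an action whose payoff lands a millisecond later still
# grants utility to the agent who acted, and to his near-identical successor.

u_john_t0 = lambda outcome: 1.0 if outcome == "light on at t0 + 1ms" else 0.0
u_john_t1 = lambda outcome: 1.0 if outcome == "light on at t0 + 1ms" else 0.0  # near-identical

outcome_of_flipping = "light on at t0 + 1ms"
print(u_john_t0(outcome_of_flipping), u_john_t1(outcome_of_flipping))  # 1.0 1.0
```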
Insofar as you question whether or not the heroin addict in (a) counts as yourself, you should minimize the importance of his fate in your expected utility calculation.
I don’t see how this follows. The importance of the heroin addict in my expected utility calculation reflects my values. Identity is (possibly) just another factor to consider, but it has no intrinsic special privilege.
I would rather make the utility of myself in (b) slightly higher, even at the risk of making the utility of the person in (a) significantly lower.
That may be, but your use of the word “utility” here is confusing the issue. The statement “I would rather” is your utility function. When you speak of “making the utility of (b) slightly higher”, then I think you can only be doing so because “he agrees with me on most everything, so I’m actually just directly increasing my own utility” or because “I’m arbitrarily dedicating X% of my utility function to his values, whatever they are”.
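For what it’s worth, the second reading can be written down explicitly as a weighted mixture of two utility functions (the weight x and the sample outcomes below are arbitrary choices of mine, not something the thread fixes):

```python
# Sketch of "dedicating X% of my utility function to his values":
# a weighted mixture of two utility functions.

def combined_utility(u_mine, u_his, x):
    """Return a utility function giving weight x to his values and (1 - x) to mine."""
    return lambda outcome: (1.0 - x) * u_mine(outcome) + x * u_his(outcome)

u_mine = lambda outcome: {"no heroin": 5.0, "pile of heroin": -10.0}[outcome]
u_his = lambda outcome: {"no heroin": -100.0, "pile of heroin": 100.0}[outcome]

u = combined_utility(u_mine, u_his, x=0.05)
print(u("no heroin"), u("pile of heroin"))   # -0.25 -4.5
# How much of (a)'s fate gets counted is fixed entirely by the chosen weight,
# which is exactly the arbitrariness being pointed out above.
```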