My point is that whatever you think of as utility, I can apply a positive monotonic transformation to it, maximize the expectation of that transformation, and this will still be rational (in the sense of complying with the Savage axioms).
Sure. That has no bearing on what I’m saying. You are still maximizing expectation of your utility. Your utility is not the function pre-transformation. The axioms apply only if the thing you are maximizing the expectation of is your utility function. There’s no reason to bring up applying a transformation to u to get a different u. You’re really not understanding me if you think that’s relevant.
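(For concreteness, here is a minimal numeric sketch of the point being argued over; the gambles and the square-root transform are invented purely for illustration. Maximizing the expectation of a nonlinear increasing transform of u is still expectation-maximization of some function, so it complies with the axioms, but it can rank gambles differently than maximizing E(u); only a positive affine transform is guaranteed to leave the ranking unchanged.)

```python
import math

# Two gambles over money outcomes; the numbers are invented for illustration.
# Each gamble is a list of (probability, outcome) pairs.
gamble_a = [(1.0, 100.0)]               # 100 for sure
gamble_b = [(0.5, 0.0), (0.5, 250.0)]   # fifty-fifty between 0 and 250

def u(x):
    return x  # take u to be linear in money, just for the example

def expected(gamble, f):
    """Expectation of f over the gamble's outcomes."""
    return sum(p * f(x) for p, x in gamble)

# Maximizing E(u): B is preferred (125 > 100).
print(expected(gamble_a, u), expected(gamble_b, u))

# Positive affine transform of u: same ranking, hence the same preferences.
affine = lambda x: 2 * u(x) + 3
print(expected(gamble_a, affine), expected(gamble_b, affine))

# Nonlinear increasing transform (square root): still "maximize the
# expectation of something", so the axioms are satisfied, but now A is
# preferred (10 > ~7.9), i.e. different preferences over gambles.
concave = lambda x: math.sqrt(u(x))
print(expected(gamble_a, concave), expected(gamble_b, concave))
```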
Maximizing any linear combination of the u and the probabilities will describe the same set of preferences over gambles as maximizing E(u).
Not at all. You can multiply each probability by a different constant if you do that. Or you can multiply them all by −1, and you would be minimizing E(u).
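(A small numeric check of this objection, with made-up gambles and coefficients: a single positive coefficient on E(u) preserves the ranking, a coefficient of −1 reverses it, and putting different coefficients on different probabilities can change which gamble comes out on top.)

```python
# Two gambles as lists of (probability, utility); the numbers are invented.
gamble_a = [(0.5, 10.0), (0.5, 0.0)]   # E(u) = 5
gamble_b = [(1.0, 4.0)]                # E(u) = 4

def weighted_score(gamble, coeffs):
    """A 'linear combination of the u and the probabilities':
    sum of coefficient * probability * utility, one coefficient per branch."""
    return sum(c * p * util for c, (p, util) in zip(coeffs, gamble))

# One common positive coefficient: same ranking as E(u), A preferred (10 > 8).
print(weighted_score(gamble_a, [2.0, 2.0]), weighted_score(gamble_b, [2.0]))

# A coefficient of -1 everywhere: the ranking is reversed (-4 > -5),
# which amounts to minimizing E(u).
print(weighted_score(gamble_a, [-1.0, -1.0]), weighted_score(gamble_b, [-1.0]))

# Different coefficients on different probabilities: now B beats A
# (4 > 0.5), so this "linear combination" no longer describes the same
# preferences as maximizing E(u).
print(weighted_score(gamble_a, [0.1, 1.0]), weighted_score(gamble_b, [1.0]))
```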
Did you even read the next paragraph where I tried to explain why it does have a bearing on what you’re saying? Do you have a response?
Fair. I assumed a positive constant. I shouldn’t have.
I read it. I don’t understand why you keep bringing up “u”, whatever that is.
You use u to represent the utility function on a possible world. We don’t care what is inside that utility function for the purposes of this argument. And you can’t get out of taking the expected value of your utility function by transforming it into another utility function; then you just have to take the expected value of that new utility function.
Read steven0461’s comment above. He has it spot on.
But you still haven’t engaged with it at all. I’m going to give this one last go before I give up.
Utilitarianism starts with a set of functions describing each individual’s welfare. To purge ourselves of any confusion over what u means, let’s call these w(i). It then defines W as the average (or the sum; the distinction isn’t relevant for the moment) of the w(i), and ranks certain (i.e. non-risky) states of the world higher if they have higher W. Depending on the type of utilitarianism you adopt, the w(i) could be defined in terms of pleasure, desire-satisfaction, or any number of other things.
The Savage/von Neumann-Morgenstern/Marschak approach starts from a set of axioms that consistent decision-makers are supposed to adhere to when faced with choices over gambles. It says that, for any consistent set of choices you might make, there exists a function f (mapping states of the world into real numbers), such that your choices correspond to maximizing E(f). As I think you realize, it puts no particular constraints on f.
Substituting distributions of individuals for probability distributions over the states of the world (and ignoring for the moment the other problems with this), the axioms now imply that for any consistent set of choices we might make, there exists a function f, such that our choices correspond to maximizing E(f).
Again, there are no particular constraints on f. As a result (and this is the crucial part), nothing in the axioms says that f has to have anything to do with the w(i). Because f does not have to have anything in particular to do with the w(i), E(f) does not have to have anything to do with W, and so the fact that we are maximizing E(f) says nothing about whether we are or should be good little utilitarians.
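(A toy illustration of the “no particular constraints on f” claim, with invented welfare numbers: let f score each state by the welfare of its worst-off individual. Maximizing E(f) over lotteries is still expectation-maximization of a function of the state, so the axioms are satisfied, yet f’s ranking of certain states can disagree sharply with W’s.)

```python
# Two certain (non-risky) states of the world, each listing the welfare
# levels w(i) of three individuals; all numbers are invented.
state_x = [10.0, 10.0, 10.0]   # equal welfare
state_y = [1.0, 1.0, 40.0]     # unequal, but a higher average

def W(state):
    """The utilitarian objective: the average of the w(i)."""
    return sum(state) / len(state)

def f(state):
    """One admissible f: the welfare of the worst-off individual.
    Nothing in the axioms rules this choice out."""
    return min(state)

print(W(state_x), W(state_y))   # 10 vs 14: W prefers the unequal state Y
print(f(state_x), f(state_y))   # 10 vs 1:  f prefers the equal state X

# An agent maximizing E(f) over lotteries on these states is maximizing the
# expectation of *a* function of the state, so it satisfies the axioms,
# yet its ranking of certain states disagrees with W's.
```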
I think you may be missing PhilGoetz’s point.
E(f) needn’t have anything to do with W. But it has to do with the f-values of all the different versions of you that there might be in the future (or in different Everett branches, or whatever). And it treats all of them symmetrically, looking only at their average and not at their distribution.
That is: the way in which you decide what to do can be expressed in terms of some preference you have about the state of the world, viewed from the perspective of one possible-future-you, but all those possible-future-yous have to be treated symmetrically, just averaging their preferences. (By “preference” I just mean anything that plays the role of f.)
So, Phil says, if that’s the only consistent way to treat all the different versions of you, then surely it’s also the only consistent way to treat all the different people in the world.
(This is of course the controversial bit. It’s far from obvious that you should see the possible-future-yous in the same way as you see the actual-other-people. For instance, because it’s more credible to think of those possible-future-yous as having the same utility function as one another. And because we all tend to care more about the welfare of our future selves than we do about other people’s. And so on.)
If so, then the following would be true: To act consistently, your actions must be such as to maximize the average of something (which, yes, need have nothing to do with the functions w, but it had better in some sense be the same something) over all actual and possible people.
I think Phil is wrong, but your criticisms don’t seem fair to me.
But it has to do with the f-values of all the different versions of you that there might be in the future (or in different Everett branches, or whatever).
I think this is part of the problem. Expected utility is based on epistemic uncertainty, which has nothing to do with objective frequency, including objective frequency across Everett branches or the like.
Nothing to do with objective frequency? Surely that’s wrong: e.g., as you gain information, your subjective probabilities ought to converge on the objective frequencies.
But I agree: the relevant sense of expectation here has to be the subjective one (since all an agent can use in deciding what to prefer is its own subjective probabilities, not whatever objective ones there may be), and this does seem like it’s a problem with what Phil’s saying.
the relevant sense of expectation here has to be the subjective one (since all an agent can use in deciding what to prefer is its own subjective probabilities, not whatever objective ones there may be), and this does seem like it’s a problem with what Phil’s saying.
It’s just as much a problem with all of decision theory, and all expectation maximization, as with anything I’m saying. This may be a difficulty, but it’s completely orthogonal to the issue at hand, since all of the alternatives have that same weakness.
I think the point is that, if probability is only in the mind, the analogy between averaging over future yous and averaging over other people is weaker than it might initially appear.
It sort of seems like there might be an analogy if you’re talking about averaging over versions of you that end up in different Everett branches. But if we’re talking about subjective probabilities, then there’s no sense in which these “future yous” exist, except in your mind, and it’s more difficult to see the analogy between averaging over them, and averaging over actual people.
Expected utility is based on epistemic uncertainty
What? So if you had accurate information about the probability distribution of outcomes, then you couldn’t use expected utility? I don’t think that’s right. In fact, it’s exactly the reverse. Expected utility doesn’t really work when you have epistemic uncertainty.
What does “accurate information about the probability distribution” mean? Probability is in the mind.
If I’m using subjective probabilities at all – if I don’t know the exact outcome – I’m working under uncertainty and using expected utility. If, in a multiverse, I know with certainty the objective frequency distribution of single-world outcomes, then yes, I just pick the action with the highest utility.
I don’t know that this affects your point, but I think we can make good sense of objective probabilities as being something else than either subjective probabilities or objective frequencies. See for example this.
If, in a multiverse, I know with certainty the objective frequency distribution of single-world outcomes, then yes, I just pick the action with the highest utility.
You still need to multiply the utility of those events by their probability. Objective frequency is not as objective as it may seem; it’s just a point at which the posterior is no longer expected to change, given more information. Or, alternatively, it’s “physical probability”, a parameter in your model that has nothing to do with subjective probability and expected utility maximization, and has a status similar to that of, say, mass.
Expected utility is based on epistemic uncertainty, which has nothing to do with objective frequency
If you have accurate information about the probability distribution, you don’t have epistemic uncertainty.
What was your original comment supposed to mean?
If, in a multiverse, I know with certainty the objective frequency distribution of single-world outcomes, then yes, I just pick the action with the highest utility.
This is the situation which the theory of expected utility was designed for. I think your claim is exactly backwards of what it should be.
If you have accurate information about the probability distribution, you don’t have epistemic uncertainty.
This is not what is meant by epistemic uncertainty. In a framework of Bayesian probability theory, you start with a fixed, exactly defined prior distribution. The uncertainty comes from working with big events on the state space, some of them coming in the form of states of variables, as opposed to individual states. See Probability Space and Random Variable; E.T. Jaynes (1990), “Probability Theory as Logic”, may also be helpful.
According to Wikipedia, that is what is meant by epistemic uncertainty. It says that one type of uncertainty is
“1. Uncertainty due to variability of input and / or model parameters when the characterization of the variability is available (e.g., with probability density functions, pdf),”
and that all other types of uncertainty are epistemic uncertainty.
And here’s a quote from “Separating natural and epistemic uncertainty in flood frequency analysis”, Bruno Merz and Annegret H. Thieken, J. of Hydrology 2004, which also agrees with me:
“Natural uncertainty stems from variability of the underlying stochastic process. Epistemic uncertainty results from incomplete knowledge about the process under study.”
This “natural uncertainty” is a property of distributions, while the epistemic uncertainty to which you refer here corresponds to what I meant. When you have incomplete knowledge about the process under study, you are working with one of multiple possible processes; you are operating inside a wide event that includes all these possibilities. I suspect you are still confusing the prior on the global state space with marginal probability distributions on variables. Follow the links I gave before.
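(A toy model of the distinction being drawn here, with invented numbers: put a prior over which process is operating and, within each process, a distribution over outcomes. The “epistemic” part is the spread over processes, the “natural” part is the spread within a process, and both live inside a single prior on the joint state space.)

```python
# Toy hierarchical model; all numbers are invented. Which process is
# operating is unknown (epistemic uncertainty); given the process, the
# outcome still varies (natural/aleatory variability).
processes = {
    "biased_low":  {"prior": 0.5, "p_heads": 0.3},
    "biased_high": {"prior": 0.5, "p_heads": 0.7},
}

# One fixed prior over the global state space of (process, outcome) pairs:
joint = {
    (name, outcome): spec["prior"] * (spec["p_heads"] if outcome == "H"
                                      else 1 - spec["p_heads"])
    for name, spec in processes.items()
    for outcome in ("H", "T")
}

# The marginal probability of heads mixes both kinds of uncertainty:
p_heads = sum(p for (name, outcome), p in joint.items() if outcome == "H")
print(p_heads)  # 0.5

# Conditioning on the process (narrowing down a "wide event") removes the
# epistemic part but leaves the natural variability: P(H | biased_high) = 0.7.
p_high = sum(p for (name, _), p in joint.items() if name == "biased_high")
print(joint[("biased_high", "H")] / p_high)  # 0.7
```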
To act consistently, your actions must be such as to maximize the average of something
Yes, but maximizing the average of something implies neither utilitarianism nor indifference to equity, as Phil has claimed it does. I don’t see how pointing this out is unfair.
You understand what I’m saying, so I’d very much like to know why you think it’s wrong.
Note that I’m not claiming that average utilitarianism must be correct. The axioms could be unreasonable, or a strict proof could fail for some as-yet unknown reason. But I think the axioms are either reasonable in both cases, or unreasonable in both cases; and so expected-value maximization and average utilitarianism go together.
See (1) the paragraph in my comment above beginning “This is of course the controversial bit”, (2) Wei_Dai’s comment further down and my reply to it, and (3) Nick Tarleton’s (basically correct) objection to my description of E(f) as being derived from “the f-values of all the different versions of you”.
E(f) needn’t have anything to do with W. But it has to do with the f-values of all the different versions of you that there might be in the future.
I think this is part of where things are going wrong. f values aren’t things that each future version of me has. They are especially not the value that a particular future me places on a given outcome, or the preferences of that particular future me. f values are simply mathematical constructs built to formalize the choices that current me happens to make over gambles.
Despite its name, expected utility maximization is not actually averaging over the preferences of future me-s; it’s just averaging a more-or-less arbitrary function, that may or may not have anything to do with those preferences.
As a result (and this is the crucial part), nothing in the axioms says that f has to have anything to do with the w(i).
Here we disagree. f is the utility function for a world state. If it were an arbitrary function, we’d have no reason to think that the axioms should hold for it. Positing the axioms is based on our commonsense notion of what utility is like.
I’m not assuming that there are a bunch of individual w(i) functions. Think instead of a situation where one person is calculating only their private utility. f is simply their utility function. You may be thinking that I have some definition of “utilitarianism” that places restrictions on f. “Average utilitarianism” does, but I don’t think “utilitarianism” does; and if it did, then I wouldn’t apply it here. The phrase “average utilitarianism” has not yet come into play in my argument by this point. All I ask at this point in the argument is what the theorem asks: that there be a utility function for the outcome.
I think you’re thinking that I’m saying that the theorem says that f has to be a sum or average of the w(i), and therefore we have to be average utilitarians. That’s not what I’m saying at all. I tried to explain that before. Read steven0461’s comment above, and my response to it.
The claim I am taking exception to is that the vNM axioms provide support for (average) utilitarianism, or suggest that we need not be concerned with inequality. This is what I took your bullet points 6 and 8 (in the main post) to be suggesting (not to mention the title of the post!).
If you are not claiming either of these things, then I apologize for misunderstanding you. If you are claiming either of these things, then my criticisms stand.
As far as I can tell, most of your first two paragraphs are inaccurate descriptions of the theory. In particular, f is not just an individual’s private utility function. To the extent that the vNM argument generalizes in the way you want it to, f can be any monotonic transform of a private utility function, which means, amongst other things, that we are allowed to care about inequality, and (average) utilitarianism is not implied.
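(To make the inequality point concrete, a small sketch with invented welfare numbers: ranking distributions by the average of a concave increasing transform of individual welfare is still “maximizing the average of something”, but it is not indifferent to equity, and it is not average utilitarianism over the raw welfare numbers.)

```python
import math

# Two hypothetical welfare distributions with the same average welfare.
equal   = [10.0, 10.0, 10.0, 10.0]
unequal = [1.0, 1.0, 1.0, 37.0]

def average(xs):
    return sum(xs) / len(xs)

def g(w):
    """A concave, strictly increasing transform of individual welfare."""
    return math.sqrt(w)

# Average welfare is identical, so average utilitarianism is indifferent:
print(average(equal), average(unequal))   # 10.0 vs 10.0

# Maximizing the average of g(w), still "the average of something",
# strictly prefers the equal distribution (~3.16 vs ~2.27):
print(average([g(w) for w in equal]), average([g(w) for w in unequal]))
```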
But I’ve repeated myself enough. I doubt this conversation is productive any more, if it ever was, so I’m going to forgo adding any more noise from now on.
Read steven0461’s comment above, and my response to it.
I read both of them when they were originally posted, and have looked over them again at your exhortation, but have sadly not discovered whatever enlightenment you want me to find there.
The claim I am taking exception to is that the vNM axioms provide support for (average) utilitarianism, or suggest that we need not be concerned with inequality. This is what I took your bullet points 6 and 8 (in the main post) to be suggesting (not to mention the title of the post!).
As steven0461 said,
If in all the axioms of the expected utility theorem you replace lotteries by distributions of individual welfare, then the theorem proves that you have to accept utilitarianism.
Not “proven”, really, but he’s got the idea.
As far as I can tell, most of your first two paragraphs are inaccurate descriptions of the theory. In particular, f is not just an individual’s private utility function. To the extent that the vNM argument generalizes in the way you want it to, f can be any monotonic transform of a private utility function, which means, amongst other things, that we are allowed to care about inequality, and (average) utilitarianism is not implied.
I am pretty confident that you’re mistaken. f is a utility function. Furthermore, it doesn’t matter that the vNM argument can apply to things that satisfy the axioms but aren’t utility functions, as long as it applies to the utility functions that we maximize when we are maximizing expected utility.
Either my first two bullet points are correct, or most of the highest-page-ranked explanations of the theory on the Web are wrong. So perhaps you could be specific about how they are wrong.
I understand what steven0461 said. I get the idea too, I just think it’s wrong. I’ve tried to explain why it’s wrong numerous times, but I’ve clearly failed, and don’t see myself making much further progress.
In lieu of further failed attempts to explain myself, I’m lodging a gratuitous appeal to Nobel Laureate authority, leaving some further references, and bowing out.
The following quote from Amartya Sen (1979) pretty much sums up my position (in the context of a similar debate between him and Harsanyi about the meaning of Harsanyi’s supposed axiomatic proof of utilitarianism).
[I]t is possible to define individual utilities in such a way that the only way of aggregating them is by summation. By confining his attention to utilities defined in that way, John Harsanyi has denied the credibility of “nonlinear social welfare functions.” That denial holds perfectly well for the utility measures to which Harsanyi confines his attention, but has no general validity outside that limited framework. Thus, sum-ranking remains an open issue to be discussed in terms of its moral merits – and in particular, our concern for equality of utilities – and cannot be “thrust upon” us on grounds of consistency.
Further refs, if anyone’s interested:
Harsanyi, John (1955), “Cardinal Welfare, Individualistic Ethics and Interpersonal Comparisons of Utility”, Journal of Political Economy 63. (Harsanyi’s axiomatic “proof” of utilitarianism.)
Diamond, P. (1967) “Cardinal Welfare, Individualistic Ethics and Interpersonal Comparisons of Utility: A Comment”, Journal of Political Economy 75.
Harsanyi, John (1975) “Nonlinear Social Welfare Functions: Do Welfare Economists Have a Special Exemption from Bayesian Rationality?” Theory and Decision 6(3): 311-332.
Sen, Amartya (1976) “Welfare Inequalities and Rawlsian Axiomatics,” Theory and Decision, 7(4): 243-262 (reprinted in R. Butts and J. Hintikka eds., Foundational Problems in the Special Sciences (Boston: Reidel, 1977)). (esp. section 2: Focuses on two objections to Harsanyi’s derivation: the first is the application of the independence axiom to social choice (as Wei Dai has pointed out), the second is the point that I’ve been making about the link to utilitarianism.)
Harsanyi, John (1977) “Nonlinear Social Welfare Functions: A Rejoinder to Professor Sen,” in Butts and Hintikka
Sen, Amartya (1977) “Non-linear Social Welfare Functions: A Reply to Professor Harsanyi,” in Butts and Hintikka
Sen, Amartya (1979) “Utilitarianism and Welfarism” The Journal of Philosophy 76(9): 463-489 (esp. section 2)
Parts of the Hintikka and Butts volume are available in Google Books.
(I’ll put these in the Harsanyi thread above as well.)
You know how the Reddit code is very clever, and you write a comment, and post it, and immediately see it on your screen?
Well, I just wrote the above comment out, and clicked “comment”, and it immediately appeared on my screen. And it had a score of 0 points when it first appeared.
And that’s the second time that’s happened to me.
Does this happen to anybody else? Is there some rule to the karma system that can make a comment have 0 starting points?
EDIT: This comment, too. Had 0 points at the start.