Your definition is wrong; I think that way of defining ‘utilitarianism’ is purely an invention of a few LWers who didn’t understand what the term meant and got it mixed up with ‘utility function’. AFAIK, there’s no field where ‘utilitarianism’ has ever been used to mean ‘having a utility function’.
Hm, I worry I might be a confused LWer. I definitely agree that “having a utility function” and “being a utilitarian” are not identical concepts, but they’re highly related, no? Would you agree that, to a first approximation, being a utilitarian means having a utility function with the evolutionary godshatter as terminal values? Even this is not identical to the original philosophical meaning I suppose, but it seems highly similar, and it is what I thought people around here meant.
Would you agree that, to a first approximation, being a utilitarian means having a utility function with the evolutionary godshatter as terminal values?
This is not even close to correct, I’m afraid. In fact being a utilitarian has nothing whatever to do with the concept of a utility function. (Nor—separately—does it have much to do with “evolutionary godshatter” as values; I am not sure where you got this idea!)
Please read this page for some more info presented in a systematic way.
I meant to convey a utility function with certain human values as terminal values, such as pleasure, freedom, beauty, etc.; godshatter was a stand-in.
If the idea of a utility function has literally nothing to do with moral utilitarianism, even around here, I would question why in the above when Eliezer is discussing moral questions he references expected utility calculations? I would also point to “intuitions behind utilitarianism” as pointing at connections between the two? Or “shut up and multiply”? Need I go on?
I know classical utilitarianism is not exactly the same, but even in what you linked, it talks about maximizing the total sum of human happiness and sacrificing some goods for others, measured under a single metric “utility”. That sounds an awful lot like a utility function trading off human terminal values? I don’t see how what I’m pointing at isn’t just a straightforward idealization of classical utilitarianism.
I meant to convey a utility function with certain human values as terminal values, such as pleasure, freedom, beauty, etc.; godshatter was a stand-in.
Yes, I understood your meaning. My response stands.
If the idea of a utility function has literally nothing to do with moral utilitarianism, even around here, I would question why in the above when Eliezer is discussing moral questions he references expected utility calculations?
What is the connection? Expected utility calculations can be, and are, relevant to all sorts of things, without being identical to, or similar to, or inherently connected with, etc., utilitarianism.
I would also point to “intuitions behind utilitarianism” as pointing at connections between the two? Or “shut up and multiply”? Need I go on?
The linked post makes some subtle points, as well as some subtle mistakes (or, perhaps, instances of unclear writing on Eliezer’s part; it’s hard to tell).
I know classical utilitarianism is not exactly the same, but even in what you linked, it talks about maximizing the total sum of human happiness and sacrificing some goods for others, measured under a single metric “utility”. That sounds an awful lot like a utility function trading off human terminal values? I don’t see how what I’m pointing at isn’t just a straightforward idealization of classical utilitarianism.
The “utility” of utilitarianism and the “utility” of expected utility theory are two very different concepts that, quite unfortunately and confusingly, share a term. This is a terminological conflation, in other words.
Here is a long explanation of the difference.
None of what you have linked so far has particularly conveyed any new information to me, so I think I just flatly disagree with you. As that link says, the “utility” in utilitarianism just means some metric or metrics of “good”. People disagree about what exactly should go into “good” here, but godshatter refers to all the terminal values humans have, so that seems like a perfectly fine candidate for what the “utility” in utilitarianism ought to be. The classic “higher pleasures” in utilitarianism lends credence toward this fitting into the classical framework; it is not a new idea that utilitarianism can include multiple terminal values with relative weighting.
Under utilitarianism, we are then supposed to maximize this utility. That is, maximize the satisfaction of the various terminal goals we are taking as good, aggregated into a single metric. And separately, there happens to be this elegant idea called “utility theory”, which tells us that if you have various preferences you are trying to maximize, there is a uniquely rational way to do that, which involves giving them relative weights and aggregating into a single metric… You seriously think there’s no connection here? I honestly thought all this was obvious.
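The aggregation step described here can be sketched in code. Note that the value names and weights below are purely illustrative assumptions, not anything the thread (or utility theory itself) commits to:

```python
# A minimal sketch, assuming a hypothetical agent whose terminal values and
# weights are stipulated up front.  Each outcome is scored on several
# terminal values, collapsed into one scalar via fixed weights, and gambles
# are then ranked by the probability-weighted average of that scalar.

WEIGHTS = {"pleasure": 0.5, "freedom": 0.3, "beauty": 0.2}  # illustrative only

def utility(outcome):
    """Scalar utility: weighted sum of the outcome's terminal-value scores."""
    return sum(w * outcome[name] for name, w in WEIGHTS.items())

def expected_utility(lottery):
    """Expectation of utility over a lottery given as [(probability, outcome), ...]."""
    return sum(p * utility(o) for p, o in lottery)

sure_thing = [(1.0, {"pleasure": 5, "freedom": 5, "beauty": 5})]
gamble = [(0.5, {"pleasure": 10, "freedom": 8, "beauty": 2}),
          (0.5, {"pleasure": 1, "freedom": 1, "beauty": 1})]

# With these (stipulated) weights, the sure thing beats the gamble:
assert expected_utility(sure_thing) > expected_utility(gamble)
```

Everything contested in this thread is hidden inside `WEIGHTS`: the VNM theorem guarantees that a coherent agent acts as if it has *some* such scalar, but it does not say which weights, or whose values.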
In that last link, they say “Now, it is sometimes claimed that one may use decision-theoretic utility as one possible implementation of the utilitarian’s ‘utility’” then go on to say why this is wrong, but I don’t find it to be a knockdown argument; that is basically what I believe and I think I stand by it. Like, if you plug “aggregate human well-being along all relevant dimensions” into the utility of utility theory, I don’t see how you don’t get exactly utilitarianism out of that, or at least one version of it?
EDIT: Please also see in the above post under “You should never try to reason using expected utilities again. It is an art not meant for you. Stick to intuitive feelings henceforth.” It seems to me that Eliezer goes on to consistently use the “expected utilities” of utility theory to be synonymous to the “utilities” of utilitarianism and the “consequences” of consequentialism. Do you agree that he’s doing this? If so, I assume you think he’s wrong for doing it? Eliezer tends to call himself a utilitarian. Do you agree that he is one, or is he something else? What would you call “using expected utility theory to make moral decisions, taking the terminal value to be human well-being”?
In that last link, they say “Now, it is sometimes claimed that one may use decision-theoretic utility as one possible implementation of the utilitarian’s ‘utility’” then go on to say why this is wrong, but I don’t find it to be a knockdown argument; that is basically what I believe and I think I stand by it. Like, if you plug “aggregate human well-being along all relevant dimensions” into the utility of utility theory, I don’t see how you don’t get exactly utilitarianism out of that, or at least one version of it?
You don’t get utilitarianism out of it because, as explained at the link, VNM utility is incomparable between agents (and therefore cannot be aggregated across agents). There are no versions of utilitarianism that can be constructed out of decision-theoretic utility. This is an inseparable part of the VNM formalism.
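The incomparability point can be made concrete with a small sketch (the outcomes and numbers are hypothetical):

```python
# VNM utility functions are unique only up to positive affine transformation
# (u -> a*u + b, with a > 0), so the SAME preferences admit many numerical
# representations.  Any cross-agent sum therefore depends on an arbitrary
# choice of representation.

# Agent 1's utilities over outcomes A and B (one arbitrary representation):
u1 = {"A": 0.0, "B": 1.0}
# Agent 2's utilities over the same outcomes:
u2 = {"A": 1.0, "B": 0.0}

def total(u_one, u_two, outcome):
    """Naive 'utilitarian' aggregate: sum of the two agents' utilities."""
    return u_one[outcome] + u_two[outcome]

# With these representations, the aggregate is indifferent between A and B:
assert total(u1, u2, "A") == total(u1, u2, "B")

# Rescale agent 2's utilities (a=10, b=0): this represents the SAME
# preferences (agent 2 still prefers A to B), yet the aggregate flips:
u2_rescaled = {k: 10 * v for k, v in u2.items()}
assert total(u1, u2_rescaled, "A") > total(u1, u2_rescaled, "B")
```

Since nothing in the formalism privileges one rescaling over another, the interpersonal sum is not well-defined.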
That having been said, even if it were possible to use VNM utility as the “utility” of utilitarianism (again, it is definitely not!), that still wouldn’t make them the same theory, or necessarily connected, or conceptually identical, or conceptually related, etc. Decision-theoretic expected utility theory isn’t a moral theory at all.
Really, this is all explained in the linked post…
Re: the “EDIT:” part:
It seems to me that Eliezer goes on to consistently use the “expected utilities” of utility theory to be synonymous to the “utilities” of utilitarianism and the “consequences” of consequentialism. Do you agree that he’s doing this?
No, I do not agree that he’s doing this.
Eliezer tends to call himself a utilitarian. Do you agree that he is one, or is he something else?
Yes, he’s a utilitarian. (“Torture vs. Dust Specks” is a paradigmatic utilitarian argument.)
What would you call “using expected utility theory to make moral decisions, taking the terminal value to be human well-being”?
I would call that “being confused”.
How to (coherently, accurately, etc.) map “human well-being” (whatever that is) to any usable scalar (not vector!) “utility” which you can then maximize the expectation of, is probably the biggest challenge and obstacle to any attempt at formulating a moral theory around the intuition you describe. (“Utilitarianism using VNM utility” is a classic failed and provably unworkable attempt at doing this.)
If you don’t have any way of doing this, you don’t have a moral theory—you have nothing.
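The scalar-versus-vector point can be made concrete. The two-dimensional “well-being” scores below are hypothetical:

```python
# A sketch of why a vector-valued "utility" is not enough: componentwise
# (Pareto) comparison yields only a partial order, so some pairs of outcomes
# are simply incomparable, and "maximize" is undefined without a further
# (and contested) choice of how to collapse the vector to a scalar.

def pareto_geq(a, b):
    """True if outcome a is at least as good as b on every dimension."""
    return all(x >= y for x, y in zip(a, b))

well_being_1 = (3, 7)   # hypothetical (pleasure, freedom) scores
well_being_2 = (7, 3)

# Neither outcome dominates the other, so the vector order cannot rank them:
assert not pareto_geq(well_being_1, well_being_2)
assert not pareto_geq(well_being_2, well_being_1)
```

Supplying the missing collapse from vector to scalar is exactly the hard part gestured at above.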
If the idea of a utility function has literally nothing to do with moral utilitarianism, even around here, I would question why in the above when Eliezer is discussing moral questions he references expected utility calculations
If he has a proof that utilitarianism, as usually defined (the highly altruistic ethical theory), is equivalent to maximization of an arbitrary UF, given some considerations about coherence, then he has something extraordinary that should be widely known.
Or he is using “utilitarianism” in a weird way… or he is not, and he is just confused.
I said nothing about an arbitrary utility function (nor proof for that matter). I was saying that applying utility theory to a specific set of terminal values seems to basically get you an idealized version of utilitarianism, which is what I thought the standard moral theory was around here.
If you know the utility function that is objectively correct, then you have the correct metaethics, and VNM-style utility maximisation only tells you how to implement it efficiently.
The first thing is “utilitarianism is true”, the second thing is “rationality is useful”.
But that goes back to the issue everyone criticises: EY recommends an object-level decision (prefer torture to dust specks) unconditionally, without knowing the reader’s UF.
If he had succeeded in arguing, or even tried to argue, that there is one true objective UF, then he would be in a position to hand out unconditional advice.
Or if he could show that preferring torture to dust specks was rational given an arbitrary UF, then he could also hand out unconditional advice (in the sense that conditioning on a subjective UF doesn’t make a difference). But he doesn’t do that, because if someone has a UF that places negative infinity utility on torture, that’s not up for grabs: their personal UF is what it is.
I had this confusion for a few years. It personally made me dislike the term utilitarian, because it really mismatched my internal ontology.