None of what you have linked so far has particularly conveyed any new information to me, so I think I just flatly disagree with you. As that link says, the “utility” in utilitarianism just means some metric or metrics of “good”. People disagree about what exactly should go into “good” here, but godshatter refers to all the terminal values humans have, so that seems like a perfectly fine candidate for what the “utility” in utilitarianism ought to be. The classic “higher pleasures” of utilitarianism lend credence to this fitting into the classical framework; it is not a new idea that utilitarianism can include multiple terminal values with relative weighting.
Under utilitarianism, we are then supposed to maximize this utility. That is, maximize the satisfaction of the various terminal goals we are taking as good, aggregated into a single metric. And separately, there happens to be this elegant idea called “utility theory”, which tells us that if you have various preferences you are trying to maximize, there is a uniquely rational way to do that, which involves giving them relative weights and aggregating them into a single metric… You seriously think there’s no connection here? I honestly thought all this was obvious.
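The weighting-and-aggregation move I have in mind can be sketched concretely. (Every name, weight, and number below is my own illustrative assumption, not anything from the linked posts.) Several terminal values get relative weights, collapse into one scalar utility, and the agent maximizes its expectation:

```python
# Toy sketch: weighted terminal values -> one scalar utility,
# then choose the lottery with the highest expected utility.
weights = {"pleasure": 0.5, "knowledge": 0.3, "friendship": 0.2}

def utility(outcome):
    """Weighted aggregate of terminal-value scores -> one scalar."""
    return sum(w * outcome[value] for value, w in weights.items())

def expected_utility(lottery):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * utility(outcome) for p, outcome in lottery)

sure_thing = [(1.0, {"pleasure": 5, "knowledge": 5, "friendship": 5})]
gamble = [(0.5, {"pleasure": 10, "knowledge": 8, "friendship": 2}),
          (0.5, {"pleasure": 1, "knowledge": 1, "friendship": 1})]

# EU(sure_thing) = 5.0, EU(gamble) = 4.4, so the sure thing wins here.
best = max([sure_thing, gamble], key=expected_utility)
```

The point of the sketch is only that “weight, aggregate, maximize expectation” is a perfectly well-defined procedure for a single agent’s values.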
In that last link, they say “Now, it is sometimes claimed that one may use decision-theoretic utility as one possible implementation of the utilitarian’s ‘utility’” then go on to say why this is wrong, but I don’t find it to be a knockdown argument; that is basically what I believe and I think I stand by it. Like, if you plug “aggregate human well-being along all relevant dimensions” into the utility of utility theory, I don’t see how you don’t get exactly utilitarianism out of that, or at least one version of it?
EDIT: Please also see in the above post under “You should never try to reason using expected utilities again. It is an art not meant for you. Stick to intuitive feelings henceforth.” It seems to me that Eliezer goes on to consistently treat the “expected utilities” of utility theory as synonymous with the “utilities” of utilitarianism and the “consequences” of consequentialism. Do you agree that he’s doing this? If so, I assume you think he’s wrong to do it? Eliezer tends to call himself a utilitarian. Do you agree that he is one, or is he something else? What would you call “using expected utility theory to make moral decisions, taking the terminal value to be human well-being”?
In that last link, they say “Now, it is sometimes claimed that one may use decision-theoretic utility as one possible implementation of the utilitarian’s ‘utility’” then go on to say why this is wrong, but I don’t find it to be a knockdown argument; that is basically what I believe and I think I stand by it. Like, if you plug “aggregate human well-being along all relevant dimensions” into the utility of utility theory, I don’t see how you don’t get exactly utilitarianism out of that, or at least one version of it?
You don’t get utilitarianism out of it because, as explained at the link, VNM utility is incomparable between agents (and therefore cannot be aggregated across agents). There are no versions of utilitarianism that can be constructed out of decision-theoretic utility. This is an inseparable part of the VNM formalism.
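The incomparability point can be made concrete with a toy example (the agents, outcomes, and numbers are mine, purely for illustration). A VNM utility function is unique only up to positive affine transformation, so rescaling one agent’s utilities represents exactly the same preferences, yet it changes which outcome “maximizes the sum”:

```python
# Two agents with opposed preferences over outcomes X and Y.
u_a = {"X": 0.0, "Y": 1.0}  # A prefers Y
u_b = {"X": 1.0, "Y": 0.0}  # B prefers X

def best_by_sum(ua, ub):
    """Outcome maximizing the naive 'total utility'."""
    return max(ua, key=lambda o: ua[o] + ub[o])

def rescale(u, a, b):
    """Positive affine transform (a > 0): same VNM preferences."""
    return {o: a * v + b for o, v in u.items()}

# Using 3*u_b, an equally valid representation of B's preferences:
best_by_sum(u_a, rescale(u_b, 3.0, 0.0))  # -> "X"
# Using 3*u_a instead, equally valid for A:
best_by_sum(rescale(u_a, 3.0, 0.0), u_b)  # -> "Y"
```

Nothing in the formalism privileges one representation over the other, so “the sum of the agents’ utilities” is not a well-defined quantity to maximize.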
That having been said, even if it were possible to use VNM utility as the “utility” of utilitarianism (again, it is definitely not!), that still wouldn’t make them the same theory, or necessarily connected, or conceptually identical, or conceptually related, etc. Decision-theoretic expected utility theory isn’t a moral theory at all.
Really, this is all explained in the linked post…
Re: the “EDIT:” part:
It seems to me that Eliezer goes on to consistently treat the “expected utilities” of utility theory as synonymous with the “utilities” of utilitarianism and the “consequences” of consequentialism. Do you agree that he’s doing this?
No, I do not agree that he’s doing this.
Eliezer tends to call himself a utilitarian. Do you agree that he is one, or is he something else?
Yes, he’s a utilitarian. (“Torture vs. Dust Specks” is a paradigmatic utilitarian argument.)
What would you call “using expected utility theory to make moral decisions, taking the terminal value to be human well-being”?
I would call that “being confused”.
Coherently and accurately mapping “human well-being” (whatever that is) to some usable scalar (not vector!) “utility”, whose expectation you can then maximize, is probably the biggest challenge and obstacle for any attempt to formulate a moral theory around the intuition you describe. (“Utilitarianism using VNM utility” is a classic failed and provably unworkable attempt at doing this.)
If you don’t have any way of doing this, you don’t have a moral theory—you have nothing.