> In that last link, they say “Now, it is sometimes claimed that one may use decision-theoretic utility as one possible implementation of the utilitarian’s ‘utility’” then go on to say why this is wrong, but I don’t find it to be a knockdown argument; that is basically what I believe and I think I stand by it. Like, if you plug “aggregate human well-being along all relevant dimensions” into the utility of utility theory, I don’t see how you don’t get exactly utilitarianism out of that, or at least one version of it?
You don’t get utilitarianism out of it because, as explained at the link, VNM utility is incomparable between agents (and therefore cannot be aggregated across agents). There are no versions of utilitarianism that can be constructed out of decision-theoretic utility. This is an inseparable part of the VNM formalism.
That having been said, even if it were possible to use VNM utility as the “utility” of utilitarianism (again, it is definitely not!), that still wouldn’t make the two the same theory, or even conceptually related. Decision-theoretic expected utility theory isn’t a moral theory at all.
Really, this is all explained in the linked post…
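The incomparability point can be made concrete with a toy sketch. (The agents, outcomes, and numbers below are made up for illustration; nothing here is from the linked post.) A VNM utility function is only defined up to positive affine transformation, so rescaling one agent’s utilities changes nothing about that agent’s preferences, yet it can flip any “sum of utilities” comparison across agents:

```python
def expected_utility(u, lottery):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * u[outcome] for p, outcome in lottery)

# Two agents' (hypothetical) utilities over outcomes A and B.
alice = {"A": 2.0, "B": 0.0}
bob = {"A": 0.0, "B": 1.0}

lottery1 = [(0.7, "A"), (0.3, "B")]
lottery2 = [(0.3, "A"), (0.7, "B")]

# Apply a positive affine transformation (u -> 10*u + 1) to Bob's utility.
# VNM utility is only defined up to such transformations.
bob_rescaled = {o: 10.0 * v + 1.0 for o, v in bob.items()}

# Bob's own preferences are unchanged by the rescaling...
assert (expected_utility(bob, lottery2) > expected_utility(bob, lottery1))
assert (expected_utility(bob_rescaled, lottery2) > expected_utility(bob_rescaled, lottery1))

# ...but the "aggregate" comparison flips, so the sum is not meaningful.
total_before = expected_utility(alice, lottery1) + expected_utility(bob, lottery1)
vs_before = expected_utility(alice, lottery2) + expected_utility(bob, lottery2)
total_after = expected_utility(alice, lottery1) + expected_utility(bob_rescaled, lottery1)
vs_after = expected_utility(alice, lottery2) + expected_utility(bob_rescaled, lottery2)
print(total_before > vs_before, total_after > vs_after)  # prints: True False
```

Both representations of Bob are equally valid VNM utility functions for the very same preferences, yet they disagree about which lottery has the greater “total utility”. That is why there is no fact of the matter about the aggregate.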
Re: the “EDIT:” part:
> It seems to me that Eliezer goes on to consistently use the “expected utilities” of utility theory as synonymous with the “utilities” of utilitarianism and the “consequences” of consequentialism. Do you agree that he’s doing this?
No, I do not agree that he’s doing this.
> Eliezer tends to call himself a utilitarian. Do you agree that he is one, or is he something else?

Yes, he’s a utilitarian. (“Torture vs. Dust Specks” is a paradigmatic utilitarian argument.)
> What would you call “using expected utility theory to make moral decisions, taking the terminal value to be human well-being”?
I would call that “being confused”.
How to (coherently, accurately, etc.) map “human well-being” (whatever that is) to any usable scalar (not vector!) “utility”, whose expectation you can then maximize, is probably the biggest obstacle to any attempt at formulating a moral theory around the intuition you describe. (“Utilitarianism using VNM utility” is a classic and provably unworkable attempt at doing this.)
If you don’t have any way of doing this, you don’t have a moral theory—you have nothing.
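To see why the scalar-vs-vector point matters, here is a minimal sketch. (The well-being “dimensions” and weightings are hypothetical, chosen only for illustration.) Collapsing a well-being vector to a scalar requires choosing weights, and different, equally defensible weightings rank the very same outcomes differently:

```python
# Well-being as a (hypothetical) vector: (health, autonomy, pleasure).
outcome_x = (0.9, 0.2, 0.5)
outcome_y = (0.4, 0.9, 0.5)

def scalarize(wellbeing, weights):
    """Collapse a well-being vector to a scalar via a weighted sum."""
    return sum(w * v for w, v in zip(weights, wellbeing))

# Two equally arbitrary weightings of the dimensions.
health_first = (0.7, 0.2, 0.1)
autonomy_first = (0.2, 0.7, 0.1)

# The two weightings disagree about which outcome is "better":
print(scalarize(outcome_x, health_first) > scalarize(outcome_y, health_first))      # prints: True
print(scalarize(outcome_x, autonomy_first) > scalarize(outcome_y, autonomy_first))  # prints: False
```

Absent a principled way to pick the weights (or some other scalarization), “maximize expected well-being” doesn’t yet pick out any particular decision procedure.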