I don’t see “utility” or “utilitarianism” as meaningless or nearly meaningless words. “Utility” often refers to von Neumann–Morgenstern utilities and always refers to some kind of value assigned to something by some agent from some perspective that they have some reason to find sufficiently interesting to think about. And most ethical theories don’t seem utilitarian, even if perhaps it would be possible to frame them in utilitarian terms.
I can’t say I’m surprised a utilitarian doesn’t realize how vague it sounds? It is jargon taken from a word that simply means the ability to be used widely? Utility is an extreme abstraction, literally unassignable, and entirely based on guessing. You’ve straightforwardly admitted that it doesn’t have an agreed-upon basis. Is it happiness? Avoidance of suffering? Fulfillment of the values of agents? Etc.
Utilitarians constantly talk about monetary situations, because that is one place they can actually use it and get results? But there, it’s hardly different from ordinary statistics. Utility there is often treated as a simple function of money with diminishing returns. Looking up the term for the kind of utility you mentioned, it once again seems to use only monetary situations as examples, and sources claimed it was meant for lotteries and gambling.
Utility as a term makes sense there, but that is the only place on your list where there is general agreement on what utility means? That doesn’t mean it is a useless term, but it is a very vague one.
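The diminishing-returns treatment of money mentioned above can be made concrete. Here is a minimal sketch (using log utility, one common but by no means the only choice of concave function) of how expected utility comes apart from expected money in a lottery, which is exactly the gambling setting those sources describe:

```python
import math

def log_utility(wealth):
    """A diminishing-returns utility function: each extra
    dollar adds less utility than the previous one."""
    return math.log(wealth)

# A 50/50 lottery: end up with either $50,000 or $150,000.
outcomes = [50_000, 150_000]
probs = [0.5, 0.5]

expected_money = sum(p * w for p, w in zip(probs, outcomes))    # 100,000
expected_utility = sum(p * log_utility(w)
                       for p, w in zip(probs, outcomes))

# Because log is concave, the utility of the sure expected amount
# exceeds the expected utility of the gamble, so an agent with this
# utility function prefers $100,000 for certain over the lottery.
print(log_utility(expected_money) > expected_utility)  # True
```

Nothing here settles what utility *is*; it only shows why the monetary case is the one where the math has an agreed-upon, checkable meaning.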
Since you claim there isn’t agreement on the other aspects of the theories, that makes them more of an artificial category where the adherents don’t really agree on anything. The only real connection seems to be wanting to do math on how good things are?
The only real connection seems to be wanting to do math on how good things are?
Yes, to me utilitarian ethical theories do seem usually more interested in formalizing things. That is probably part of their appeal. Moral philosophy is confusing, so people seek to formalize it in the hope of understanding things better (that’s the good reason to do it, at least; often the motivation is instead academic, or signaling, or obfuscation). Consider Tyler Cowen’s review of Derek Parfit’s arguments in On What Matters:
Parfit at great length discusses optimific principles, namely which specifications of rule consequentialism and Kantian obligations can succeed, given strategic behavior, collective action problems, non-linearities, and other tricks of the trade. The Kantian might feel that the turf is already making too many concessions to the consequentialists, but my concern differs. I am frustrated with this very long and very central part of the book, which cries out for formalization or at the very least citations to formalized game theory.
If you’re analyzing a claim such as — “It is wrong to act in some way unless everyone could rationally will it to be true that everyone believes such acts to be morally permitted” (p.20) — words cannot bring you very far, and I write this as a not-very-mathematically-formal economist.
Parfit is operating in the territory of solution concepts and game-theoretic equilibrium refinements, but with nary a nod in their direction. By the end of his lengthy and indeed exhausting discussions, I do not feel I am up to where game theory was in 1990.