[link] Choose your (preference) utilitarianism carefully – part 1
Summary: Utilitarianism is often ill-defined by supporters and critics alike, preference utilitarianism even more so. I briefly examine some of the axes of utilitarianism common to all popular forms, then look at some axes unique but essential to preference utilitarianism, which seem to have received little to no discussion – at least not this side of a paywall. In this way I hope to clarify future discussions between hedonistic and preference utilitarians, and perhaps to shed some light for their critics too, though I’m aiming the discussion primarily at utilitarians and utilitarian-sympathisers.
http://valence-utilitarianism.com/?p=8
I like this essay particularly for the way it breaks down different forms of utilitarianism along various axes, which have rarely been discussed on LW.
For utilitarianism in general:
Many of these axes are well discussed, pertinent to almost any form of utilitarianism, and at least reasonably well understood, so I don’t propose to examine them here beyond highlighting their salience. They include, but probably aren’t restricted to, the following (a toy sketch of this axis-space follows the list):
What is utility? (for the sake of easy reference, I’ll give each axis a simple title – for this, the utility axis); eg happiness, fulfilled preferences, beauty, information
How drastically are we trying to adjust it? aka what, if any, is the criterion for ‘right’ness? (sufficiency axis); eg satisficing, maximising[2], scalar
How do we balance tradeoffs between positive and negative utility? (weighting axis); eg negative, negative-leaning, positive (as in fully discounting negative utility – I don’t think anyone actually holds this), ‘middling’ ie ‘normal’ (often called positive, but it would benefit from a distinct adjective)
What’s our primary mentality toward it? (mentality axis); eg act, rule, two-level, global
How do we deal with changing populations? (population axis); eg average, total
To what extent do we discount future utility? (discounting axis); eg zero discount, >0 discount
How do we pinpoint the net zero utility point? (balancing axis); eg Tännsjö’s test, experience tradeoffs
What is a utilon? (utilon axis) [3] – I don’t know of any examples of serious discussion on this (other than generic dismissals of the question), but it’s ultimately a question utilitarians will need to answer if they wish to formalise their system.
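To make the axis-space framing concrete, here is a toy sketch of my own (not the author’s; the field names and string labels are hypothetical conveniences) of how any particular form of utilitarianism can be seen as a single point along these axes:

```python
from dataclasses import dataclass

# Toy model: a "form of utilitarianism" as one point in the axis-space above.
# Field names and string values are hypothetical labels, for illustration only.
@dataclass(frozen=True)
class Utilitarianism:
    utility: str      # utility axis: "happiness", "fulfilled preferences", "beauty", ...
    sufficiency: str  # sufficiency axis: "satisficing", "maximising", "scalar"
    weighting: str    # weighting axis: "negative", "negative-leaning", "middling"
    mentality: str    # mentality axis: "act", "rule", "two-level", "global"
    population: str   # population axis: "average", "total"
    discount: float   # discounting axis: 0.0 = no discounting of future utility
    balancing: str    # balancing axis: how the net-zero utility point is pinpointed

# Eg, classical total hedonistic act utilitarianism, roughly:
classical = Utilitarianism(
    utility="happiness",
    sufficiency="maximising",
    weighting="middling",
    mentality="act",
    population="total",
    discount=0.0,
    balancing="experience tradeoffs",
)
```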
For preference utilitarianism in particular:
Here, then, are the six most salient dependent axes of preference utilitarianism, ie those that describe what could count as utility for PUs. I’ll refer to the poles on each axis as (axis)0 and (axis)1, where any intermediate view will be (axis)X. We can then formally refer to subtypes, and also exclude them, eg ~(F0)R1PU, or ~(F0 v R1)PU etc, or represent a range, eg C0..XPU (see the sketch after this list for one way to read the notation).
How do we process misinformed preferences? (information axis F)
(F0 no adjustment / F1 adjust to what it would have been had the person been fully informed / FX somewhere in between)
How do we process irrational preferences? (rationality axis R)
(R0 no adjustment / R1 adjust to what it would have been had the person been fully rational / RX somewhere in between)
How do we process malformed preferences? (malformation axes M)
(M0 ignore them / MF1 adjust to fully informed / MFR1 adjust to fully informed and rational (shorthand for MF1R1) / MFXRX adjust to somewhere in between)
How long is a preference relevant? (duration axis D)
(D0 during its expression only / DF1 during and future / DPF1 during, future and past (shorthand for DP1F1) / DPXFX somewhere in between)
What constitutes a preference? (constitution axis C)
(C0 phenomenal experience only / C1 behaviour only / CX a combination of the two)
What resolves a preference? (resolution axis S)
(S0 phenomenal experience only / S1 external circumstances only / SX a combination of the two)
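One way to read this notation (a minimal sketch of my own, assuming each axis can be mapped to a value in [0, 1], with 0 and 1 as the poles and anything strictly between as an intermediate X view; the compound M and D axes are collapsed to single numbers here for brevity):

```python
from dataclasses import dataclass

# Toy reading of the pole notation: each axis takes a value in [0, 1];
# 0 and 1 are the poles, anything strictly between is an intermediate (X) view.
# The compound M and D axes are collapsed to single numbers for brevity.
@dataclass(frozen=True)
class PUSubtype:
    F: float  # information axis: 0 = no adjustment, 1 = adjust to fully informed
    R: float  # rationality axis: 0 = no adjustment, 1 = adjust to fully rational
    M: float  # malformation axes, collapsed here
    D: float  # duration axis, collapsed here
    C: float  # constitution axis: 0 = phenomenal experience only, 1 = behaviour only
    S: float  # resolution axis: 0 = phenomenal experience only, 1 = external only

def is_not_F0_and_R1(pu: PUSubtype) -> bool:
    """Matches ~(F0)R1PU: any PU not at the F0 pole that sits at the R1 pole."""
    return pu.F != 0.0 and pu.R == 1.0

def is_C0_to_CX(pu: PUSubtype) -> bool:
    """Matches C0..XPU: a range on the constitution axis, from the C0 pole
    through intermediate CX views, excluding the C1 pole."""
    return 0.0 <= pu.C < 1.0
```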
What distinguishes these categorisations is that each, as far as I can perceive, has no analogous axis within hedonistic utilitarianism. In other words, to a hedonistic utilitarian, such axes would either be meaningless or have only one logical answer. But any well-defined and consistent form of preference utilitarianism must sit at some point on every one of these axes.
See the article for more detailed discussion about each of the axes of preference utilitarianism, and more.
I always just want to know: how do you propose to naturalize utilitarianism, thus showing your normative questions to actually be factual ones, thus showing that your normative claims are in fact grounded?
I would like to write an essay about that eventually, but I figured persuading PUs of the merits of HU was lower-hanging fruit.
For what it’s worth, I have a lot of sympathy with your scepticism—I would rather (and believe it possible to) build up a system resembling ethics without reference to normativity, ‘oughts’, or any of their associated baggage. I think the trick will be to properly understand the overlap of ethics and epistemology, both of which are subject to similar questions (how do we non-question-beggingly ‘ground’ ‘factual’ questions?), but the former of whose questions people disproportionately emphasise.
[ETA] It’s also hard to pin down what the null hypothesis would be. Calling it ‘nihilism’ of any kind is just defining the problem away. For example, if you just decide you want to do something nice for your friend—in the sense of something beneficial for her, rather than just picking an act that will give you warm fuzzies—then your presumption of what category of things would be ‘nice for her’ implicitly judges how to group states of the world. If you also feel like some things you might do would be nicer for her than others, then you’re judging how to order states of the world.
This already has the makings of a ‘moral system’, even though there’s not a ‘thou shalt’ in sight. If you further think that how she’ll react to whatever you do for her can corroborate/refute your judgement of what things are nice(r than others) for her, your system seems to have, if not a ‘realist’ element, at least one that isn’t purely antirealist/subjectivist. It’s not utilitarianism (yet), but it seems to be heading in that sort of direction.
Very true! And this is precisely why I’m outright suspicious of non-naturalistic theoretical ethics and its magical “oughts”. In my case, in fact, I’m especially suspicious of Peter Singer and his simplistic form of hedonic utilitarianism, because it seems to me to rely overmuch on intuition pumps and too little on naturalized descriptions of how actual agents judge value.
Good thing Bayesians don’t need to identify the null hypothesis.
Upvoted for mentioning that ethics and epistemology are subject to similar questions. That’s a huge insight, familiar in academic philosophy, but AFAICT rare among self-identified rationalists and little discussed on LessWrong.
Of course, the academic philosophy way to handle the insight has usually been worse than useless: take the Mysterious Phenomenon of “epistemic normativity” as reason to believe in metaphysically basic moral normativity, then use that to ground epistemology, and thus go from one field that can be naturalized and one that is claimed to remain a mystery, to −1 fields naturalized and two fields made Mysteriously Metaphysical.
Whatever you call it, they’ve got to identify some alternative, even if only tacitly by following some approximation of it in their daily life.