Additional/complementary argument in favour (and against the “any difference you make is marginal” argument): one’s personal example of viable veganism increases the chances of others becoming vegan (or partially so, which is still a benefit). Under plausible assumptions, this effect could be (potentially much) larger than the direct effect of personal consumption decisions.
I have to say that the claimed reductios here strike me as under-argued, particularly when there are literally decades of arguments articulating and defending various versions of moral anti-realism, which set out a range of ways in which the implications, though decidedly troubling, need not be absurd.
His 2018 lectures are also available on YouTube and seem pretty good so far, if anyone wants a complement to the book. The course website also has lecture notes and exercises.
To me, at least, it seems clear that you should not take the opportunities to reduce your torture sentence. After all, if you repeatedly decide to take them, you will end up with a 0.5 chance of being highly uncomfortable and a 0.5 chance of being tortured for 3^^^^3 years. This seems like a really bad lottery, and worse than the one that lets me have a 0.5 chance of having an okay life.
FWIW, this conclusion is not clear to me. To return to one of my original points: I don’t think you can dodge this objection by arguing from potentially idiosyncratic preferences, even perfectly reasonable ones; rather, you need it to be the case that no rational agent could have different preferences. Either that, or you need to be willing to override otherwise rational individual preferences when making interpersonal tradeoffs.
To be honest, I’m actually not entirely averse to the latter option: having interpersonal trade-offs determined by contingent individual risk-preferences has never seemed especially well-justified to me (particularly if probability is in the mind). But I confess it’s not clear whether that route is open to you, given the motivation for your system as a whole.
More generally, I think the basic property of non-real-valued consistent preference orderings is that they value some things “infinitely more” than others.
That makes sense, thanks.
So, I don’t think your concern about keeping utility functions bounded is unwarranted; I’m just noting that they are part of a broader issue with aggregate consequentialism, not just with my ethical system.
Agreed!
you just need to make it so the supremum of their value is 1 and the infimum is 0.
Fair. Intuitively though, this feels more like a rescaling of an underlying satisfaction measure than a plausible definition of satisfaction to me. That said, if you’re a preferentist, I accept this is internally consistent, and likely an improvement on alternative versions of preferentism.
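(Purely to illustrate why this reads to me as a rescaling: assuming an underlying real-valued satisfaction measure w, which is my construct rather than something you’ve committed to, any strictly increasing map with infimum 0 and supremum 1 does the job, e.g.:)

```python
import math

def rescaled_satisfaction(w: float) -> float:
    """Monotone rescaling of an (assumed) underlying real-valued
    satisfaction measure w into (0, 1): infimum 0, supremum 1,
    same ordering as w itself."""
    return 0.5 + math.atan(w) / math.pi

print(rescaled_satisfaction(-1e9))  # ~0.0 (approaches, never attains, the infimum)
print(rescaled_satisfaction(0.0))   # 0.5
print(rescaled_satisfaction(1e9))   # ~1.0 (approaches, never attains, the supremum)
```

The bounds come from the transform rather than from the underlying measure, which is why it feels to me like a rescaling rather than a definition of satisfaction.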
One issue with only having boundedness above is that the expected value of life satisfaction for an arbitrary agent would probably often be undefined or −∞.
Yes, and I am obviously not proposing a solution to this problem! More just suggesting that, if there are infinities in the problem that appear to correspond to actual things we care about, then defining them out of existence seems more like deprioritising the problem than solving it.
The utility monster feels an incredibly strong need to have everyone on Earth be tortured
I think this framing muddies the intuition pump by introducing sadistic preferences, rather than focusing just on unboundedness below. I don’t think it’s necessary to do this: unboundedness below means there’s a sense in which everyone is a potential “negative utility monster” if you torture them long enough. I think the core issue here is whether there’s some point at which we just stop caring, or whether that’s morally repugnant.
in order to act, you need more than just a consistent preference order over possible universes. In reality, you only get to choose between probability distributions over possible worlds, not specific possible worlds
Sorry, sloppy wording on my part. The question should have been “does this actually prevent us having a consistent preference ordering over gambles over universes” (even if we are not able to represent those preferences as maximising the expectation of a real-valued social welfare function)? We know (from lexicographic preferences) that “no-real-valued-utility-function-we-are-maximising-expectations-of” does not immediately imply “no-consistent-preference-ordering” (if we’re willing to accept orderings that violate continuity). So pointing to undefined expectations doesn’t seem to immediately rule out consistent choice.
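(A toy sketch of the lexicographic point, entirely my own illustration: score outcomes on two dimensions, with the first mattering “infinitely more” than the second. The resulting ordering is complete and transitive, yet no real-valued utility function represents it; continuity is the axiom that fails.)

```python
# Toy lexicographic preferences: an outcome is (primary, secondary), and the
# primary dimension lexically dominates the secondary one.
def strictly_preferred(a, b):
    """True if outcome a is strictly preferred to outcome b."""
    return a > b  # Python compares tuples lexicographically

outcomes = [(1, 0), (0, 10**100), (1, -5), (0, 0)]

# The ordering is complete and transitive, so ranking is well-defined:
print(sorted(outcomes, reverse=True))             # [(1, 0), (1, -5), (0, 10**100), (0, 0)]
print(strictly_preferred((1, -5), (0, 10**100)))  # True: no secondary gain compensates

# Yet no real-valued u() represents the full lexicographic order on R^2: it
# would need uncountably many disjoint intervals (one per primary value),
# each containing a distinct rational. The vNM axiom that fails is
# continuity, not completeness or transitivity.
```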
In an infinite universe, there’s already infinitely-many people, so I don’t think this applies to my infinite ethical system.
YMMV, but FWIW allowing a system of infinite ethics to get finite questions (which should just be a special case) wrong seems a very non-ideal property to me, and suggests something has gone wrong somewhere. Is it really never possible to reach a state where all remaining choices have only finite implications?
I’ll clarify the measure of life satisfaction I had in mind. Imagine if you showed an agent finitely-many descriptions of situations they could end up being in, and asked the agent to pick out the worst and the best of all of them. Assign the worst scenario satisfaction 0 and the best scenario satisfaction 1.
Thanks. I’ve toyed with similar ideas previously myself. The advantage, if this sort of thing works, is that it conveniently avoids a major issue with preference-based measures: that they’re not unique and therefore incomparable across individuals. However, this method seems fragile in relying on a finite number of scenarios: doesn’t it break if it’s possible to imagine something worse than whatever the current worst scenario is? (E.g. just keep adding 50 more years of torture.) While this might be a reasonable approximation in some circumstances, it doesn’t seem like a fully coherent solution to me.
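(A toy illustration of the fragility worry, with made-up numbers of my own: normalising against the worst and best of whatever finite scenario set has been imagined so far means every previously assigned value silently shifts as soon as a worse scenario is added.)

```python
def normalise(scores):
    """Map raw scenario scores onto [0, 1] using the worst and best of the
    currently imagined (finite) scenario set."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

# Hypothetical raw scores for three scenarios (illustrative only).
scenarios = [-100.0, 0.0, 50.0]
print(normalise(scenarios))  # [0.0, ~0.67, 1.0]

# Now imagine something worse (say, 50 more years of torture): every
# previously assigned satisfaction value gets rescaled.
scenarios.append(-1000.0)
print(normalise(scenarios))  # [~0.86, ~0.95, 1.0, 0.0]
```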
This seems pretty horrible to me, so I’m satisfied with keeping the measure of life satisfaction bounded.
IMO, the problem highlighted by the utility monster objection is fundamentally a prioritarian one. A transformation that guarantees boundedness above seems capable of resolving this, without requiring boundedness below (and thus avoiding the problematic consequences that boundedness below introduces).
Further, suppose you do decide to have an unbounded measure of life satisfaction
Given issues with the methodology proposed above for constructing bounded satisfaction functions, it’s still not entirely clear to me that this is really a decision, as opposed to an empirical question (which we then need to decide how to cope with from a normative perspective). This seems like it may be a key difference in our perspectives here.
So, if you’re trying to maximize the expected moral value of the universe, you won’t be able to. And, as a moral agent, what else are you supposed to do?
Well, in general terms the answer to this question has to be either (a) bite a bullet, or (b) find another solution that avoids the uncomfortable trade-offs. It seems to me that you’ll be willing to bite most bullets here. (Though I confess it’s actually a little hard for me to tell whether you’re also denying that there’s any meaningful tradeoff here; that case still strikes me as less plausible.) If so, that’s fine, but I hope you’ll understand why to some of us that might feel less like a solution to the issue of infinities, than a decision to just not worry about them on a particular dimension. Perhaps that’s ultimately necessary, but it’s definitely non-ideal from my perspective.
A final random thought/question: I get that we can’t expected utility maximise unless we can take finite expectations, but does this actually prevent us having a consistent preference ordering over universes, or is it potentially just a representation issue? I would have guessed that the vNM axiom we’re violating here is continuity, which I tend to think of as a convenience assumption rather than an actual rationality requirement. (E.g. there’s not really anything substantively crazy about lexicographic preferences as far as I can tell; they’re just mathematically inconvenient to represent with real numbers.) Conflating a lack of real-valued representations with a lack of consistent preference orderings is a fairly common mistake in this space. That said, if it really were just a representation issue, I would have expected someone smarter than me to have noticed by now, so (in lieu of actually checking) I’m assigning that low probability for now.
Re boundedness:
It’s important to note that the sufficiently terrible lives need to be really, really, really bad already. So much so that being horribly tortured for fifty years does almost exactly nothing to affect their overall satisfaction. For example, maybe they’re already being tortured for more than 3^^^^3 years, so adding fifty more years does almost exactly nothing to their life satisfaction.
I realise now that I may have moved through a critical step of the argument quite quickly above, which may be why this quote doesn’t seem to capture the core of the objection I was trying to describe. Let me take another shot.
I am very much not suggesting that 50 years of torture does virtually nothing to [life satisfaction—or whatever other empirical value you want to take as axiologically primitive; happy to stick with life satisfaction as a running example]. I am suggesting that 50 years of torture is terrible for [life satisfaction]. I am then drawing a distinction between [life satisfaction] and the output of the utility function that you then take expectations of. The reason I am doing this is that it seems to me that whether [life satisfaction] is bounded is a contingent empirical question, not one that can be settled by normative fiat in order to make it easier to take expectations.
If, as a matter of empirical fact, [life satisfaction] is bounded, then the objection I describe will not bite.
If, on the other hand, [life satisfaction] is not bounded, then requiring the utility function you take expectations of to be bounded forces us to adopt some form of sigmoid mapping from [life satisfaction] to “utility”, and this in turn forces us, at some margin, to not care about things that are absolutely awful (from the perspective of [life satisfaction]). (If an extra 50 years of torture isn’t sufficiently awful for some reason, then we just need to pick something more awful for the purposes of the argument.)
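(To make the “at some margin” point concrete, here is a toy sketch using a logistic map as the sigmoid; the functional form and the numbers are mine, purely for illustration.)

```python
import math

def utility(satisfaction: float) -> float:
    """Toy bounded utility: a logistic map from an (assumed) unbounded
    satisfaction scale into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-satisfaction))

already_awful = -50.0              # a life that is already terrible
much_worse = already_awful - 50.0  # say, another fifty years of torture

print(utility(already_awful) - utility(much_worse))
# ~1.9e-22: an enormous loss of [life satisfaction] registers as essentially
# nothing once squashed through the sigmoid.
```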
Perhaps because I didn’t explain this very well the first time, what’s not totally clear to me from your response is whether you think:
(a) [life satisfaction] is in fact bounded; or
(b) even if [life satisfaction] is unbounded, it’s actually ok to not care about stuff that is absolutely (infinitely?) awful from the perspective of [life-satisfaction] because it lets us take expectations more conveniently. [Intentionally provocative framing, sorry. Intended as an attempt to prompt genuine reflection, rather than to score rhetorical points.]
It’s possible that (a) is true, and much of your response seems like it’s probably (?) targeted at that claim, but FWIW, I don’t think this case can be convincingly made by appealing to contingent personal values: e.g. suggesting that another 50 years of torture wouldn’t much matter to you personally won’t escape the objection, as long as there’s a possible agent who would view their life-satisfaction as being materially reduced in the same circumstances.
Suggesting evolutionary bounds on satisfaction is another potential avenue of argument, but also feels too contingent to do what you really want.
Maybe you could make a case for (a) if you were to substitute a representation of individual preferences for [life satisfaction]? I’m personally disinclined towards preferences as moral primitives, particularly as they’re not unique, and consequently can’t deal with distributional issues, but YMMV.
ETA: An alternative (more promising?) approach could be to accept that, while it may not cover all possible choices, in practice we’re more likely to face choices with an infinite extensive margin than with an infinite intensive margin, and that the proposed method could be a reasonable decision rule for such choices. Practically, this seems like it would be acceptable as long as whatever function we’re using to map [life-satisfaction] into utility isn’t a sigmoid over the relevant range, and instead has a (weakly) negative second derivative over the (finite) range of [life satisfaction] covered by all relevant options.
(I assume (in)ability-to-take-expectations wasn’t intended as an argument for (a), as it doesn’t seem up to making such an empirical case?)
On the other hand, if you’re actually arguing for (b), then I guess that’s a bullet you can bite; though I think I’d still be trying to dodge it if I could. ETA: If there’s no alternative but to ignore infinities on either the intensive or extensive margin, I could accept choosing the intensive margin, but I’m inclined to think this choice should be explicitly justified, and recognised as tragic if it really can’t be avoided.
Re the repugnant conclusion: apologies for the lazy/incorrect example. Let me try again with better illustrations of the same underlying point. To be clear, I am not suggesting these are knock-down arguments; just that, given widespread (non-infinitarian) rejection of average utilitarianisms, you probably want to think through whether your view suffers from the same issues and whether you are ok with that.
Though there’s a huge literature on all of this, a decent starting point is here:
However, the average view has very little support among moral philosophers since it suffers from severe problems.
First, consider a world inhabited by a single person enduring excruciating suffering. The average view entails that we could improve this world by creating a million new people whose lives were also filled with excruciating suffering if the suffering of the new people was ever-so-slightly less bad than the suffering of the original person.
Second, the average view entails the sadistic conclusion: It can sometimes be better to create lives with negative wellbeing than to create lives with positive wellbeing from the same starting point, all else equal.
Adding a small number of tortured, miserable people to a population diminishes the average wellbeing less than adding a sufficiently large number of people whose lives are pretty good, yet below the existing average...
Third, the average view prefers arbitrarily small populations over very large populations, as long as the average wellbeing is higher. For example, a world with a single, extremely happy individual would be favored over a world with ten billion people, all of whom are extremely happy but just ever-so-slightly less happy than that single person.
Fair point re use cases! My familiarity with DSGE models is about a decade out-of-date, so maybe things have improved, but a lot of the wariness then was that typical representative-agent DSGE isn’t great where agent heterogeneity and interactions are important to the dynamics of the system, and/or agents fall significantly short of the rational expectations benchmark, and that in those cases you’d plausibly be better off using agent-based models (which has only become easier in the intervening period).
I (weakly) believe this is mainly because econometrists mostly haven’t figured out that they can backpropagate through complex models
Plausible. I suspect the suspicion of fitting more complex models is also influenced by the fact that there’s just not that much macro data + historical aversion to regularisation approaches that might help mitigate the paucity of data issues + worries that while such approaches might be ok for the sort of prediction tasks that ML is often deployed for, they’re more risky for causal identification.
My point was more that, even if you can calculate the expectation, standard versions of average utilitarianism are usually rejected for non-infinitarian reasons (e.g. the repugnant conclusion) that seem like they would plausibly carry over to this proposal as well. I haven’t worked through the details though, so perhaps I’m wrong.
Separately, while I understand the technical reasons for imposing boundedness on the utility function, I think you probably also need a substantive argument for why boundedness makes sense, or at least is morally acceptable. Boundedness below risks having some pretty unappealing properties, I think.
Arguments that utility functions are in fact bounded in practice seem highly contingent, and potentially vulnerable e.g. to the creation of utility-monsters, so I assume what you really need is an argument that some form of sigmoid transformation from an underlying real-valued welfare, u = s(w), is justified.
On the one hand, the resulting diminishing marginal utility at high levels of welfare will likely be broadly acceptable to those with prioritarian intuitions. But I don’t know that I’ve ever seen an argument for the anti-prioritarian results that follow from increasing marginal utility at very low levels of welfare. Not only would this imply that there’s a meaningful range where it’s morally required to deprioritise the welfare of the worse off; the deprioritisation would be greatest for the very worst off. Because the sigmoid function essentially saturates at very low levels of welfare, at some point you seem to end up in a perverse version of Torture vs. dust specks, where you think it’s ok (or indeed required) to have 3^^^3 people (whose lives are already sufficiently terrible) horribly tortured for fifty years without hope or rest, to avoid someone in the middle of the welfare distribution getting a dust speck in their eye. This seems, well, problematic.
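(A toy numerical version of this, with a logistic transform standing in for s(w) and every number invented for illustration; I’ve used 10^30 rather than 3^^^3, since the point doesn’t need anything nearly that large.)

```python
import math

def s(w: float) -> float:
    """Illustrative sigmoid transform of an underlying welfare measure w."""
    return 1.0 / (1.0 + math.exp(-w))

n_tortured = 10**30          # stand-in for 3^^^3 (vastly smaller, but enough)
baseline_w = -100.0          # lives that are already sufficiently terrible
torture_penalty = 50.0       # fifty more years of torture, in welfare terms
dust_speck_penalty = 1e-6    # trivial welfare loss for someone at the median (w = 0)

# Aggregate utility lost by torturing the already-worst-off:
loss_torture = n_tortured * (s(baseline_w) - s(baseline_w - torture_penalty))
# Utility lost by one dust speck for a median person:
loss_speck = s(0.0) - s(-dust_speck_penalty)

print(loss_torture)               # ~3.7e-14
print(loss_speck)                 # ~2.5e-07
print(loss_torture < loss_speck)  # True: the transform prefers the mass torture
```

(Since the population is the same in both options, it doesn’t matter whether you aggregate by summing or averaging.)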
Worth noting that many economists (including e.g. Solow, Romer, Stiglitz among others) are pretty sceptical (to put it mildly) about the value of DSGE models (not without reason, IMHO). I don’t want to suggest that the debate is settled one way or the other, but do think that the framing of the DSGE approach as the current state-of-the-art at least warrants a significant caveat emptor. Afraid I am too far from the cutting edge myself to have a more constructive suggestion though.
This sounds essentially like average utilitarianism with bounded utility functions. Is that right? If so, have you considered the usual objections to average utilitarianism (in particular, re rankings over different populations)?
Have you read s1gn1f1cant d1g1t5?
There is no value to a superconcept that crosses that boundary.
This doesn’t seem to me to argue in favour of using wording that’s associated with the (potentially illegitimate) superconcept to refer to one part of it. Also, the post you were responding to (conf)used both concepts of utility, so by that stage, they were already in the same discussion, even if they didn’t belong there.
Two additional things, FWIW:
(1) There’s a lot of existing literature that distinguishes between “decision utility” and “experienced utility” (where “decision utility” corresponds to preference representation) so there is an existing terminology already out there. (Although “experienced utility” doesn’t necessarily have anything to do with preference or welfare aggregation either.)
(2) I view moral philosophy as a special case of decision theory (and e.g. axiomatic approaches and other tools of decision theory have been quite useful in moral philosophy), so to the extent that your firewall intends to cut that off, I think it’s problematic. (Not sure that’s what you intend—but it’s one interpretation of your words in this comment.) Even Harsanyi’s argument, while flawed, is interesting in this regard (it’s much more sophisticated than Phil’s post, so I’d recommend checking it out if you haven’t already).
I’m hesitant to get into a terminology argument when we’re in substantive agreement. Nonetheless, I personally find your rhetorical approach here a little confusing. (Perhaps I am alone in that.)
Yes, it’s annoying when people use the word ‘fruit’ to refer to both apples and oranges, and as a result confuse themselves into trying to derive propositions about oranges from the properties of apples. But I’d suggest that it’s not the most useful response to this problem to insist on using the word ‘fruit’ to refer exclusively to apples, and to proceed to make claims like ‘fruit can’t be orange coloured’ that are false for some types of fruit. (Even more so when people have been using the word ‘fruit’ to refer to oranges for longer than they’ve been using it to refer to apples.) Aren’t you just making it more difficult for people to get your point that apples and oranges are different?
On your current approach, every time you make a claim about fruit, I have to try to figure out from context whether you’re really making a claim about all fruit, or just apples, or just oranges. And if I guess wrong, we just end up in a pointless and avoidable argument. Surely it’s easier to instead phrase your claims as being about apples and oranges directly when they’re intended to apply to only one type of fruit?
P.S. For the avoidance of doubt, and with apologies for obviousness: fruit=utility, apples=decision utility, oranges=substantive utility.
While I’m in broad agreement with you here, I’d nitpick on a few things.
Different utility functions are not commensurable.
Agree that decision-theoretic or VNM utility functions are not commensurable—they’re merely mathematical representations of different individuals’ preference orderings. But I worry that your language consistently ignores an older, and still entirely valid use of the utility concept. Other types of utility function (hedonic, or welfarist more broadly) may allow for interpersonal comparisons. (And unless you accept the possibility of such comparisons, any social welfare function you try to construct will likely end up running afoul of Arrow’s impossibility theorem).
Translate the axioms into statements about people. Do they still seem reasonable?
I’m actually pretty much OK with Axioms 1 through 3 being applied to a population social welfare function. As Wei Dai pointed out in the linked thread (and Sen argues as well), it’s 4 that seems the most problematic when translated to a population context. (Dealing with varying populations tends to be a stumbling block for aggregationist consequentialism in general.)
That said, the fact that decision utility != substantive utility also means that even if you accepted that all 4 VNM axioms were applicable, you wouldn’t have proven average utilitarianism: the axioms do not, for example, rule out prioritarianism (which I think was Sen’s main point).
Are you familiar with the debate between John Harsanyi and Amartya Sen on essentially this topic (which we’ve discussed ad nauseam before)? In response to an argument of Harsanyi’s that purported to use the VNM axioms to justify utilitarianism, Sen reaches a conclusion that broadly aligns with your take on the issue.
If not, some useful references here.
ETA: I worry that I’ve unduly maligned Harsanyi by associating his argument too heavily with Phil’s post. Although I still think it’s wrong, Harsanyi’s argument is rather more sophisticated than Phil’s, and worth checking out if you’re at all interested in this area.
I can see the appeal, but I worry that a metaphor where a single person is given a single piece of software, and has the option to rewrite it for their own and/or others’ purposes without grappling with myriad upstream and downstream dependencies, vested interests, and so forth, is probably missing an important part of the dynamics of real-world systems?
(This doesn’t really speak to moral obligations to systems, as much as practical challenges doing anything about them, but my experience is that the latter is a much more binding constraint.)