Reframing Average Utilitarianism
World A has a million people with an average utility of 10; world B has 100 people with an average utility of 11. Average utilitarianism says world B is preferable to world A. This seems counterintuitive, since world B has far less total utility, but what if we reframe the question?
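For concreteness, here is the arithmetic behind that intuition, using the numbers above:

\[
\text{Total}(A) = 1{,}000{,}000 \times 10 = 10{,}000{,}000, \qquad \text{Total}(B) = 100 \times 11 = 1{,}100,
\]

so world B wins on the average (11 vs 10) while world A wins on the total by a factor of roughly 9,000.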
Imagine you are behind a veil of ignorance and have to choose which world you will be instantiated into, becoming one citizen randomly selected from the population. From this perspective, world B is the obvious choice: even though it has far less total utility than world A, you personally get more utility by being instantiated into world B. This remains true even if world B only has 1 citizen, though most people, presumably, have “access to good company” in their utility function.
This reframing seems to invert my intuitions, though this may just mean I am more selfish than most.
The reply I tend to make here goes something like this:
If you’re looking at a veil of ignorance and choosing between world A and world B, it seems you should also be veiled as to whether or not you exist. Whatever probability p you have of existing in world B, you have probability 10000p of existing in world A, because it has 10,000 times the number of existing people. So opting for world A seems a much better bet.
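A sketch of the expected-utility calculation this implies, using the populations from the opening comment: if choosing world B gives you probability p of ever existing, then

\[
EU(\text{choose } A) = 10000p \times 10 = 100000p, \qquad EU(\text{choose } B) = p \times 11 = 11p,
\]

so once existence itself is behind the veil, world A wins by a factor of roughly 9,000.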
As an aside, average views tend not to be considered ‘goers’ by those who specialise in population ethics, but for different reasons:
1) It looks weird with negative numbers. On the average view, you can improve a world where everyone has lives not worth living (−10, say) by adding people whose lives are also not worth living, but not quite as bad (e.g. −9), since they raise the average (see the worked example after this list).
2) It also looks pretty weird with positive numbers. Mere addition (adding new lives that are worth living, without affecting anyone else) seems like it should be okay, even if it drags down the average.
3) Inseparability also looks costly. If the averaging is over all beings, then the decision as to whether humanity should euthanise itself becomes intimately sensitive to whether there’s an alien species in the next supercluster who are blissfully happy.
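To make 1) concrete, a minimal worked example with made-up numbers: start with N people at −10 and add M people at −9. The new average is

\[
\frac{-10N - 9M}{N + M} > -10 \quad \text{for any } M > 0;
\]

with N = M = 100 the average rises from −10 to −9.5, so the average view scores the addition as an improvement even though every life in the new world is still not worth living.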
1)
Just to be sure I’m understanding you correctly: what you’re saying is that average utilitarianism prescribes creating lives that are not worth living so long as they are less horrible than average. This does seem weird. Creating a life that is not worth living should be proscribed by any sane rule!
2)
I don’t find this objection super compelling. Wasn’t average utilitarianism proposed precisely because people find mere addition unattractive?
3)
Another fine point. People with lives worth living shouldn’t feel the need to kill themselves when they learn they are dragging down the average. I believe average preference utilitarianism is a patch for this, though.
I can think of various patches, but I should probably read more on the topic first. Do you have any recommendations for a textbook or other book on population ethics?
Average utilitarianism does not prescribe “creating lives that are not worth living” as long as they are better than average. Rather, it says that a life is worth living if it is better than average, and not worth living if it is worse than average.
Which of course is one of the most absurd claims ever made.
I don’t think average utilitarianism says that at all. It says that adding a new life to the universe is a good thing overall if that new life is better than average, but why should we equate “worth living” with “on the whole, improves the universe by existing”?
Because you just said “is a good thing overall”. It doesn’t make sense to say that a life is a good thing overall, but is not worth living.
I don’t see why not. (I can see some handwavy intuitive arguments for why not, but they implicitly assume something more like total utilitarianism than like average utilitarianism.)
If someone’s life is (for them) delightful and fulfilling, then it seems to me eminently reasonable to call it “worth living” whatever its other consequences. (Of course this means that when contemplating a life we should consider other things besides whether it’s worth living.)
Counterargument 1: “If they get that delight and fulfilment at the cost of greater net harm to other people, then they are morally bankrupt and that on its own makes their life not worth living.”
Response: Maybe. But someone whose only offence is that their life is less than averagely good isn’t exhibiting any moral flaw on that account, so this is irrelevant.
Counterargument 2: “If your life does more harm than good then I don’t care how nice it is for you, it’s bad overall and that means it isn’t worth living, not for moral reasons but just because a life that does net harm can’t be a good one.”
Response: Maybe. But “more harm than good” only follows from “make the world worse overall” if you assume something more Total than Average, so you can’t use this as an argument against average utilitarianism without begging the question.
Counterargument 3: “No, look, this really isn’t about good minus bad. If the effect of your life is to make the world worse then it just can’t be worth living.”
Response: That isn’t a counterargument, it’s just a restatement of the claim I’m arguing against.
It seems like the question is whether average utilitarianism is supposed to recommend actions, or is just a way of measuring the amount of utility in the world. If it is just a way of measuring, there is not necessarily any conflict between average and total utilitarianism: you can say both “this is the average amount of utility” and “this is the total amount of utility.” But my understanding is that they are supposed to recommend actions: average utilitarianism would say, “it is good to do things that increase the average,” and “it is bad to do things that decrease the average,” and total utilitarianism would say, “it is good to do things that increase the total,” and “it is bad to do things that decrease the total.”
Taken in this way, I actually don’t agree with either kind, but I think that total utilitarianism is more reasonable, because it would tend to recommend good things and advise against bad things, while the average kind would depend on what I would consider to be irrelevant facts.
In any case, you could say that this still wouldn’t say anything about whether a life is worth living, because “living your life” is not actually something you can choose to do or not do. Still, taken in this way average utilitarianism would say it is good to produce lives that are above the average, and bad to produce lives that are below it. So if we assume that you should want good things to happen and not bad ones, then you should have general desires in favor of living a life above average, and of not living a life below average. In other words, suppose you have a chance to be reincarnated, and you are asked, “do you want to live this life, or would you prefer not to exist?” If you prefer to live the life even though it is below average, you are not following the recommendations of average utilitarianism, at least if I am right about the recommendations as stated above.
I agree with you that whether or not a life is worth living depends on the content of that life, and not on the average. But that does seem to me to refute average utilitarianism taken as recommending actions. I may not have a choice about my own life, but I have a choice about others: e.g. if I have a child whose life is worth living, having that child seems to me definitely a good thing to do, precisely because it is good for the child.
Not that long ago I read something where someone claimed that employees have a “right to a living wage,” and explained this to mean “one that allows them at least an average standard of living in the society in which they live.” But this is absurd for purely mathematical reasons: in any society with any inequality at all, some people must be living below the average, so you cannot possibly guarantee everyone at least the average. (With incomes of 1, 1, and 10, the average is 4 and two people out of three fall below it.) I think average utilitarianism is absurd in a very similar way, except that it is not about rights. A measure of goodness that automatically counts every below-average life as a bad life, no matter how good it is, is not a good measure of goodness.
No indeed; you are favouring your own interests rather than producing the best possible world. As, in fact, we all do most of the time, whether or not we are average utilitarians.
If you are considering the welfare of some specific person who specifically matters to you (e.g., your own child, whether actual or hypothetical) then again you are probably not actually going to be aiming to maximize average utility, whether or not you are an average utilitarian.
If, considering people to whom you have no particular connection, your intuition is that it would always be a good thing to add to the world a life that (seen from the inside) is worth living—well, that means you aren’t an average utilitarian. That’s obviously OK; most people are not average utilitarians. But “entirelyuseless is not an average utilitarian” is a quite different claim from “average utilitarianism is absurd”.
Again: average utilitarianism does not say that they are bad lives. It says that if you are choosing between a world with them and an otherwise identical world without them, you should choose the latter. That is not the same statement.
This has always been how I thought about it. (I consider myself approximately an Average Utilitarian, with some caveats that this is more descriptive than normative)
One person who disagreed with me said: “you are not randomly born into ‘a person’, you are randomly born into ‘a collection of atoms.’ A world with fewer atoms arranged into thinking beings increases the chance that you get zero utility, not whatever the average utility from among whichever patterns have developed consciousness”
I disagree with that person, but just wanted to float it as an alternate way of thinking.
It’s the difference between SIA (the Self-Indication Assumption) and SSA (the Self-Sampling Assumption). If you work with SIA, then you’re randomly chosen from all possible beings, and so in world B you’re less likely to exist.
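A sketch of how the two assumptions cash out here, using the populations from the opening comment: under SSA you are sampled only from the observers who actually exist in the chosen world, so your expected utility is just that world’s average, while under SIA your chance of existing at all scales with the world’s population:

\[
EU_{\mathrm{SSA}}(A) = 10, \quad EU_{\mathrm{SSA}}(B) = 11; \qquad EU_{\mathrm{SIA}}(A) \propto 10^6 \times 10, \quad EU_{\mathrm{SIA}}(B) \propto 100 \times 11.
\]

So SSA-style reasoning favours world B, and SIA-style reasoning favours world A.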
Sure. By hypothesis, you would like to make the future more like world B, on the assumption that you get to live there. That’s what “higher utility” means.
But this set of assumptions seems a little too strong to be helpful when deciding questions like “is it the right thing to create a person who’s going to have a fairly average set of experiences?” Because someone with total-utilitarian preferences would do a better job of satisfying their own preferences (i.e. have a higher utility score) by creating this person.
So the question is not “Would I want there to be more people if it lowered my utility-number?” The question is “Would there being more people lower my utility-number?”