A number of days ago I was arguing with AngryParsley about how to value future actions; I thought it was obvious one should maximize the total utility over all people the action affected, while he thought it equally self-evident that maximizing average utility was better still. When I went to look, I couldn’t see any posts on LW or OB on this topic.
(I pointed out that this view would favor worlds ruled by a solitary, but happy, dictator over populous messy worlds whose average just happens to work out to be a little less than a dictator’s might be; he pointed out that if total was all that mattered, we might wind up favoring worlds where everyone is just 2 utilons away from committing suicide.)
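To make the disagreement concrete, here is a minimal Python sketch with made-up utility numbers (the figures are illustrative assumptions, not anything from the actual exchange):

```python
# Minimal sketch with made-up numbers: compare the two aggregation rules
# on the two example worlds from the comment above.

def total(utils):
    return sum(utils)

def average(utils):
    return sum(utils) / len(utils)

dictator_world = [100]              # one very happy dictator
populous_world = [90] * 1_000_000   # a populous, messy world with a slightly lower average

print(total(dictator_world), average(dictator_world))   # 100 100.0
print(total(populous_world), average(populous_world))   # 90000000 90.0

# Maximizing the average prefers the lone dictator (100.0 > 90.0);
# maximizing the total prefers the populous world (90,000,000 > 100).
```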
Have we really never discussed this topic?
Total utility has an obvious problem: it’s only meaningful to talk about relative utilities, so where do we put zero? (The choice is completely arbitrary.)
If zero is very low, then total utility maximization = make as many people as possible
If zero is very high, then total utility maximization = kill everyone
If zero is average utility, then total utility maximization = it doesn’t matter what you do
None of the three make any sense whatsoever.
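A minimal sketch, with made-up welfare numbers (the figures and the helper function below are illustrative assumptions, not anything from this comment), of how the choice of zero drives each of these outcomes:

```python
# Each person's utility is their welfare measured relative to an arbitrary zero point.

def total_utility(welfares, zero_point):
    return sum(w - zero_point for w in welfares)

current_world = [5, 6, 7]          # hypothetical existing population
bigger_world  = [5, 6, 7, 4, 4]    # same people plus two slightly worse-off extras
empty_world   = []                 # nobody exists

for zero in (-100, 100):           # a very low and a very high zero point
    scores = {
        "current": total_utility(current_world, zero),
        "bigger":  total_utility(bigger_world, zero),
        "empty":   total_utility(empty_world, zero),
    }
    print(f"zero = {zero:>4}: {scores} -> best: {max(scores, key=scores.get)}")

# zero = -100: every extra person adds value, so the bigger world wins
#              ("make as many people as possible").
# zero =  100: every person counts negatively, so the empty world wins
#              ("kill everyone").
```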
You’ve already decided where to put zero when you say this:
If zero is very high, then total utility maximization = kill everyone
That means that zero is the utility of not existing. Granted, it’s a lot easier to compare two different possible lives than it is to compare a possible life to that life not coming into existence, but by saying “kill anyone whose utility is less than zero” you’re defining zero utility as the utility of a dead person.
Also,
If zero is average utility, then total utility maximization = it doesn’t matter what you do
does not make sense to me. Utility is relative, yes, but it’s relative to states of the universe, not to other people. If average utility is currently zero, and then, let’s say, I recover from an illness that has been causing me distress, then my personal utility has increased, and average utility is no longer zero. Other people don’t magically lose utility when I happen to gain some. Total utility doesn’t renormalize in the way you seem to think it does.
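A tiny worked example with made-up numbers, just to spell that out: suppose three people start with utilities
$$u = (3,\,-1,\,-2), \qquad \sum_i u_i = 0, \qquad \bar{u} = 0.$$
After I recover from the illness, only my entry changes:
$$u = (5,\,-1,\,-2), \qquad \sum_i u_i = 2, \qquad \bar{u} = \tfrac{2}{3}.$$
Nobody else’s utility moved; the total and the average simply followed mine.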
If zero is very low, then total utility maximization = make as many people as possible
The repugnant conclusion is certainly worth discussing, but the other two:
If zero is very high, then total utility maximization = kill everyone
I think it would be a very bad idea to have a utility function such that the utility of an empty universe is higher than the utility of a populated non-dystopia; so any utility function for the universe that I might approve of should assign a pretty hefty negative value to empty universes. I don’t think that’s too awful a requirement.
If zero is average utility, then total utility maximization = it doesn’t matter what you do
This looks like a total non sequitur to me. What do you mean?
He means that if utility is measured in such a way that average utility is always zero, then total utility is always zero too, average utility being total utility divided by the number of agents.
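Spelled out: if utilities are rescaled so that the average is identically zero, then in every state of the world
$$\sum_{i=1}^{N} u_i = N\bar{u} = N \cdot 0 = 0,$$
and total-utility maximization has nothing left to choose between.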
Well, that’s not a very good utility function then, and taw’s three cases come nowhere near exhausting the range of possibilities.
So where do you put zero? By this one completely arbitrary decision you can collapse total utility maximization to one of these cases.
It gets far worse when you try to apply it to animals.
As for zero being very high: I’ve actually heard this argument many times about the existence of farm animals, which supposedly suffer so much that it would be better if they didn’t exist. It can just as easily be applied to wild animals, even though it’s far less common to do so.
With the animal zero set very low, total utility maximization turns us into a paperclip maximizer for insects, or whatever the simplest utility-positive life form is.
If non-existent beings have exactly zero utility, so that any being with less than zero utility ought not to have come into existence, then the choice of where to put zero is clearly not arbitrary.
Not really, but moral philosophers already have, at length.
Yeah, that doesn’t surprise me. But the context of our discussion was certainly different! I had suggested to AngryParsley that even if we had next to no understanding of how to modify our minds for the better, uploading would still be useful, since we could make 10 copies of ourselves with semi-random changes and let only the best one propagate. He objected: how did I plan to get rid of the excess 9? Murder was plainly awfully immoral, and my suggestion of forcing them to live out the standard fourscore and ten only somewhat less so, since by not allowing them to be immortal (or whatever the selected copy would get), I would be lowering the average. (Going by totals, this of course isn’t an issue.)
The Mere Addition Paradox suffices to refute the AVG view. From Nick’s link:
Scenario A contains a population in which everybody leads lives well worth living. In A+ there is one group of people as large as the group in A and with the same high quality of life. But A+ also contains a like number of people with a somewhat lower quality of life. In Parfit’s terminology A+ is generated from A by “mere addition”. Comparing A and A+ it is reasonable to hold that A+ is better than A or, at least, not worse.
For example, A+ could evolve from A by the choice of some parents to have children whose quality of life is good, though not as good as the average in A. We can even suppose that this makes the parents a little happier, while still lowering the overall average.
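A minimal numeric version of the A versus A+ comparison (the utility levels 10 and 7 and the group size are made-up illustrations, not Parfit’s):

```python
# Hypothetical numbers: the original group lives at utility 10, the merely
# added group at 7, and both groups are the same size.

def total(utils):
    return sum(utils)

def average(utils):
    return sum(utils) / len(utils)

N = 1000
A      = [10] * N             # everyone leads a life well worth living
A_plus = [10] * N + [7] * N   # the same group plus an equally large, somewhat worse-off group

print(total(A), average(A))            # 10000 10.0
print(total(A_plus), average(A_plus))  # 17000 8.5

# SUM says A+ is better (17000 > 10000); AVG says A+ is worse (8.5 < 10.0),
# i.e. the averagist has to call the mere addition of worthwhile lives a loss.
```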
Thanks for the link.
And you are right, by Jove, these philosophers really do like to go on about it. The whole issue could be summarized as the question of whether we should optimize for AVG(good) or for SUM(good), plus some variations: a question that ultimately cannot be answered. The length of the bibliography makes it almost comical.
The main problem I have with AVG is that it implies that as population increases, the inherent value of each individual decreases. Why should you suddenly be less important simply because someone else was just born? (I don’t mean the instrumental components of your value, but your inherent value.)
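One way to state that complaint formally: under the AVG rule the objective is
$$\bar{u} = \frac{1}{N}\sum_{i=1}^{N} u_i, \qquad \text{so} \qquad \frac{\partial \bar{u}}{\partial u_i} = \frac{1}{N},$$
and the weight attached to any one person’s welfare shrinks every time the population grows, even though nothing about that person has changed.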
What you “should” do depends on what your goal is.
Most biological organisms don’t maximise either of your proposed functions: their utility function comes down to how many great-grandchildren they have, not how many people they help.
I don’t really see how what most organisms do is relevant; we’re discussing what’s moral/ethical for human beings. This is quite relevant to deciding whether to, say, help out Africa (which, with its very high birth rates, is equivalent to plumping for total) or work on issues in the rest of the world (average).
As I understand it, there is widespread disagreement on that issue. Most humans don’t seem to have a clear idea of what their goals are, and of those that do, there is considerable disagreement about what those goals are.
Scientists can model human goals. The result seems to be that humans act so as to try to maximise their genes—and sometimes the memes they have been infected with. Basically all goal-directed behaviour is the result of some optimisation process—and in biology, that process usually involves differential reproductive success of replicators.
Human goal-seeking behaviour thus depends on the details of the memes the humans have been infected with—which mostly explains why humans vocalise a diverse range of goals.
Humans often spread the patterns they copy while working from hypotheses about how the world works that are long out of date. Also organisms often break and malfunction as a result of developmental problems and/or environmental stresses—so these theories are not always as good as we would like.