Cool. I don’t really believe in average happiness either (but I’m a lot closer to it than to valuing total happiness). I wouldn’t steal from the poor to give to the rich, even if the rich are more effective at using resources.
I think that saying “I value improving the lives of those who already exist” is a good way to articulate your desire to increase average utility, while also spelling out the fact that you find it bad to increase it by other means, like killing unhappy people.
It also articulates the fact that you would (I assume) be opposed to creating a person who is tortured 23 hours a day in a world filled completely with people being tortured 24 hours a day, even though that would increase average utility.
I also assume that while you believe in something like average utility, you don’t think that a universe with only one person with a utility of 100 is just as morally good as a universe with a trillion people who each have a utility of 100. So you probably also value having more people to some extent, even if you value it incrementally much less than average utility (I refer to this value as “number of worthwhile lives”).
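The arithmetic behind these two intuitions can be sketched in a few lines (a toy model; the utility numbers are made up for illustration):

```python
def total_utility(utilities):
    return sum(utilities)

def average_utility(utilities):
    return sum(utilities) / len(utilities)

# A world of a million people tortured 24 hours a day (say utility -100 each).
world = [-100] * 1_000_000
# Adding one person tortured "only" 23 hours a day (say utility -96)
# raises the average...
world_plus = world + [-96]
assert average_utility(world_plus) > average_utility(world)
# ...even though by most intuitions it makes the world worse.

# And average utility alone cannot distinguish population sizes:
assert average_utility([100]) == average_utility([100] * 1_000_000)
# ...while total utility can, which is why "number of worthwhile lives"
# has to enter as a separate value:
assert total_utility([100] * 1_000_000) > total_utility([100])
```

This is why pure averagism endorses the 23-hour-torture case and is indifferent between the one-person and trillion-person worlds, and why some extra term is needed for each intuition.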
It sounds like you must also value equality for its own sake, rather than as a side effect of diminishing marginal utility. I think I am also coming around to this way of thinking. I don’t think equality is infinitely valuable, of course; it needs to be traded off against other values. But I do think that, for example, a world where people are enslaved to a utility monster is probably worse than one where they are free, even if that diminishes total aggregate utility.
In fact, I’m starting to wonder whether total utility is a terminal value at all, or whether increasing it is just a side effect of wanting to simultaneously increase average utility and the number of worthwhile lives.
Agreed on all counts.
(Apart from: I wouldn’t say that I was maximising others’ utility. I’d say I was maximising their happiness, freedom, fulfilment, etc. A utility function is an abstract mathematical thing. We can prove that rational agents behave as if they were trying to maximise some utility function. Since I’m trying to be a rational agent, I try to make sure my ideas are consistent with a utility function, and so I sometimes talk of “my utility function”.
But when I consider other people I don’t value their utility functions. I just directly value their happiness, freedom, fulfilment, and so on. I don’t value their utility functions because: one, they’re not rational, so they don’t have utility functions; two, valuing each other’s utility would lead to difficult self-reference; but mostly three, on introspection I really do just value their happiness, freedom, fulfilment, etc., and not their utility.
The sense in which they do have utility is that each contributes utility to me. But then there’s no such thing as “an individual’s utility” because (as we’ve seen) the utility other people give to me is a combined function of all of their happiness, freedom, fulfilment, and so on.)
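The self-reference worry can be made concrete with a toy sketch (illustrative only; the names, weights, and numbers are made up): if each agent’s utility is defined partly in terms of the other’s, a naive evaluation never bottoms out.

```python
alice_happiness, bob_happiness = 10.0, 20.0

# Each utility is defined partly in terms of the other's: a circular definition.
def alice_utility():
    return alice_happiness + 0.5 * bob_utility()

def bob_utility():
    return bob_happiness + 0.5 * alice_utility()

try:
    alice_utility()
except RecursionError:
    print("infinite regress: the definitions never bottom out")
```

With the 0.5 discount the circularity can in principle be resolved as a fixed point (here alice ≈ 26.67, bob ≈ 33.33), but solving that system is exactly the kind of “difficult self-reference” being avoided by valuing happiness, freedom, etc. directly.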
I think I understand. I tend to use the word “utility” to mean something like “the sum total of everything a person values.” Your use is probably more precise, and closer to the original meaning.
I also get very nervous about the idea of maximizing utility, because I believe wholeheartedly that value is complex. If we define utility too narrowly and then try to maximize it, we might lose something important. So right now I try to “increase” or “improve” utility rather than maximize it.
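That worry about narrow maximization can be illustrated with a toy sketch (the worlds, values, and numbers are invented for illustration): a maximizer given only one term of a complex value function will happily sacrifice everything else.

```python
# Two candidate worlds, scored on two of the things we actually value.
worlds = [
    {"happiness": 90, "freedom": 10},
    {"happiness": 80, "freedom": 80},
]

# A narrow proxy that only counts happiness picks the low-freedom world...
narrow_best = max(worlds, key=lambda w: w["happiness"])

# ...while a broader (still crude) objective picks the other one.
broad_best = max(worlds, key=lambda w: w["happiness"] + w["freedom"])

print(narrow_best)  # {'happiness': 90, 'freedom': 10}
print(broad_best)   # {'happiness': 80, 'freedom': 80}
```

The point is not that the broader sum is correct either, but that whatever term the objective omits, a maximizer treats as worthless.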