Hmmm. To avoid Omelas, equality would have to be fairly heavily weighted; any finite weighting given to equality, however, will simply mean that Omelas is only possible given a sufficiently large population (the cost of the inequality being balanced by the extra happiness of the extra inhabitants).
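To make that concrete, here's a toy calculation (all the numbers are invented, and the inequality cost is assumed to stay fixed as the population grows):

```python
# A toy total-utilitarian tally for Omelas: N happy inhabitants, one miserable
# child, and a finite penalty weight on the inequality itself. All numbers are
# invented, and the inequality cost is assumed not to grow with the population
# (which is exactly the assumption questioned further down the thread).

def omelas_total(n, happiness_each=1.0, child_misery=1_000.0, equality_weight=100.0):
    happiness = n * happiness_each
    penalty = child_misery + equality_weight * child_misery
    return happiness - penalty

for n in (10_000, 100_000, 1_000_000):
    print(n, omelas_total(n))
# Goes from negative to positive once n is large enough to swamp the finite penalty.
```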
Well, if we’re really going to take Omelas seriously as our test case, then presumably we also have to look at how much that “extra happiness” (or whatever else we’re putting in the plus column) is reduced by those who walk away from it, and by those who are traumatized by it, and so forth. It might turn out that increasing the population doesn’t help.
But that’s just a quibble. I basically agree: once we swallow the assumption that, for some reason we neither understand nor can ameliorate, the happiness of the many ineluctably depends on the misery of the few, then a total-utilitarian approach either says that equality is the most important factor in utility (which runs into the problem you describe), or endorses the few being miserable.
That’s quite an assumption to swallow, though. I have no reason to believe it’s true of the world I live in.
A weaker version that might be true of the world I actually live in is that concentrating utility-generating resources in fewer hands results in higher total utility-from-all-sources-other-than-equality (Ua) but more total-disutility-from-inequality (Ub). But it’s not quite as clear that our (Ua, Ub) preferences are lexicographic.
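For what it's worth, "lexicographic" here just means that any reduction in Ub beats any increase in Ua, with Ua only breaking ties. A throwaway comparison against a weighted sum, with invented numbers:

```python
# Lexicographic preference over (Ua, Ub): minimise the inequality-disutility Ub
# first, and only use Ua to break ties. Contrast with a weighted sum, where a
# large enough Ua can always buy off a worse Ub. All numbers are invented.

def lexicographic_key(option):
    ua, ub = option
    return (-ub, ua)            # lower Ub wins outright; Ua only breaks ties

def weighted_score(option, w=10.0):
    ua, ub = option
    return ua - w * ub

concentrated = (1_200.0, 50.0)  # high Ua, lots of inequality-disutility
egalitarian = (600.0, 5.0)      # lower Ua, little inequality-disutility

print(max([concentrated, egalitarian], key=lexicographic_key))  # picks egalitarian
print(max([concentrated, egalitarian], key=weighted_score))     # picks concentrated
```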
Well, if we’re really going to take Omelas seriously as our test case, then presumably we also have to look at how much that “extra happiness” (or whatever else we’re putting in the plus column) is reduced by those who walk away from it, and by those who are traumatized by it, and so forth. It might turn out that increasing the population doesn’t help.
Doubling the population should double the happiness, double the trauma, and double the number of people who walk away. The end result should be (assuming a population high enough that the Law of Large Numbers is a reasonable heuristic) about twice the utility.
A weaker version that might be true of the world I actually live in is that concentrating utility-generating resources in fewer hands results in higher total utility-from-all-sources-other-than-equality (Ua) but more total-disutility-from-inequality (Ub). But it’s not quite as clear that our (Ua, Ub) preferences are lexicographic.
Consider the case of farmland; larger farms produce more food-per-acre than smaller farms. (Why? Because larger farms attract commercial farmers with high-intensity farming techniques, and those farmers can buy better farming equipment with their higher profits.) Now, in the case of farmland, the optimal scenario is not equality; you don’t want everyone to have the same amount of farmland, you want those who are good at farming to have most of it. (For a rather dramatic example of this, see the Zimbabwe farm invasions.)
On the other hand, consider the case of food itself. Here, equality is a lot more important; giving one man food for a hundred while ninety-nine men starve is clearly a failure case, as a lot of food ends up going rotten and ninety-nine people end up dead.
So the optimal (Ua, Ub) ordering depends on exactly what it is that is being ordered; there is no universally correct ordering.
You seem to be assuming a form of utility that is linear with happiness, with trauma, with food-per-acre, with starving people, etc. I agree with you that if we calculate utility this way, what you say follows. It’s not clear to me that we ought to calculate utility this way.
Hmmm. There are other ways to calculate utility, yes, and some of them are very likely better than linear. But all of them should at least be monotonically increasing with increased happiness, lower trauma, etc. There is no level of global happiness above which, all else held constant, more happiness makes things worse. The increase may be smaller for higher starting levels of happiness, but it should still be an increase.
Such a system can either be bounded above by a maximum value which it approaches asymptotically (such that no amount of global happiness alone can ever be worth, say, ten billion utilons, but it can approach arbitrarily close to that amount), or it can be unbounded (in which case enough global happiness can counter any finite amount of negative effects). A linear system would be unbounded, and my comments above can be trivially changed to fit any unbounded system (but not necessarily a bounded one).
It’s not clear to me whether it should be a bounded or an unbounded system.
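For concreteness, the two shapes I have in mind look something like this (the constants are arbitrary; only the shapes matter):

```python
import math

# Two monotonically increasing mappings from global happiness H to utility.
# The constants are arbitrary; only the shapes matter.

CAP = 10_000_000_000.0  # the "ten billion utilons" ceiling in the bounded case

def bounded_utility(happiness, scale=1_000_000.0):
    # Approaches CAP asymptotically: happiness alone never reaches the ceiling.
    return CAP * (1.0 - math.exp(-happiness / scale))

def unbounded_utility(happiness, slope=1.0):
    # Linear, hence unbounded: enough happiness outweighs any finite negative.
    return slope * happiness

for h in (1e5, 1e6, 1e7):
    print(h, round(bounded_utility(h)), round(unbounded_utility(h)))
```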
all of them should at least be monotonically increasing with increased happiness, lower trauma, etc
OK, so we agree that doubling the population doesn’t provide twice the utility, but you’re now arguing that it at least increases the utility (up to an upper bound, which may or may not exist).
This depends on the assumption that the utility-increasing aspects of a larger population grow faster with population than the utility-decreasing aspects do. Which they might not.
...you know, it’s only after reading this comment that I realised you’re suggesting that the utility-decreasing aspects may not use the same function as the utility-increasing aspects. That is, what I was doing was mathematically equivalent to first linearly combining the separate aspects, and only then feeding that single number to a monotonically increasing nonlinear function.
Now I feel somewhat silly.
But yes, now I see that you are right. There are possible ethical models (for example: bounded asymptotic increase for positive utility, unbounded linear decrease for negative utility) wherein a larger Omelas could be worse than a smaller Omelas, above some critical size. In fact, there are functions wherein an Omelas of size X could have positive utility while an Omelas of size Y (with Y > X) could have negative utility.
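To spell that example model out (with constants invented purely for illustration): the bounded gain from happiness saturates while the linear loss keeps growing, so the total eventually turns negative.

```python
import math

# The example model above: bounded asymptotic gain from the citizens' happiness,
# unbounded linear loss from trauma and from those who walk away.
# All constants are invented for illustration.

CAP = 1_000.0            # asymptotic bound on utility from happiness
SCALE = 10_000.0         # how quickly that bound is approached
LOSS_PER_CAPITA = 0.02   # linear disutility per citizen (trauma, walking away)

def omelas_utility(population):
    gain = CAP * (1.0 - math.exp(-population / SCALE))
    loss = LOSS_PER_CAPITA * population
    return gain - loss

for n in (1_000, 10_000, 50_000, 200_000):
    print(n, round(omelas_utility(n), 1))
# Positive for a small Omelas, negative once it grows past a critical size.
```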
Yup. Sorry I wasn’t clearer earlier; glad we’ve converged.