You seem to be assuming a form of utility that is linear with happiness, with trauma, with food-per-acre, with starving people, etc. I agree with you that if we calculate utility this way, what you say follows. It’s not clear to me that we ought to calculate utility this way.
Hmmm. There are other ways to calculate utility, yes, and some of them are very likely better than linear. But all of them should at least be monotonically increasing with increased happiness, lower trauma, etc. There isn’t some level of global happiness above which more happiness becomes worse rather than better, all else held constant. The increase may be smaller at higher starting levels of happiness, but it should still be an increase.
Such a system can either be bounded above by a maximum value, which it approaches asymptotically (such that no amount of global happiness alone can ever be worth, say, ten billion utilons, but it can come arbitrarily close to that amount), or it can be unbounded (in which case enough global happiness can counter any finite amount of negative effects). A linear system would be unbounded, and my comments above can be trivially changed to fit any unbounded system (but not necessarily a bounded one).
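To make the distinction concrete (these particular functional forms are just my illustration, not a claim about which one is right):

$$U_{\text{bounded}}(H) = U_{\max}\bigl(1 - e^{-kH}\bigr), \qquad U_{\text{unbounded}}(H) = cH, \qquad U_{\max}, k, c > 0.$$

Both are monotonically increasing in global happiness $H$, but the bounded form can never exceed $U_{\max}$ no matter how large $H$ grows, while the unbounded form eventually outweighs any fixed amount of negative utility.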
It’s not clear to me whether it should be a bounded or an unbounded system.
all of them should at least be monotonically increasing with increased happiness, lower trauma, etc
OK, so we agree that doubling the population doesn’t provide twice the utility, but you’re now arguing that it at least increases the utility (up to a possible upper bound, which may or may not exist).
This depends on the assumption that the utility-increasing aspects of increased population increase with population faster than the utility-decreasing aspects of increased population do. Which they might not.
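To spell out the assumption I mean (the notation here is mine, purely for illustration): write total utility as

$$U(n) = f(n) - g(n),$$

where $f$ collects the utility-increasing aspects of a population of size $n$ and $g$ the utility-decreasing ones. If $f$ grows more slowly than $g$ (say, $f$ levels off while $g$ keeps climbing), then $U$ eventually falls as $n$ rises, even though $f$ itself never stops increasing.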
...you know, it’s only after I read this comment that I realised that you’re suggesting that the utility-increasing aspects may not use the same function as the utility-decreasing aspects. That is, what I was doing was mathematically equivalent to first linearly combining the separate aspects, and only then feeding that single number to a monotonically increasing nonlinear function.
Now I feel somewhat silly.
But yes, now I see that you are right. There are possible ethical models (example: bounded asymptotic increase for positive utility, unbounded linear decrease for negative utility) wherein a larger Omelas could be worse than a smaller Omelas, above some critical maximum size. In fact, there are some functions wherein an Omelas of size X could have positive utility, while an Omelas of size Y (with Y>X) could have negative utility.
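Here is a quick numerical sketch of that example model; the functional forms and constants are my own arbitrary choices, picked only to show that such a crossover can happen:

```python
import math

# Toy model: positive utility from a population of n saturates asymptotically,
# while negative utility grows linearly without bound. All constants below are
# arbitrary illustrative values, not derived from anything in this discussion.

U_MAX = 1000.0   # asymptotic cap on total positive utility (arbitrary units)
K = 0.001        # how quickly the positive term approaches its cap
C = 0.05         # per-person contribution of the negative term

def omelas_utility(n: int) -> float:
    """Bounded asymptotic gain minus unbounded linear loss."""
    positive = U_MAX * (1.0 - math.exp(-K * n))
    negative = C * n
    return positive - negative

for n in (1_000, 10_000, 50_000):
    print(n, round(omelas_utility(n), 1))
# n = 1,000  gives roughly  +582: the smaller Omelas comes out net positive.
# n = 50,000 gives roughly -1500: past some critical size, the same model says
# the larger Omelas is net negative, even though both terms are monotone.
```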
Yup. Sorry I wasn’t clearer earlier; glad we’ve converged.