more importantly, hurt the interest of those who prefer to pretend that life is good and the world is just for their own selfish reasons.
Isn’t that sort of contradictory? If there are people who have selfish reasons to act like life is good in general, obviously their life at least must be good enough for them to be satisfied. That makes the whole thing subjective, unless you take a very naive total sum utility approach.
Not like any of us has a “destroy universe and end all suffering” button ready to press and just refuses to anyway.
Imagine a reverse Omelas in which there is one powerful king who is extremely happy and one billion people suffering horrific fates. The king’s happiness depends on their misery. As part of his oppression, he forbids any discussion of the poor quality of life in order to minimize suicides, as they harm his interests.
“That makes the whole thing subjective, unless you take a very naive total sum utility approach.”
Wouldn’t the same type of argument apply to a reverse Omelas? The sum-utility approach isn’t naive; it’s the most sensible one. When choosing between alternatives where you have skin in the game and need to think strategically, that’s exactly the approach you would take.
I don’t like total sum utility because it’s vulnerable to lots of hacks. Your “reverse Omelas” is essentially a utility monster scenario, and it’s vulnerable in exactly this way: make the powerful king happy enough, and total utility says the situation is good and should not be changed.
But also, I think morals make more sense as a guide to how we should strive to change the world we’re in (within the allowances of its own rules of self-consistency) than as a way to judge the world itself. We don’t know precisely why the world works the way it does. Maybe it really couldn’t work any other way. But even if a happier universe were possible, none of us could just teleport there and exist in it, as its laws would probably be incompatible with our life. If there were no rules or limits, ethics would be easy: you could simply make everyone happy all the time. It’s because there are rules and limits that asking questions like “should I do X or Y? Which is better?” makes sense and is necessary. As things are, since a net-positive life seems possible in this universe, we don’t really have a reason to think that such a thing can’t be made available to more people, and ideally to everyone.
It’s not a utility monster scenario. The king doesn’t receive more happiness than other beings per unit of resources; he’s a normal human being, just like all the others. While total utility allows utility monsters, which seems bad, your method of “if some of the people are happy, then it’s just subjective” allows a reverse Omelas, which seems worse. It reminds me a bit of deontologists who criticize utilitarianism while allowing much worse things if applied consistently. Regarding the second part, I’m not against rules or limits or even against suffering. I just think that a much better game is possible, one that respects conscious beings more. No more bullshit like kids who are born with cancer and spend their whole lives dying in misery, or sea turtles that come into existence only to be eaten by predators, and so on and so forth. Video games are a good example: they have rules, limitations, and loss conditions, but they are engineered with the player in mind and for the player’s benefit, while in life, conscious beings are not promised interesting or fair experiences and might just be randomly tortured.
OK, sorry, I phrased that wrong. I know the scenario you described isn’t a utility monster one, but it can be turned into one simply by turning up the knob of how much the king enjoys himself, all while remaining just as unfair. So total utility doesn’t really capture the thing you feel is actually wrong here; that’s my point. I actually wrote something more on this (though in a humorous tone) in this post.
I don’t mean that “it’s subjective” fixes everything. I just mean that it’s also the reason why, in my opinion, it’s not entirely right to write off an entire universe based on total utility. My intuition is that even if we had a universe with net negative total utility, it still wouldn’t be right to just snap our fingers with the Infinity Gauntlet and make it disappear, unless every single sentient being in it, individually, was genuinely miserable to the point of wanting to die but being unable to.
“Regarding the second part, I’m not against rules or limits or even against suffering.”
The reason I bring up rules and limits is to stress how much our morality (the same morality by which we judge the wrongness of the universe) is born of that universe’s own internal logic. For example, if we see someone drowning, we think it’s right to help them because we know that drowning is something that can happen to you without your consent (and because we estimate that, on expectation, you are more likely to wish to live than to die). I don’t mean that we can’t judge the flaws of the universe, but that our moral instincts are probably a better guide to what it takes to improve this universe from the inside (since they were shaped by it) than to what it would take to create a better one from scratch.
“Video games are a good example; they have rules and limitations and loss conditions, but they are engineered with the player in mind and for his benefit, while in life, conscious beings are not promised interesting or fair experiences and might be just randomly tortured.”
True, but with the same power a programmer has over a game, you could just as well engineer a game to be positively torturous to its players: purposefully frustrating and unfair. As things are, I think our universe is simply not intelligently designed, neither to make us happy nor to make us miserable. Absent intent, this is what indifference looks like; and I think the asymmetry (where even indifference seems to result in more suffering than pleasure) is just a natural statistical outcome of enjoyable states being rarer and more specific than neutral or painful ones.