There’s something about this sort of philosophy that I’ve wondered about for a while.
Do you think that deriving utility from the suffering of others (or, less directly, from activities that necessarily involve the suffering of others) is a valid value? Or is it intrinsically invalid?
That is, if we were in a position to reshape all of reality according to our whim, and decided to satisfy the values of all morally relevant beings, would we also want to satisfy the values of beings that derive pleasure/utility from the suffering of others, assuming we could do so without actually inflicting disutility/pain on any other beings?
And more concretely: in a “we are now omnipotent gods” scenario where we could, if we wanted to, create for sharks an environment where they could eat fish to their hearts’ content (and these would of course be artificial fish without any actual capacity for suffering, unbeknownst to the sharks) — would we do so?
Or would we judge the sharks’ pleasure from eating fish to be an invalid value, and simply modify them to not be predators?
The shark question is perhaps a bit esoteric; but if we substitute “psychopaths” or “serial killers” for “sharks”, it might well become relevant at some future date.
I’m not sure what you mean by “valid” here—could you clarify? I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn’t inferior to a world where beings are deriving the same amount of utility from some other activity that doesn’t affect other beings, all else held equal. However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don’t do anything that actually causes suffering.
> I’m not sure what you mean by “valid” here—could you clarify?
Sure. By “valid” I mean something like “worth preserving”, or “to be endorsed as a part of the complex set of values that make up human-values-in-general”.
In other words, in the scenario where we’re effectively omnipotent (for this purpose, at least), and have decided that we’re going to go ahead and satisfy the values of all morally relevant beings — are we going to exclude some values? Or exclude some beings on the basis of their values? For example: should we, in such a scenario, say: “we’ll satisfy the values of all the humans, except the psychopaths/sharks/whoever; we don’t find their values to be worth satisfying, so they’re going to be excluded from this”?
I would guess, for instance, that few people here would say: yeah, along with satisfying the values of all humans, let’s also satisfy the values of all the paperclip maximizers. We don’t find paperclip maximization to be a valid value, in that sense.
So my question to you is where you stand on all of that. Are there invalid values? Would you, in fact, try to satisfy Clippy’s values as well as those of humans? If not, how about sharks? Psychopaths? Etc.?
> I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn’t inferior to a world where beings are deriving the same amount of utility from some other activity that doesn’t affect other beings, all else held equal.
Ok. Actually, I could take that as an answer to at least some of my above questions, but if you want to expand a bit on what I ask in this post, that would be cool.
> However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don’t do anything that actually causes suffering.
Well, sure. But let’s keep this in the least convenient possible world, where such non-fundamental issues are somehow dealt with.