I’m not sure what you mean by “valid” here—could you clarify?
Sure. By “valid” I mean something like “worth preserving”, or “to be endorsed as a part of the complex set of values that make up human-values-in-general”.
In other words, in the scenario where we’re effectively omnipotent (for this purpose, at least), and have decided that we’re going to go ahead and satisfy the values of all morally relevant beings — are we going to exclude some values? Or exclude some beings on the basis of their values? For example: should we, in such a scenario, say: “we’ll satisfy the values of all the humans, except the psychopaths/sharks/whoever; we don’t find their values to be worth satisfying, so they’re going to be excluded from this”?
I would guess, for instance, that few people here would say: yeah, along with satisfying the values of all humans, let’s also satisfy the values of all the paperclip maximizers. We don’t find paperclip maximization to be a valid value, in that sense.
So my question to you is where you stand on all of that. Are there invalid values? Would you, in fact, try to satisfy Clippy’s values as well as those of humans? If not, how about sharks? Psychopaths? Etc.?
I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn’t inferior to a world where beings are deriving the same amount of utility from some other activity that doesn’t affect other beings, all else held equal.
Ok. Actually, I could take that as an answer to at least some of my questions above, but if you want to expand a bit on what I ask in this post, that would be cool.
However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don’t do anything that actually causes suffering.
Well, sure. But let’s keep this in the least convenient possible world, where such non-fundamental issues are somehow dealt with.