Retracted last comment because I realized I was misreading what you were saying.
Let me approach this from another direction:
You’re basically supposing that a 1/N chance of being tortured is morally equivalent to a 1/N chance of being tortured with an implicit guarantee that somebody is going to get tortured. I think it is consistent to regard a 1/N chance, for some sufficiently large N, of me being tortured as less important than one person out of N actually being tortured.
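To make that asymmetry concrete, here’s a minimal sketch (the uniform-lottery framing and the value of N are just my illustrative assumptions, not part of your setup):

```python
# Personal risk is 1/N in both setups; what differs is the expected
# amount of torture that actually occurs. Numbers are made up.

N = 1_000_000  # illustrative population size

# Setup A: I face a 1/N chance of torture, with no guarantee anyone suffers.
p_me_tortured_A = 1 / N
expected_tortured_A = 1 / N  # usually nobody is tortured at all

# Setup B: one of N people (possibly me) is picked uniformly and tortured.
p_me_tortured_B = 1 / N
expected_tortured_B = 1.0    # somebody is tortured with certainty

print(p_me_tortured_A == p_me_tortured_B)        # True: identical from my selfish view
print(expected_tortured_A, expected_tortured_B)  # 1e-06 vs. 1.0
```

From my own selfish viewpoint the two setups look identical; the difference only shows up in how much actual torture you expect to occur, and that is where I’m claiming the moral weight sits.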
If you create a precise duplicate of the universe in a simulation, I don’t think we have gained anything; I consider two instances of indistinguishable utility to be non-cumulative. If you create a precise duplicate of me in a simulation and then torture that duplicate, utility decreases.
This may seem to favor “average” utility, but I think the distinction lies in the fact that torturing an entity represents not merely lower utility but disutility; because I regard a duplicate universe as adding no utility, the negative utility shows up as a net loss.
I’d be hard-pressed to argue for the “indistinguishability” part, though I can sketch where the argument would lie: because utility exists as a product of the mind, and duplicate minds are identical from an internal perspective, an additional indistinguishable mind doesn’t add anything. Of course, this argument may require buying into the anthropic perspective.
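As a toy bookkeeping version of the duplicate-universe position above (the function and the numbers are made up purely for illustration, not any formal model):

```python
# An exact duplicate contributes no additional utility, but a duplicate
# that is tortured contributes disutility, so "duplicate and torture"
# comes out as a net loss. All values are arbitrary illustrative units.

U_WORLD = 100.0            # utility of the original universe
TORTURE_DISUTILITY = -5.0  # made-up magnitude of the harm

def total_utility(world_utility, duplicate=False, duplicate_tortured=False):
    """Aggregate utility under the 'indistinguishable copies don't add' rule."""
    total = world_utility
    if duplicate:
        # An exact, indistinguishable duplicate adds no utility of its own.
        if duplicate_tortured:
            # But a tortured copy has diverged, and its suffering counts.
            total += TORTURE_DISUTILITY
    return total

print(total_utility(U_WORLD))                                           # 100.0 baseline
print(total_utility(U_WORLD, duplicate=True))                           # 100.0 exact copy adds nothing
print(total_utility(U_WORLD, duplicate=True, duplicate_tortured=True))  # 95.0  net loss
```

The point is just that the duplicate only shows up in the ledger once it diverges from the original in a way that generates disutility.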
I’m basically assuming this reality-fluid stuff is legit for the purposes of this post. I included the most common argument in its favor (the probability argument), but I’m not setting out to defend it; I’m just exploring the consequences.
I’m puzzled as to why they should matter less.
Because they are less.
Why?
If you’re in a simulation right now, how would you feel about those running the machine simulating you? Do you grant them moral sanction to do whatever they like with you, because you’re less than them?
I mean, maybe you’re here as a representative of the people running the machine simulating me. I’m not sure I like where your train of thought is going, in that case.
Honestly, I would have upvoted just for this bit.