I think I see where you’re going with this, but your presentation actually leads somewhere else entirely; you state your point in the introduction and the conclusion, but the bulk of your argument’s weight falls where your point is nowhere to be found.
Your argument, as presented, seems to be along these lines: Suppose there is one person, and one person is tortured. That’s really important. Suppose there are a billion people, and one of them is tortured. That’s not very important.
What I think you’re getting at is more along these lines: Suppose there is one person, and one person is tortured; that is extremely important to that person. Suppose there are a billion copies of that one person, and one of them is tortured. Even a slight benefit arising from the decision that leads to that torture may outweigh the badness of the torture, because the benefit is reproduced a billion minus one times.
In other words, your presentation conflates the number of people with the importance of something bad happening to one of them. You don’t discuss potential rewards at all; it’s just, this torture is happening. Torture is equally bad regardless of the percentage of the population that is being tortured (given a specific number of people that are being tortured, I mean); we shouldn’t care less about torture merely because there are more people who aren’t being tortured. Whereas your actual point, as hinted at, is that, for some group that gains utility per individual as a result of the decision that results in that torture, the relative badness of that decision is dependent on the size of the group.
Or, in other words, you seem to be aiming for a discussion of dustmotes in eyes compared to torture of one person, but you’re forgetting to actually discuss the dustmotes.
No. My argument is as follows: suppose there is one person, duplicated a billion times. These copies are identical; they are the same person. Suppose one copy is deleted. This is equivalent to a one-in-a-billion chance of killing all of them. Furthermore, this holds for torture. Assuming this argument holds (and I have yet to see a reason it shouldn’t), the scenario at the bottom is a Good Thing.
However, if you consider it in terms of rewards adding up, then surely the trillions (etc.) of copies of the society at the end receive enough utility to outweigh the disutility of the few copies getting tortured?
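To make the arithmetic here explicit (a sketch only; the symbols $N$, $k$, $b$, and $T$ are illustrative and appear nowhere in the thread): suppose the society is simulated $N$ times, $k \ll N$ of those copies contain the torture, $T$ is the full badness of that torture, and $b$ is the utility each copy of the society enjoys. On the reality-fluid view, each copy carries weight $1/N$, so the torture counts for only $kT/N$, which shrinks toward zero as $N$ grows, regardless of any benefit. On the rewards-adding-up view invoked above, the torture counts for the full $kT$, but the total benefit is $Nb$, which outweighs it once $N > kT/b$. Either way, the conclusion depends entirely on $N$ being vast.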
Would you do me a favor and reread my comment?
The point wasn’t that that was what you were trying to say; my point was that that was the most natural way to interpret what you were actually saying. Hence my comment at the end: you seem to be trying to raise a dustmotes-in-the-eye-versus-torture argument, but you never actually discuss the dustmotes. Your comments put a great deal of weight on the torture aspect of the argument, which is already an emotionally charged concept, and then you never discuss the utility that actually comes out of it. Allow me to elaborate:
Suppose we vivisect an entire universe full of simulated people. If there are enough people, it might not matter; the utility might outweigh the costs.
That’s what your thread is right now. The reader is left baffled as to what utility you could possibly be referring to. Are we referring to the utility some lunatic gets from knowing that there are people getting vivisected? And are we disregarding the disutility of the people getting vivisected? Why is their disutility lower because there are more people in the universe? Does the absolute importance of a single person decrease relative to the absolute number of people?
You don’t discuss the medical knowledge, or whatever utility everybody else is getting, from these vivisections.
Are you familiar with the dustmotes-versus-torture thought experiment I’m referring to?
Ah.
I thought I made that clear:
If we simulate an entire society a trillion times, or 3^^^^^^3 times, or some similarly vast number, and then simulate something horrific—an individual’s private harem or torture chamber or hunting ground—then the people in this simulation are not real. Their needs and desires are worth, not nothing, but far less than the merest whims of those who are Really Real. They are, in effect, zombies—not quite p-zombies, since they are conscious, but e-zombies—reasoning, intelligent beings that can talk and scream and beg for mercy but do not matter.
I think I may have laid too much emphasis on the infinitesimally small Reality of the victims, as opposed to the Reality of the citizens.
I’m puzzled as to why they should matter less.
Because they are less.
Retracted last comment because I realized I was misreading what you were saying.
Let me approach this from another direction:
You’re basically supposing that a 1/N chance of being tortured is morally equivalent to a 1/N chance of being tortured with an implicit guarantee that somebody is going to get tortured. I think it is consistent to regard a 1/N chance of me being tortured, for some sufficiently large N, as less important than one person in N actually being tortured.
If you create a precise duplicate of the universe in a simulation, I don’t regard us as having gained anything; I consider that two instances of indistinguishable utility aren’t cumulative. If you create a precise duplicate of me in a simulation and then torture that duplicate, utility decreases.
This may seem to be favoring “average” utility, but I think the distinction is in the fact that torturing an entity represents, not lower utility, but disutility; because I regard a duplicate universe as adding no utility, the negative utility shows up as a net loss.
I’d be hard-pressed to argue for the “indistinguishability” part, though I can sketch where the argument would lie: because utility exists as a product of the mind, and duplicate minds are identical from an internal perspective, an additional indistinguishable mind doesn’t add anything. Of course, this argument may require buying into the anthropic perspective.
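As a sketch of the aggregation rule being proposed (the notation is illustrative, not from the comment above): if the original universe has utility $U$, a perfect duplicate adds nothing, so the total stays $U$ rather than $2U$; but torturing the duplicate introduces a new disutility $T$, so the total drops to $U - T$. Indistinguishable goods collapse into one, while the harm still registers in full, which is why the duplicate-plus-torture case comes out as a net loss rather than merely a lower average.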
I’m basically assuming this reality-fluid stuff is legit for the purposes of this post. I included the most common argument in its favor (the probability argument), but I’m not setting out to defend it; I’m just exploring the consequences.
Why?
If you’re in a simulation right now, how would you feel about those running the machine simulating you? Do you grant them moral sanction to do whatever they like with you, because you’re less than them?
I mean, maybe you’re here as a representative of the people running the machine simulating me. I’m not sure I like where your train of thought is going, in that case.
Honestly, I would have upvoted just for this bit.