Bite the bullet and select 2. There doesn’t really seem to be anything inherently wrong with that, while 3 seems ad hoc and thus bad.
You seem to underestimate the difference between two human minds, let alone other minds. An additional benefit of method 2 is that it explains why human suffering is more “wrong” than an ant’s suffering. This, I guess, is the intuitive way most people here think.
Option 3 seems approximately how we deal with people in reality: we say things like “A is so like B” or “A is so unlike B” without thinking that A and B are any less separate individuals with distinct rights and legal and moral statuses. It’s only when A and B get much too close in their reactions that we flip into a different mode and wonder whether they are truly separate.
Since this is the moral intuition, I see no compelling reason to discard it. It doesn’t seem to contradict any major results, I haven’t yet seen a thought experiment where it becomes ridiculous, and it doesn’t over-burden our decision process.
If any of those statements turn out to be wrong, then I would consider embracing 2).
Actually, I updated thanks to reading this paper by Bostrom, so I gotta rephrase stuff.
First, two identical people living in separate simulations are just as much separate people as any other separate people are. It doesn’t matter if an identical replica exists somewhere else; the value of this particular person doesn’t decrease the tiniest bit. They’re distinct but identical.
Second, the uniqueness of their identity decreases as more people like them exist, as you described with option #2. However, this property is not very interesting, since it has nothing to do with their personal experiences.
So what we get is this: the simulations together hold 100 copies of our experiences, and we’re offered either deal A, which kills 99 of those copies for certain, or deal B, which kills all of us with 99% probability. In both cases we cease to be with 99% certainty, so both deals have the same negative expected utility.
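To spell out the arithmetic (a sketch under two assumptions of mine: utility is counted simply as the number of surviving copies, and deal A spares one copy uniformly at random):

$$\mathbb{E}[\text{survivors} \mid A] = 1, \qquad \mathbb{E}[\text{survivors} \mid B] = 0.01 \times 100 = 1,$$

$$P(\text{a given copy survives} \mid A) = \tfrac{1}{100} = 0.01 = P(\text{a given copy survives} \mid B).$$

So from the outside the two deals produce the same expected number of copies, and from the inside each copy faces the same 99% chance of ceasing to be.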
But the interesting thing happens when we try to ensure a continued flow of our existence: by some magic, that seems to favor A over B, even though the two seem perfectly equal. I have a feeling that the problematic nature of handling divergence comes from the irrational nature of this tendency to value a continued flow of existence. But I don’t know.
First, two identical people living in separate simulations are just as much separate people as any other separate people are. It doesn’t matter if an identical replica exists somewhere else; the value of this particular person doesn’t decrease the tiniest bit. They’re distinct but identical.
The problem with this is that preference is for choosing actions, and actions can’t be about specific people only; they are about the whole of reality. The question of how much you value a person only makes sense in the context of a specific way of combining valuations of individual people into valuations of reality.
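One way to make “combining valuations” concrete (the two aggregation rules below are my own illustrative assumptions, not anything from the thread or the paper): let $u(p_i)$ be the value assigned to person $p_i$, and compare

$$U_{\text{sum}} = \sum_i u(p_i) \qquad \text{versus} \qquad U_{\text{distinct}} = \sum_{[p]} u([p]),$$

where the second sum runs over equivalence classes of exactly identical people. Under $U_{\text{sum}}$, an exact replica doubles a person’s contribution to the value of reality; under $U_{\text{distinct}}$, it adds nothing. “How much is this person worth?” therefore has no fixed answer until one such rule is chosen.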
I’ve read the paper, and disagree with it (one flippant way of phrasing my disagreement is to enquire whether reflections in mirrors have identical moral status). See the beginning of my first post for a better objection.