I’m not sure how people would reason if duplication were common. Lots of issues would need to be addressed once it became commonplace. Are two duplicates that have had identical experiences actually different people (and do they count twice in moral calculations, for instance)? It seems just as reasonable to count duplicates separately only once they have had different experiences. And in most scenarios with ems or AIs, the ability to duplicate would come with the ability to completely control the duplicate’s experiences, making it hard to see how the duplicate could have any justified knowledge of the external world.
But Sleeping Beauty is not a problem with such characteristics. As described, it is only mildly fantastical, with perfect memory erasure (which needn’t actually be perfect, just very good) being the only unusual feature. So it should be possible to solve it using the tools we use for reasoning about ordinary situations, and one would expect the answer obtained that way to remain correct according to some hypothetical more general theory of inference that might be devised in the future, just as the answers to problems solved 200 years ago using Newtonian mechanics are still regarded as correct today, despite the subsequent developments of relativity and quantum mechanics.
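As a concrete illustration of what those ordinary tools give here, a small Monte Carlo sketch is below. It only encodes the standard statement of the problem (fair coin, one awakening on heads, two on tails with memory erasure in between); it is an illustration rather than anyone’s argument. It shows how the two familiar answers, 1/2 and 1/3, both fall out of the same simulated long run depending on whether heads is counted per experiment or per awakening, which is exactly where the disagreement lies.

```python
import random

def simulate(n_trials=1_000_000, seed=0):
    """Illustrative Monte Carlo of the Sleeping Beauty setup.

    Heads: Beauty is awakened once (Monday).
    Tails: Beauty is awakened twice (Monday and Tuesday), with her memory
    erased in between, so the awakenings are subjectively indistinguishable.
    """
    rng = random.Random(seed)
    heads_experiments = 0   # experiments in which the coin landed heads
    heads_awakenings = 0    # awakenings at which the coin had landed heads
    total_awakenings = 0

    for _ in range(n_trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_experiments += 1
            heads_awakenings += 1  # the single heads awakening

    # Two ways of counting give the two familiar answers:
    per_experiment = heads_experiments / n_trials        # ~1/2 ("halfer" count)
    per_awakening = heads_awakenings / total_awakenings  # ~1/3 ("thirder" count)
    return per_experiment, per_awakening

if __name__ == "__main__":
    half, third = simulate()
    print(f"Fraction of experiments with heads: {half:.3f}")
    print(f"Fraction of awakenings with heads:  {third:.3f}")
```

The simulation doesn’t settle which count corresponds to Beauty’s credence on waking; it just makes explicit that the dispute is over the choice of reference class, not over the long-run frequencies themselves.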
The questions of whether two duplicates are actually different people and of whether they count twice in moral calculations are different questions, and would likely be answered differently. People already answer analogous questions differently in the real world: someone is usually said to remain the same person over time, yet if you ask whether it is better to improve, by the same amount, the daily quality of life of someone who is about to die tomorrow or of someone who will go on to live a long life, I think most people would agree that the second is better because the beneficiary will get more use out of it, even though each time-slice of the beneficiary benefits just as much in either case. Anyway, I was specifically talking about the extent to which differentiated experiences would influence the subjective probability beliefs of such people. If they find it useful to assign probabilities in ways that depend on how copies of themselves are differentiated, that is probably because how much they care about future copies of themselves depends on how those copies are differentiated from each other, and I can’t see why they would decide that the existence of a future copy of them reduces the marginal value of an additional identical copy to zero while having no effect on the marginal value of an additional almost-identical copy.
Sleeping Beauty may be less fantastical, but it is still fantastical enough that such problems did not influence the development of probability theory. As I said, even testing a hypothesis that correlates with how likely you are to survive to see the result of the test is too fantastical to have influenced the development of probability theory, despite such things actually occurring in real life. My point was that people who see Sleeping Beauty-like problems as a normal part of everyday life would likely have a better perspective on the problem than we do, so it might be worth trying to think from their perspective. The fact that Sleeping Beauty-type problems being normal is more fantastical than a Sleeping Beauty-type problem happening once doesn’t change this.
“My point was that people who see Sleeping Beauty-like problems as a normal part of everyday life would likely have a better perspective on the problem than we do”
Yes, I agree.
“so it might be worth trying to think from their perspective.”
Yes, it might. But I think we shouldn’t expect to be very successful in this attempt. So if trying to do that gives a result that contradicts ordinary reasoning, which really ought to suffice for Sleeping Beauty, then we’re probably not thinking from their perspective very well.
I agree that it is difficult to see things from the perspective of people in such a world, but we should at least be able to think about whether certain hypotheses about how they’d think are plausible. That may still be difficult, but ordinary reasoning is not easy to apply reliably in these cases either; if it were, there would presumably already be a consensus on how to address the Sleeping Beauty problem.