Yes, you’re right (as are the other replies making similar points). I tried hard once more to come up with an accurate analogy of the above problem that would be realizable in the real world, but it seems like it’s impossible to come up with anything that doesn’t involve implanting false memories.
After giving this some more thought, it seems to me that the problem with the copying scenario is that once we eliminate the assumption that each agent has a unique continuous existence, all human intuitions completely break down, and we can compute only mathematically precise problems formulated within strictly defined probability spaces. The trouble is that since we're breaking one of the fundamental common-sense assumptions, the results may or may not make any intuitive sense, and as soon as we step outside formal, rigorous math, we can only latch onto subjectively preferable intuitions, which may differ between people.