In this way I reach a contradiction among three beliefs: that the number of copies doesn’t matter, that the existence of distant parallel copies of myself doesn’t make much difference to what I should do, and that there is value in making people happy. Of these, the most questionable seems to be the assumption that copies don’t matter, so this line of reasoning turns me away from that belief.
I found a relevant post by Hal Finney from a few years ago: http://groups.google.com/group/everything-list/browse_thread/thread/f8c480558da8c769
I’ll have to track down what’s causing our disagreement here...
ETA: In Hal’s thought experiment, the copies are very far apart from each other (they had to arise by chance in an infinite universe), and our “identical copy immortality” intuition only applies to copies that are relatively near each other. So that explains the apparent disagreement.
Reading the post you linked to, it feels like some sort of fallacy is at work in the thought experiment as the results are tallied up.
Specifically: suppose we live in copies-matter world, and furthermore suppose we create a multiverse of 100 copies, 90 of which get the good outcome and 10 of which get the bad outcome (using the aforementioned biased quantum coin, which through sheer luck gives us an exact 90:10 split across 100 uncorrelated flips). Since copies matter, we can conclude it’s a moral good to post hoc shut down 9 of the 10 bad-outcome copies and replace those simulacra with 9 duplicates of existing good-outcome copies. We do a moral wrong by discontinuing 9 bad-outcome copies, but a greater moral right by creating 9 new good-outcome copies, and thus we paperclip-maximize our way toward greater net utility.
Moreover, still living in copies-matter world, it’s a net win to shut down the final bad-outcome copy (i.e. “murder”, for lack of a better term, the last of the bad-outcome copies) and replace it with one more good-outcome copy, thus guaranteeing the good outcome for all copies with 100% odds. This holds even supposing the delta between the good outcome and the bad outcome was merely one speck of dust in the eye, and even supposing the final bad-outcome copy was content with the bad outcome and would have preferred to continue existing.
At this point, the overall multiverse outcome is identical to what we’d get if the quantum coin had come up heads every time, so we might as well not have involved quantum pocket change in the first place. Instead, knowing that one outcome was better than the other, we should simply have forced the known-good outcome on all copies from the start. With that, copies-matter world and copies-don’t-matter world are reunified.
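For concreteness, here is a minimal Python sketch of the utility accounting this argument relies on, assuming (as the copies-matter premise implies) that each copy contributes its outcome’s utility additively; the specific utility values are hypothetical placeholders, since only their ordering matters.

    # Hypothetical, additive utility ledger for the copies-matter argument.
    # The numeric values are placeholders; only U_GOOD > U_BAD matters.
    U_GOOD, U_BAD = 1.0, 0.0

    def total_utility(n_good, n_bad):
        """Net utility of a multiverse containing the given mix of copies."""
        return n_good * U_GOOD + n_bad * U_BAD

    print(total_utility(90, 10))   # initial 90:10 split    -> 90.0
    print(total_utility(99, 1))    # 9 bad copies replaced   -> 99.0
    print(total_utility(100, 0))   # last bad copy replaced  -> 100.0

Each replacement strictly increases the total, which is the ratchet that drives the argument to its all-heads endpoint.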
Returning to copies-don’t-matter world (and our intuition that that’s where we live), it feels like there’s an almost-but-not-quite-obvious analogy with Shannon entropy and/or Kolmogorov-Chaitin complexity lurking just under the surface.
Ruminating further, I think I’ve narrowed down the region where the fallacious step occurs.
Suppose there are 100 simulacra, and suppose for each simulacrum you flip a coin biased 9:1 in favor of heads. You choose one of two actions for each simulacrum, depending on whether the coin shows heads or tails, but the two actions have equal net utility for the simulacra, so there are no moral conundrums. Now, even though 90 heads and 10 tails is the single most likely count, the sequences comprising it are nonetheless vastly outnumbered by all the remaining sequences, as the quick check below confirms. Suppose that after flipping 100 biased coins, the actual result is 85 heads and 15 tails.
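The combinatorics can be verified with nothing but Python’s standard library; a quick sketch:

    from math import comb

    n, p = 100, 0.9
    # Raw sequence counts: exactly-90-heads sequences vs. everything else.
    seqs_90 = comb(n, 90)
    print(seqs_90)           # ~1.7e13 sequences with exactly 90 heads
    print(2**n - seqs_90)    # ~1.3e30 remaining sequences: vastly more
    # Probability mass: 90 heads is the modal count, yet holds only ~13%.
    p_90 = comb(n, 90) * p**90 * (1 - p)**10
    print(p_90, 1 - p_90)    # ~0.1319 vs ~0.8681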
What is each simulacrum’s subjective probability? The coin flips are independent events, so the subjective probability of each coin flip must be 9:1 favoring heads. The fact that only 85 simulacra actually experienced heads is completely irrelevant.
Subjective probability arises from knowledge, so in practice none of the simulacra experiences a subjective probability after a single coin flip. If the coin flip is repeated many times for all simulacra, then as each simulacrum experiences more coin flips while iterating through its state function, its observed frequency will gradually converge on the objective probability of 90%. The first coin flip merely biases each simulacrum’s experience, determining the direction from which it converges on the limit.
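A small simulation (a sketch only; the seed and flip counts are arbitrary choices of mine) illustrates the convergence claim:

    import random

    random.seed(0)
    P_HEADS = 0.9

    def running_freq(first_is_heads, n_flips=1000):
        """One simulacrum's running frequency of heads, given its first flip."""
        heads = 1 if first_is_heads else 0
        freqs = []
        for i in range(2, n_flips + 1):
            heads += random.random() < P_HEADS
            freqs.append(heads / i)
        return freqs

    for first in (True, False):
        f = running_freq(first)
        # Frequencies after 2, 10, 100, and 1000 flips.
        print(first, [round(f[i], 3) for i in (0, 8, 98, 998)])
    # A heads-first simulacrum typically approaches 0.9 from above, a
    # tails-first one from below; either way the limit is the objective 0.9.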
That said, take what I say with a grain of salt, because I seriously doubt this can be extended from the classical realm to cover quantum simulacra and the Born rule.
And, since I can’t let that stand without tangling myself up in Yudkowsky’s “Outlawing Anthropics” post, I’ll present my conclusion on that as well:
To recapitulate the scenario: suppose 20 copies of me are created and go to sleep, and a fair coin is tossed. If heads, 18 go to green rooms and 2 go to red rooms; if tails, vice versa. Upon waking, each of the copies in green rooms is asked to consent to the proposal “Give $1 to each copy in a green room, while taking $3 from each copy in a red room.” (All must agree, or something sufficiently horrible happens.)
The correct answer is “no”. Because I have copies and I am interacting with them, it is not proper for me to infer from my green room that I live in heads-world with 90% probability. Rather, it is certain that at least 2 of me are living in green rooms, and if I am one of them, the odds are 50-50 whether I have 1 companion or 17. I must not change my answer if I value my 18 potential copies in red rooms.
However, suppose there were only one of me instead. There is still a coin flip, and there are still 20 rooms (18 green and 2 red, or vice versa, depending on the flip), but I am placed into one of the rooms at random. Now I wake in a green room, and I am asked a slightly different question: “Would you bet the coin was heads? Win $1 if it was, lose $3 if it wasn’t.” My answer is now “yes”: I am no longer interacting with copies, the expected utility is +$0.60, so I take the bet.
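As a sanity check, here is the arithmetic for both versions, using only the payoffs and the fair-coin weighting given in the scenario:

    # Version 1: twenty copies; if all green-roomers say "yes", the trade happens.
    # Heads: 18 greens gain $1 each, 2 reds lose $3 each.  Tails: vice versa.
    heads_total = 18 * 1 - 2 * 3      # +$12 summed over all copies
    tails_total = 2 * 1 - 18 * 3      # -$52 summed over all copies
    print(0.5 * heads_total + 0.5 * tails_total)    # -$20.0, so answer "no"

    # Version 2: one person in a random room, waking in green, offered the bet.
    p_heads_given_green = (0.5 * 18/20) / (0.5 * 18/20 + 0.5 * 2/20)  # = 0.9
    print(p_heads_given_green * 1 - (1 - p_heads_given_green) * 3)    # about +$0.60

The same payoff matrix yields opposite answers, which is exactly the asymmetry between the copies case and the single-observer case.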
The stuff about Boltzmann brains is a false dilemma. There’s no point in privileging the Boltzmann brain scenario over any of the other “trapped in the Matrix” / “brain in a jar” scenarios, of which there is a limitless supply. See, for instance, this lecture from Lawrence Krauss -- the relevant bits run from 0:24:00 to 0:41:00 -- which gives a much simpler explanation for why the universe began with low entropy, and doesn’t tie itself into knots by supposing Boltzmann pocket universes embedded in a high-entropy background universe.