And, since I can’t let that stand without tangling myself up in Yudkowsky’s “Outlawing Anthropics” post, I’ll present my conclusion on that as well:
To recapitulate the scenario: Suppose 20 copies of me are created and go to sleep, and a fair coin is tossed. If heads, 18 go to green rooms and 2 go to red rooms; if tails, vice versa. Upon waking, each copy in a green room is asked: “Shall we give $1 to each copy in a green room and take $3 from each copy in a red room?” (All must agree, or something sufficiently horrible happens.)
The correct answer is “no”. Because I have copies and I am interacting with them, it is not proper for me to infer from my green room that I live in heads-world with 90% probability. Rather, it is certain that at least 2 of me are in green rooms, and given that I am one of them, the odds are 50-50 whether I have 1 green-room companion or 17. I must not change my answer just because I woke in a green room, if I value the 18 copies of me who may be sitting in red rooms.
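To put numbers on that, here is a quick Python sketch, using nothing beyond the payoffs stated above: the naive 90% update makes the bet look profitable, but the pooled outcome across all copies is a loss.

```python
# Payoffs as stated above: +$1 per green-room copy, -$3 per red-room copy,
# applied if the green-roomers all say "yes".

# Naive view: "I woke in a green room, so heads has 90% probability."
naive_ev = 0.9 * (18 * 1 - 2 * 3) + 0.1 * (2 * 1 - 18 * 3)
print(f"EV under the naive 90/10 update: ${naive_ev:+.2f}")      # +$5.60

# Correct view: the coin is still 50/50, and all copies share one pot.
heads_total = 18 * 1 - 2 * 3   # heads-world: 18 green gain $1, 2 red lose $3 -> +$12
tails_total = 2 * 1 - 18 * 3   # tails-world: 2 green gain $1, 18 red lose $3 -> -$52
collective_ev = 0.5 * heads_total + 0.5 * tails_total
print(f"EV pooled across all copies:     ${collective_ev:+.2f}")  # -$20.00
```

The naive update makes the bet look like free money, but taken together the copies expect to lose $20, which is why “no” is the right answer.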
However, suppose there were only one of me instead. There is still a coin flip, and there are still 20 rooms (18 green and 2 red, or vice versa, depending on the flip), but I am placed into one of the rooms at random. Now I wake in a green room and am asked a slightly different question: “Would you bet that the coin was heads? You win $1 if it was, and lose $3 if it wasn’t.” My answer is now “yes”: I am no longer interacting with copies, the expected utility is +$0.60, so I take the bet.
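The same arithmetic, in the same style, for the solo version:

```python
# Solo version: one person, placed in a room at random, green room observed.
# Bayes: P(heads | green) = P(green | heads)P(heads) / P(green)
p_heads_given_green = (0.5 * 18/20) / (0.5 * 18/20 + 0.5 * 2/20)   # = 0.9

# Bet on heads: win $1 if right, lose $3 if wrong.
expected_utility = p_heads_given_green * 1 + (1 - p_heads_given_green) * (-3)
print(f"P(heads | green room) = {p_heads_given_green:.2f}")        # 0.90
print(f"Expected utility of the bet = ${expected_utility:+.2f}")   # +$0.60
```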
The stuff about Boltzmann brains is a false dilemma. There’s no point in valuing the Boltzmann brain scenario over any of the other “trapped in the Matrix” / “brain in a jar” scenarios, of which there is a limitless supply. See, for instance, this lecture from Lawrence Krauss -- the relevant bits are from 0:24:00 to 0:41:00 -- which gives a much simpler explanation for why the universe began with low entropy, and doesn’t tie itself into loops by supposing Boltzmann pocket universes embedded in a high-entropy background universe.