In the few minutes before I read your comment, I was thinking about reformulating this as an Omega-style problem. (I know, I know… I do try not to be too gratuitous with my use of Omega, but what can I say — omnipotence and omniscience are surprisingly useful for clarifying and simplifying reasoning/decision problems.) So Omega tells you she’s going to flip a fair coin, and if it lands on tails, she’s going to make a million copies of you and put all of them in identical rooms, and if it lands on heads, she’ll just put the one of you in such a room. She flips the coin, you blank out for a moment, and as expected, you’re in an unfamiliar room. In this case, it doesn’t appear that adding or subtracting copies of you should have anything to do with what you believe about the coin flip. You saw her flip the coin yourself, and you knew that you’d be seeing the same thing no matter what side came up. She could come back a few minutes later and say “Hey, if and only if it was tails, I just made another million copies of you and put them in rooms identical to this one, kbye” which clearly shouldn’t change your belief about the coin, but seems to be a situation identical to if she had just said “two million” in the first place.
Okay, I think I’m more confidently on the 1⁄2 side now.
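For concreteness, the two standard answers to the copying scenario can be computed side by side. This is only an illustrative sketch; the variable names and the observer-weighting rule are my own framing of the two positions, not anything established above.

```python
# Copying scenario: heads -> one of you in a room,
# tails -> 1,000,000 copies of you in identical rooms.
N = 1_000_000  # copies created on tails

# Halfer answer: the coin was fair and you were guaranteed this exact
# experience either way, so waking in the room carries no information.
p_tails_halfer = 0.5

# Thirder / SIA-style answer: weight each branch of the coin flip by
# the number of observers having this experience in that branch.
p_tails_sia = (0.5 * N) / (0.5 * N + 0.5 * 1)  # = N / (N + 1)
```

Both numbers follow mechanically from their respective counting rules; the disagreement is entirely about which rule is appropriate.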
OK, I think I have a definite reductio ad absurdum of your point. Suppose you wake up in a room, and the last thing you remember is Omega telling you: “I’m going to toss a coin now. Whatever comes up, I’ll put you in the room. However, if it’s tails, I’ll also put a million other people each in an identical room and manipulate their neural tissue so as to implant in them a false memory of having been told all this before the toss. So, when you find yourself in the room, you won’t know whether we’ve actually had this conversation, or the memory of it was implanted after the toss.”
After you find yourself in the room under this scenario, you have the memory of these exact words spoken to you by Omega a few seconds ago. Then he shows up and asks you about the expected value of the coin toss. I’m curious if your 1⁄2 intuition still holds in this situation? (I’m definitely unable to summon any such intuition at all—your brain states representing this memory are obviously more likely to have originated from mass production in the tails case, just like finding a rare widget on the floor would be evidence for tails if Omega had pledged to mass-manufacture them should tails come up.)
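If one accepts the widget analogy, the update is a one-line Bayes computation. A sketch, under the assumption (taken from the scenario's wording) that on tails your real self is in a room too, so 1,000,001 people in total hold the memory:

```python
# False-memory scenario: on heads, exactly one person (you) wakes up
# with this memory; on tails, 1,000,001 people do (you plus a million
# people with implanted memories).
M = 1_000_000

# Widget-style update: weight each branch by how many instances of the
# memory it produces, just as one would for mass-manufactured widgets.
p_tails = (0.5 * (M + 1)) / (0.5 * (M + 1) + 0.5 * 1)  # = (M+1)/(M+2)
```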
But if you wouldn’t say 1⁄2, then you’ve just reached an awful paradox. Instead of just implanting the memories, Omega can also choose to change these other million people in some other small way to make them slightly more similar to you. Or a bit more, or even more—and in the limit, he’d just use these people as the raw material for manufacturing the copies of you, getting us back to your copying scenario. At which step does the 1⁄2 intuition emerge?
(Of course, as I wrote in my other comment, all of this is just philosophizing that goes past the domain of validity of human intuitions, and these questions make sense only if tackled using rigorous math with more precisely defined assumptions and questions. But I do find it an interesting exploration of where our intuitions (mis)lead us.)
I’m curious if your 1⁄2 intuition still holds in this situation?
I’d still say 1⁄2 is the right answer, yes.
But I’m trying to avoid using intuition here; when I do, it tends to find the arguments on both sides equally persuasive (obvious, even). If there is a right answer at all, then this is truly a case where we have no choice but to shut up and do the math.
Hm, let’s try pushing it a bit further.

Suppose you’re a member of a large exploratory team on an alien planet colonized by humans. As a part of the standard equipment, each team member has an intelligent reconnaissance drone that can be released to roam around and explore. You get separated from the rest of your team and find yourself alone in the wilderness. You send out your drone to explore the area, and after a few hours it comes back. When you examine its records, you find the following.
Apparently, a local super-smart creature with a weird sense of humor—let’s call it Omega—has captured several drones and released (some of?) them back after playing with them a bit. Examining your drone’s records, you find that Omega has done something similar to the false-memory game described above. You play the drone’s audio record, and you hear Omega saying: “I’ll toss a coin now. Afterwards, I’ll release your drone back in any case. If heads come up, I’ll destroy the other ten drones I have captured. If it’s tails, I’ll release them all back to their respective owners, but I’ll also insert this message into their audio records.” Assume you’ve heard a lot about Omega by now, since he’s done many such strange experiments on the local folks—and from what’s known about his behavior, it’s overwhelmingly likely that the message can be taken at face value.
What would you say about the expected coin toss result now? Would you take the fact that you got your drone back as evidence in favor of tails, or does your 1⁄2 intuition still hold? If not, what’s the difference relative to the false memory case above? (Unless I’m missing something, the combined memories of yourself and the drone should be exactly equivalent to the false memory scenario.)
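For what it’s worth, the drone version does admit a straightforward Bayesian reading, under one assumption that is mine and not the scenario’s: that a priori you are equally likely to be any of the eleven owners of a captured drone.

```python
# Drone scenario: 11 drones captured in total (yours plus the ten
# others mentioned in the message). On heads, one drone comes back
# carrying the message; on tails, all 11 do.
K = 11

p_return_given_heads = 1 / K   # assumption: you're a random owner
p_return_given_tails = 1.0

# Posterior that the coin came up tails, given that your drone
# returned carrying the message.
p_tails = (0.5 * p_return_given_tails) / (
    0.5 * p_return_given_tails + 0.5 * p_return_given_heads
)  # = 11/12
```

Note that unlike the false-memory case, nothing anthropic is needed here: the drones are ordinary objects, so this is just the widget argument in physical form.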
How about the following scenario? Say instead of Omega, it’s just a company doing a weird promotional scheme. They announce that they’ll secretly flip a coin in their headquarters, and if it’s tails, they’ll hand out prizes to a million random people from the phone directory tomorrow, whereas if it’s heads, they’ll award the same prize to only one lucky winner. The next day, you receive a phone call from them. Would you apply analogous reasoning in this case (and how, or why not)?
I think that’s very different… in the original scenario, heads and tails both result in you experiencing the same thing. In this case, if it comes up tails, it is a million times more likely that you will receive the prize, so getting a phone call from them is very significant Bayesian evidence.
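The update in the promotional-scheme version is uncontroversial; a quick sketch (the directory size D is a made-up number and cancels out of the answer):

```python
# Promotional scheme: heads -> 1 winner, tails -> 1,000,000 winners,
# drawn at random from a phone directory of D people. You got the call.
D = 50_000_000  # hypothetical directory size; it drops out below
W = 1_000_000   # winners on tails

p_call_given_heads = 1 / D
p_call_given_tails = W / D

p_tails = (0.5 * p_call_given_tails) / (
    0.5 * p_call_given_tails + 0.5 * p_call_given_heads
)  # = W / (W + 1), independent of D
```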
Yes, you’re right (as are the other replies making similar points). I tried hard once more to come up with an accurate analogy of the above problem that would be realizable in the real world, but it seems like it’s impossible to come up with anything that doesn’t involve implanting false memories.
After giving this some more thought, it seems to me that the problem with the copying scenario is that once we eliminate the assumption that each agent has a unique continuous existence, all human intuitions completely break down, and we can compute only mathematically precise problems formulated within strictly defined probability spaces. Trouble is, since we’re breaking one of the fundamental human common-sense assumptions, the results may or may not make any intuitive sense, and as soon as we step outside formal, rigorous math, we can only latch onto subjectively preferable intuitions, which may differ between people.
In the situation you state, yes, of course I place high probability on the coin having come up tails. However, in order for your situation to be truly analogous to the Sleeping Beauty problem, you would have to be guaranteed to get the phone call either way, which destroys any information you gain in your version.
...yeah, I think you’re right.
The prior probability of heads is still the same. But given the additional information that you got the call, it becomes more likely that the coin came up tails this time.