I just came up with a funny argument for thirdism in the sleeping beauty problem.
Let’s say I’m sleeping beauty, right? The experimenter flips a coin, wakes me up once in case of heads or twice in case of tails, then tells me the truth about the coin and I go home.
What do I do when I get home? In case of tails, nothing. But in case of heads, I put on some scented candles, record a short message to myself on the answering machine, inject myself with an amnesia drug from my own supply, and go to sleep.
...The next morning, I wake up not knowing whether I’m still in the experiment or not. Then I play back the message on the answering machine and learn that the experiment is over, the coin came up heads, and I’m safely home. I’ve forgotten some information and then remembered it; a trivial operation.
But that massively simplifies the problem! Now I always wake up with amnesia twice, so the anthropic difference between heads and tails is gone. In case of heads, I find a message on my answering machine with probability 1⁄2, and in case of tails I don’t. So failing to find the message becomes ordinary Bayesian evidence in favor of tails. Therefore while I’m in the original experiment, I should update on failing to find the message and conclude that tails are 2⁄3 likely, so thirdism is right. Woohoo!
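For anyone who wants to check the numbers, here's a quick Monte Carlo sketch of the modified setup (just my own illustration of the argument above; the function name and structure are made up): among awakenings where no message turns up, tails comes out at about two thirds.

```python
import random

def simulate(trials=100_000):
    # Modified setup: heads gives one awakening inside the experiment plus one
    # at home the next morning where the answering-machine message is found;
    # tails gives two awakenings inside the experiment and never any message.
    no_message_awakenings = 0
    no_message_and_tails = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        if heads:
            awakenings = [("heads", False), ("heads", True)]   # second one finds the message
        else:
            awakenings = [("tails", False), ("tails", False)]  # no message either time
        for coin, has_message in awakenings:
            if not has_message:
                no_message_awakenings += 1
                if coin == "tails":
                    no_message_and_tails += 1
    return no_message_and_tails / no_message_awakenings

print(simulate())  # ≈ 0.667, i.e. P(tails | awake and no message found) ≈ 2/3
```

The same number falls out of an ordinary Bayes update: P(no message | tails) = 1 and P(no message | heads) = 1⁄2, so P(tails | no message) = (1 · 1⁄2) / (1 · 1⁄2 + 1⁄2 · 1⁄2) = 2⁄3.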
You have changed the initial conditions. The initial conditions say nothing about any external memory.
I’m not using any external memory during the experiment. Only later, at home. What I do at home is my business.
Then, it’s not the experiment’s business.
If you deny that indistinguishable states of knowledge can be created, the sleeping beauty problem is probably meaningless to you anyway.
There are (at least) two (meaningful) versions of the sleeping beauty problem. One is yours.
But they are two different problems.
This argument is the same as Cian Dorr’s version with a weaker amnesia drug. In that experiment, a weaker amnesia drug is used on Beauty in the case of Heads, one that only delays her recollection for a few minutes; likewise, in your case the memory is delayed until the message is checked.
That argument was published in 2002, before the majority of the literature on the topic. Suffice to say it has not convinced halfers. Even supporters like Terry Horgan admit the argument is only suggestive and could run a serious risk of a slippery slope.
Thank you for the reference! Indeed it’s very similar; the only difference is that my version relies on the beauty’s precommitment rather than the experimenter’s, but that probably doesn’t matter. Shame on me for not reading enough.
Nothing shameful about that. Similar arguments, which Jacob Ross categorized as “hypothetical priors” arguments because they add another waking in the case of Heads, have not been a main focus of discussion in the literature in recent years. I would imagine most people haven’t read them.
In fact, you should take it as a compliment: some academic who probably spent a lot of time on it came up with the same argument as you did.
I agree with Thomas—even if this proved that thirdism is right when you are planning to do this, it would not prove that it is right if you are not planning to do this. In fact it suggests the opposite: since the update is necessary, thirdism is false without the update.
The following principle seems plausible to me: creating any weird situation X outside the experiment shouldn’t affect my beliefs, if I can verify that I’m in the experiment and not in situation X. Disagreeing with that principle seems like a big bullet to bite, but maybe that’s just because I haven’t found any X that would lead to anything except thirdism (and I’ve tried). It’s certainly fair to scrutinize the idea because it’s new, and I’d love to learn about any strange implications.
“The next morning, I wake up not knowing whether I’m still in the experiment or not.”
By creating a situation outside the experiment which is at first indistinguishable from being in the experiment, you affect how the experiment should be evaluated. The same is true, for example, if the whole experiment is done multiple times rather than only once.
Yeah, if the whole experiment is done twice, and you’re truthfully told “this is the first experiment” or “this is the second experiment” at the beginning of each day (a minute after waking up), then I think your reasoning in the first experiment (an hour after waking up) should be the same as though the second experiment didn’t exist. Having had a minute of confusion in your past should be irrelevant.
I disagree. I have presented arguments on LW in the past that if the experiment is run only once in the history of the universe, you should reason as a halfer, but if the experiment is run many times, you will assign heads a probability between 1⁄2 and 1⁄3, approaching one third as the number of runs approaches infinity. I think this applies even if you know the numerical identity of your particular run.
Interesting! I was away from LW for a long time and probably missed it. Can you give a link, or sketch the argument here?
Actually, I was probably mistaken. I think I was thinking of this post and in particular this thread and this one. (I was previously using the username “Unknowns”.)
I think I confused this with Sleeping Beauty because of the similarity of Incubator situations with Sleeping Beauty. I’ll have to think about it but I suspect there will be similar results.