(this way she gets $2 half the time instead of $1 half the time for heads).
That’s a good point. This line of reasoning works fine for the original Sleeping Beauty problem, and one can solve it without really worrying about what Sleeping Beauty’s subjective probabilities are. That is indeed similar to UDT.
Consider, then, the Sleeping Beauty problem with duplication instead of memory-erasure (i.e., a duplicate is made of SB if the coin lands tails). Now you can’t add their utilities together anymore. At what probability (descending from 1 to 0) should a newly-woken SB start taking the bet that they’re in the Tails world?
Doesn’t mean there’s not a correct one.
Depends on self-altruism and such concepts. No longer as clear cut. The question comes down to “do you prefer that your copies all get a dollar, or what”
If I need to specify the degree of “self-altruism,” suppose that Sleeping Beauty is not a human, but is instead a reinforcement-learning robot with no altruism module, self- or otherwise.
OK, if I’m interpreting this right, you mean to say that Sleeping Beauty is put to sleep, and then a coin is flipped. If it comes up tails, she is duplicated; if it comes up heads, nothing additional is done. Then, wake all copies of Sleeping Beauty up. What probability should any particular copy of Sleeping Beauty assign that the coin came up tails? If this is not the question you’re asking, please clarify for me. I know you mentioned betting but let’s just base this on log score and say the return is in utils so that there isn’t any ambiguity. Since you’re saying they don’t add utilities, I’m also going to assume you mean each copy of Sleeping Beauty only cares about herself, locally.
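(To pin down what I mean by log score, here is a minimal sketch. I am assuming base-2 logarithms, which is what makes the payoffs further down come out to round numbers; the function name is just mine.)

    import math

    def log_score(prob_assigned_to_actual_outcome):
        # You state a probability distribution; once the truth is revealed,
        # you are paid log2 of the probability you gave to what actually happened.
        return math.log2(prob_assigned_to_actual_outcome)

    # e.g. assigning 1/2 to the outcome that occurs pays log2(0.5) = -1 util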
So, given all of that, I don’t see how the answer is anything but 1⁄2. The coin is already flipped, and fell according to the standard laws of physics. Being split or not doesn’t do anything to the coin. Since each copy only cares about herself locally, in fact, why would the answer change? You might as well not copy Sleeping Beauty at all in the tails world, because she doesn’t care about her copies. Her answer is still 1⁄2 (unless of course she knew the coin was weighted, etc.).
I mean, think about it this way. Suppose an event X was about to happen. You are put to sleep. If X happens, 10,000 copies of you are made and put into green rooms, and you are put into a red room. If X does not happen, 10,000 copies of you are made and put into red rooms, and you are put into a green room. Then all copies of you wake up. If I was 99.9% sure beforehand that X was going to happen and woke up in a red room, I’d be 99.9% sure that when I exited that room, I’d see 10,000 copies of me leaving green rooms. And if I woke up in a green room, I’d be 99.9% sure that when I exited that room, I’d see 9,999 copies of me leaving green rooms, and 1 copy of me leaving a red room. Copying me doesn’t go back in time and change what happened. This reminds me of the discussion on Ultimate Newcomb’s Problem, where IIRC some people thought you could change the prime-ness of a number by how you made a choice. That doesn’t work there, and it doesn’t work here, either.
From the outside, though, there isn’t a single right answer; from the inside, of course, there is. From the outside you could count observer moments in different ways and get different answers, but IRL there’s only what actually happens. That’s what I was trying to get at.
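To make concrete what I mean by counting observer moments differently, here is a rough sketch. The only numbers taken from the setup above are the 99.9% prior and the 10,000 copies; the rest is just my way of labeling the two conventions.

    p_X = 0.999  # prior that X happens

    # Convention 1: waking up tells you nothing you didn't already know,
    # so you keep your prior.
    posterior_keep_prior = p_X

    # Convention 2: treat yourself as a uniformly random observer among the
    # 10,001 people who wake up. If X happened, 1 of 10,001 is in a red room;
    # if X didn't happen, 10,000 of 10,001 are.
    p_red_given_X = 1 / 10001
    p_red_given_not_X = 10000 / 10001
    posterior_random_observer = (p_X * p_red_given_X) / (
        p_X * p_red_given_X + (1 - p_X) * p_red_given_not_X
    )

    print(posterior_keep_prior)       # 0.999
    print(posterior_random_observer)  # roughly 0.09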
Now I suspect I may have misinterpreted your question. But at least tell me whether you think I answered my own question correctly, if it wasn’t the same as yours.
You answered the correct question. (yay)
OK, so you don’t think that I can travel back in time to change the probability of a past event? How about this problem: I flip a coin, and if the coin comes up heads I put a white stone into a bag, but if it comes up tails, I put one white stone and one black stone into the bag.
You reach into the bag and pull out a stone. It is white. From this, you infer that you are twice as likely to be in heads-world as in tails-world. Have you gone back in time and changed the coin?
No—you have not affected the coin at all. So how come you think the coin is more likely heads than tails? Because the coin has affected you.
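To spell out the update explicitly, here is a quick Bayes sketch (the variable names are mine, not anything from the setup):

    # Heads: the bag holds one white stone. Tails: it holds one white and one black.
    # You draw one stone uniformly at random and it comes up white.
    p_heads = 0.5
    p_white_given_heads = 1.0   # the only stone is white
    p_white_given_tails = 0.5   # one of the two stones is white

    p_heads_given_white = (p_heads * p_white_given_heads) / (
        p_heads * p_white_given_heads + (1 - p_heads) * p_white_given_tails
    )
    print(p_heads_given_white)  # 2/3, i.e. heads is now twice as likely as tails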
The paths followed by probability are not the paths of causal influence, but the paths of logical implication, which run in both directions.
Yep, that was pretty dumb. Thanks for being gentle with me.
However, I still don’t understand what’s wrong with my conclusion in your version of Sleeping Beauty. Upon waking, Sleeping Beauty (whichever copy of her) doesn’t observe anything (colored stones or otherwise) correlated with the result of the coin flip. So it seems she has to stick with her original probability of tails having been flipped, 1⁄2.
Next, out of curiosity, if you had participated in my red/green thought experiment in real life, what would you anticipate upon waking up in a red room (not how would you bet, because I think IRL you’d probably care about copies of you)? I just can’t bring myself to imagine seeing 9,999 copies of me coming out of their respective rooms and telling me they saw red, too, when I had been so confident beforehand that this very situation would not happen. Are you anticipating in the same way as me here?
Finally, let’s pull out the anthropic version of your stones-in-a-bag experiment. Let’s say someone flips an unbiased coin; if it comes up heads, you are knocked out and wake up in a white room, while if it comes up tails, you are knocked out, then copied, and one of you wakes up in a white room and the other wakes up in a black room. The person in each room (or in just the white room, if that’s the only room involved) is asked to guess whether the coin came up heads or tails. Let’s also say that, for whatever reason, the person has resolved to guess heads if ey wakes up in the white room. If ey wakes up in the black room, ey won’t be guessing; ey’ll just be right. Now, if we repeat this experiment many times with different people, then, looking at all of the different people (/copies) who actually did wake up in white rooms, about half of them will have guessed right. Right now I’m just talking about watching this experiment many times from the outside. In fact, it doesn’t matter with what probability the person resolves to guess heads upon waking in the white room; this result holds: around half of the guesses from white rooms will be correct, in the long run.
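Here is a quick simulation sketch of that outside view, just throwaway code of mine; the guessing probability q is arbitrary, which is the point:

    import random

    runs = 200_000
    q = 0.7   # probability of guessing "heads" in the white room; pick anything
    correct = 0

    for _ in range(runs):
        coin_is_heads = random.random() < 0.5
        # Exactly one person wakes up in a white room on every run:
        # the original on heads, one of the two copies on tails.
        guesses_heads = random.random() < q
        if guesses_heads == coin_is_heads:
            correct += 1

    print(correct / runs)  # hovers around 0.5 no matter what q is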
Now, given all of that, here’s how I would reason from the inside of this experiment, if we’re scoring a stated probability of heads with log score paid in utils (and if, for some reason, I didn’t care about copies of me, which IRL I would). Please tell me if you’d reason differently, and why:
In a black room, duh. So let’s say I wake up in a white room. I’d say, well, I only want to maximize my utility. The only way I can be sure to uniquely specify myself, now that I might have been copied, is to say that I am “notsonewuser-in-a-white-room”. Saying “notsonewuser” might not cut it anymore. Historically, when I’ve watched this experiment, “person-in-a-white-room” guesses the coin flip correctly half of the time, no matter what strategy ey has used. So I don’t think I can do better than to say 1⁄2. So I say 1⁄2 and get −1 util (as opposed to an expected −1.08496… utils which I’ve seen historically hold up when I look at all the people in white rooms who have said a 2⁄3 probability of heads).
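For the record, the arithmetic behind those two numbers (same base-2 convention as before):

    import math

    # Saying 1/2 either way scores log2(1/2) whichever way the coin fell.
    score_half = math.log2(0.5)

    # Saying 2/3 heads: among white-room wakings, half turn out to be in
    # heads-world (score log2(2/3)) and half in tails-world (score log2(1/3)).
    score_two_thirds = 0.5 * math.log2(2 / 3) + 0.5 * math.log2(1 / 3)

    print(score_half)        # -1.0
    print(score_two_thirds)  # about -1.08496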
Now I also need to explain why I think this differs from the obvious situation you brought up (obvious in that the answer was obvious, not in that it wasn’t a good point to make, I think it definitely was!). For one thing, looking historically at people who pick out white stones, they have been in heads-world 2⁄3 of the time. I don’t seem to have any other coherent answer for the difference, though, to be honest (and I’ve already spent hours thinking about this stuff today, and I’m tired). So my reduction’s not quite done, but given the points I’ve made here, I don’t think yours is, either. Maybe you can see flaws in my reasoning, though. Please let me know if you do.
EDIT: I think I figured out the difference. In the situation where you are simply reaching into a bag, the event “I pull out a white stone.” is well defined. In the situation in which you are cloned, the event “I wake up in a white room.” is only well-defined when it is interpreted as “Someone who subjectively experiences being me wakes up in a white room.”, and waking up in a black room is not evidence against the truth of this statement, whereas pulling out a black stone is pretty much absolute evidence that you did not pull out a white stone.