As Jaynes (I recommend that link to you) says, not assigning prior probabilities doesn’t mean you don’t have prior probabilities—it just means you have to sneak them in without much examination. In practice, “not having prior probabilities” usually means assigning everything equal prior probability. But it leaves open a trap where people accidentally sneak in whatever prior probabilities they want—I think you fall into this on the Sleeping Beauty problem.
Hmm. Maybe. Can you go into a bit more detail? I’m not seeing it. AFAICT, I’m refusing to assign probability to a meaningless question, and whatever probability I might have assigned to that question has no consequence when you cash out that question to actual meaningful decisions.
the presumptuous philosopher is an idiot because both theories are consistent with us existing, so again we get no relative update.
I interpret “the presumptuous philosopher is an idiot” as a claim that the posterior probabilities of the two theories aren’t affected by the number of people produced. Because you exist in each theory, you don’t have to update the probability, so the conclusion is really a statement about the prior probability you’ve snuck in. This prior assigns equal weight to different possible states of the world, no matter how many people they produce.
But then, in the Sleeping Beauty problem, you use a different unspecified prior, where each person produced gets an equal weight, even if this means giving different weights to different states of the world.
The answer to both of these questions ultimately depends on the prior. But your procedure doesn’t care about the prior—it leaves the user to sneak in whatever prior is their favorite. Thus, different users will sneak in different priors and get different answers.
Yes, of course, but that’s fine; I’m not claiming any particular prior. What I am saying is that the prior is over possible worlds not observer moments, just as it is not over planets. I refuse to assign probabilities between observer moments, and assert that it is entirely unnecessary. If you can show me how I’m nonetheless assigning probability between observer moments by some underhanded scheme, or even where it matters what probabilities I sneak in, go ahead, but I’m still not seeing it.
But then, in the Sleeping Beauty problem, you use a different unspecified prior, where each person produced gets an equal weight, even if this means giving different weights to different states of the world.
I’m really confused. What question are you asking? If you’re asking what probability an outsider should assign to the coin coming up heads, the answer’s 1⁄2, if that outsider doesn’t have any information about the coin. nyan_sandwich implies this when ey says
(this way she gets $2 half the time instead of $1 half the time for heads).
If you’re asking what probability Sleeping Beauty should assign, that depends on what the consequences of making such an assignment are. nyan_sandwich makes this clear, too.
And, finally, if you’re asking for an authoritative “correct” subjective probability for Sleeping Beauty to have, I just don’t think that notion makes sense, as probability is in the mind. In fact, in this case, if you pushed me I’d say 1⁄2, because as soon as the coin is flipped, it lands, the position is recorded, and Sleeping Beauty waking up and falling asleep in the future can’t go back and change it. Though I’m not that sure that makes sense even here, and I know similar reasoning won’t make sense in more complicated cases. In the end it all comes down to how you count, but I’m not sure we have any disagreement about what actually happens during the experiment.
I say (and I think nyan_sandwich would agree), “Don’t assign subjective probabilities in situations where it doesn’t make a difference.” This would be like asking if a tree that fell in a forest made a sound. If you count one way, you get one answer, and if you count another way, you get another. To actually be able to pay off a bet in this situation you need to decide how to count first—that is what differentiates making probability assignments here from other, “standard” situations.
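To make the counting point concrete, here is a minimal Python sketch, assuming the standard setup (heads: one awakening, tails: two awakenings with memory erasure in between); the framing and variable names are mine, not anything from this exchange:

```python
import random

# Minimal sketch of "it all comes down to how you count" for the original
# Sleeping Beauty setup: heads -> one awakening, tails -> two awakenings.

runs = 100_000
tails_runs = 0
total_awakenings = 0
tails_awakenings = 0

for _ in range(runs):
    tails = random.random() < 0.5
    wakes = 2 if tails else 1
    total_awakenings += wakes
    if tails:
        tails_runs += 1
        tails_awakenings += wakes

# Count coin flips: tails in about half of the runs.
print(tails_runs / runs)                    # ~0.5
# Count awakenings: about two thirds of awakenings happen in tails runs.
print(tails_awakenings / total_awakenings)  # ~0.667
```

Both frequencies are facts about the same experiment; which one gets called “Sleeping Beauty’s probability of tails” is exactly the thing being argued about here.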
I expect you disagree with something I’ve said here, and I’d appreciate it if you’d flesh that out. I don’t necessarily expect to change your mind, and I think it’s a distinct possibility that you could change mine.
(this way she gets $2 half the time instead of $1 half the time for heads).
That’s a good point—this line of reasoning works fine for the original Sleeping Beauty problem, and one can solve it without really worrying what Sleeping Beauty’s subjective probabilities are. That is indeed similar to UDT.
Consider, then, the Sleeping Beauty problem with duplication instead of memory-erasure (i.e., a duplicate is made of SB if the coin lands tails). Now you can’t add their utilities together anymore. At what probability (descending from 1 to 0) should a newly-woken SB start taking the bet that they’re in the Tails world?
Doesn’t mean there’s not a correct one.

At what probability (descending from 1 to 0) should a newly-woken SB start taking the bet that they’re in the Tails world?
Depends on self-altruism and such concepts. No longer as clear cut. The question comes down to “do you prefer that your copies all get a dollar, or what”
If I need to specify the degree of “self-altruism,” suppose that Sleeping Beauty is not a human, but is instead a reinforcement-learning robot with no altruism module, self- or otherwise.
Consider, then, the Sleeping Beauty problem with duplication instead of memory-erasure (i.e., a duplicate is made of SB if the coin lands tails). Now you can’t add their utilities together anymore. At what probability (descending from 1 to 0) should a newly-woken SB start taking the bet that they’re in the Tails world?
OK, if I’m interpreting this right, you mean to say that Sleeping Beauty is put to sleep, and then a coin is flipped. If it comes up tails, she is duplicated; if it comes up heads, nothing additional is done. Then, wake all copies of Sleeping Beauty up. What probability should any particular copy of Sleeping Beauty assign that the coin came up tails? If this is not the question you’re asking, please clarify for me. I know you mentioned betting but let’s just base this on log score and say the return is in utils so that there isn’t any ambiguity. Since you’re saying they don’t add utilities, I’m also going to assume you mean each copy of Sleeping Beauty only cares about herself, locally.
So, given all of that, I don’t see how the answer is anything but 1⁄2. The coin is already flipped, and fell according to the standard laws of physics. Being split or not doesn’t do anything to the coin. Since each copy only cares about herself locally, in fact, why would the answer change? You might as well not copy Sleeping Beauty at all in the tails world, because she doesn’t care about her copies. Her answer is still 1⁄2 (unless of course she knew the coin was weighted, etc.).
I mean, think about it this way. Suppose an event X was about to happen. You are put to sleep. If X happens, 10,000 copies of you are made and put into green rooms, and you are put into a red room. If X does not happen, 10,000 copies of you are made and put into red rooms, and you are put into a green room. Then all copies of you wake up. If I was 99.9% sure beforehand that X was going to happen and woke up in a red room, I’d be 99.9% sure that when I exited that room, I’d see 10,000 copies of me leaving green rooms. And if I woke up in a green room, I’d be 99.9% sure that when I exited that room, I’d see 9,999 copies of me leaving green rooms, and 1 copy of me leaving a red room. Copying me doesn’t go back in time and change what happened. This reminds me of the discussion on Ultimate Newcomb’s Problem, where IIRC some people thought you could change the prime-ness of a number by how you made a choice. That doesn’t work there, and it doesn’t work here, either.
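For concreteness, the numbers in this thought experiment come out differently depending on how you count, which is the issue the rest of the exchange keeps circling. A minimal sketch of the arithmetic, assuming the 99.9% prior and the 10,000-copy setup above (it is not meant to settle which count is the right one):

```python
from fractions import Fraction

p_X = Fraction(999, 1000)          # prior that X happens
red_copies_if_X = 1                # X: you alone wake in a red room
red_copies_if_not_X = 10_000       # not-X: 10,000 copies wake in red rooms

# Count by runs (equivalently: don't treat waking in a red room as evidence):
# the fact "X happened" holds in 99.9% of runs.
print(float(p_X))                  # 0.999

# Count by red-room awakenings across many runs: what fraction of all
# red-room occupants live in runs where X happened?
frac = (p_X * red_copies_if_X) / (p_X * red_copies_if_X
                                  + (1 - p_X) * red_copies_if_not_X)
print(float(frac))                 # ~0.0908
```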
From the outside, though, there isn’t a right answer; from the inside, of course, there is. From the outside you could count observer moments in different ways and get different answers, but IRL there’s only what actually happens. That’s what I was trying to get at.
Now I expect I may have misinterpreted your question? But at least tell me if you think I answered my own question correctly, if it wasn’t the same as yours.
You answered the correct question. (yay)

Ok, so you don’t think that I can travel back in time to change the probability of a past event? How about this problem: I flip a coin, and if the coin is heads I put a white stone into a bag. But if the coin is tails, I put one white stone and one black stone into the bag.
You reach into the bag and pull out a stone. It is white. From this, you infer that you are twice as likely to be in heads-world as in tails-world. Have you gone back in time and changed the coin?
No—you have not affected the coin at all. So how come you think the coin is more likely heads than tails? Because the coin has affected you.
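A quick check of the stone arithmetic, assuming a fair coin and a uniformly random draw from whatever is in the bag:

```python
from fractions import Fraction

# Heads: the bag holds {white}.  Tails: the bag holds {white, black}.
p_heads = Fraction(1, 2)
p_white_given_heads = Fraction(1)      # the only stone is white
p_white_given_tails = Fraction(1, 2)   # one white, one black, drawn at random

p_white = p_heads * p_white_given_heads + (1 - p_heads) * p_white_given_tails
p_heads_given_white = p_heads * p_white_given_heads / p_white

print(p_heads_given_white)                              # 2/3
print(p_heads_given_white / (1 - p_heads_given_white))  # 2: heads twice as likely as tails
```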
The paths followed by probability are not the paths of causal influence, but the paths of logical implication, which run in both directions.
Yep, that was pretty dumb. Thanks for being gentle with me.
However, I still don’t understand what’s wrong with my conclusion in your version of Sleeping Beauty. Upon waking, Sleeping Beauty (whichever copy of her) doesn’t observe anything (colored stones or otherwise) correlated with the result of the coin flip. So it seems she has to stick with her original probability of tails having been flipped, 1⁄2.
Next, out of curiosity, if you had participated in my red/green thought experiment in real life, what would you anticipate if you woke up in a red room (not how you would bet, because I think IRL you’d probably care about copies of you)? I just can’t even physically bring myself to imagine seeing 9,999 copies of me coming out of their respective rooms and telling me they saw red, too, when I had been so confident beforehand that this very situation would not happen. Are you anticipating in the same way as me here?
Finally, let’s pull out the anthropic version of your stones-in-a-bag experiment. Let’s say someone flips an unbiased coin; if it comes up heads, you are knocked out and wake up in a white room, while if it comes up tails, you are knocked out, then copied, and one of you wakes up in a white room and the other wakes up in a black room. Let’s just say the person in each room (or in just the white room, if that’s the only one involved) is asked to guess whether the coin came up heads or tails. Let’s also say, for whatever reason, the person has resolved to guess heads if ey wakes up in the white room. If ey wakes up in the black room, ey won’t be guessing, ey’ll just be right. Now, if we repeat this experiment many times, with different people, then, looking at all of the different people (/copies) who actually did wake up in white rooms, about half of them will have guessed right. Right now I’m just talking about watching this experiment many times from the outside. In fact, it doesn’t matter with what probability the person resolves to guess heads if ey wakes up in the white room—this result holds (that around half of the guesses from white rooms will be correct, in the long run).
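Here is a small simulation sketch of that outside view; the setup is assumed as described above, and only the white-room occupant is modelled, since the claim is about guesses made from white rooms:

```python
import random

def white_room_guess_is_correct(p_guess_heads):
    """One run: flip the coin, then score the white-room occupant's guess.

    There is always exactly one person in a white room per run (the original
    on heads, one of the two copies on tails); that person guesses heads with
    probability p_guess_heads, independently of how the coin landed.
    """
    heads = random.random() < 0.5
    guesses_heads = random.random() < p_guess_heads
    return guesses_heads == heads

for p in (1.0, 2 / 3, 0.5):
    trials = [white_room_guess_is_correct(p) for _ in range(100_000)]
    print(p, sum(trials) / len(trials))   # ~0.5, whatever the strategy is
```

By contrast, if you tally plain white-stone draws from the earlier bag example, about two thirds of them happen in heads-world, which is the asymmetry wrestled with below.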
Now, given all of that, here’s how I would reason, from the inside of this experiment, if we’re doing log scores in utils (if for some reason I didn’t care about copies of me, which IRL I would) for a probability of heads. Please tell me if you’d reason differently, and why:
In a black room, duh: the coin was obviously tails. So let’s say I wake up in a white room. I’d say, well, I only want to maximize my utility. The only way I can be sure to uniquely specify myself, now that I might have been copied, is to say that I am “notsonewuser-in-a-white-room”. Saying “notsonewuser” might not cut it anymore. Historically, when I’ve watched this experiment, “person-in-a-white-room” guesses the coin flip correctly half of the time, no matter what strategy ey has used. So I don’t think I can do better than to say 1⁄2. I say 1⁄2 and get −1 util (as opposed to the expected −1.08496… utils which I’ve seen hold up historically when I look at all the people in white rooms who have said a 2⁄3 probability of heads).
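For reference, the two scores above (log base 2) can be checked directly, assuming the long-run frequency of heads among white-room occupants really is one half:

```python
from math import log2

p_heads_for_white_room = 0.5   # long-run frequency of heads, given a white room

def expected_log2_score(announced_p_heads):
    # Expected log score for announcing announced_p_heads as P(heads).
    return (p_heads_for_white_room * log2(announced_p_heads)
            + (1 - p_heads_for_white_room) * log2(1 - announced_p_heads))

print(expected_log2_score(1 / 2))   # -1.0
print(expected_log2_score(2 / 3))   # about -1.08496
```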
Now I also need to explain why I think this differs from the obvious situation you brought up (obvious in that the answer was obvious, not in that it wasn’t a good point to make, I think it definitely was!). For one thing, looking historically at people who pick out white stones, they have been in heads-world 2⁄3 of the time. I don’t seem to have any other coherent answer for the difference, though, to be honest (and I’ve already spent hours thinking about this stuff today, and I’m tired). So my reduction’s not quite done, but given the points I’ve made here, I don’t think yours is, either. Maybe you can see flaws in my reasoning, though. Please let me know if you do.
EDIT: I think I figured out the difference. In the situation where you are simply reaching into a bag, the event “I pull out a white stone.” is well defined. In the situation in which you are cloned, the event “I wake up in a white room.” is only well-defined when it is interpreted as “Someone who subjectively experiences being me wakes up in a white room.”, and waking up in a black room is not evidence against the truth of this statement, whereas pulling out a black stone is pretty much absolute evidence that you did not pull out a white stone.
But it leaves open a trap where people accidentally sneak in whatever prior probabilities they want—I think you fall into this on the Sleeping Beauty problem.
I see this as explicitly not happening. nyan_sandwich says:
No update happens in the Doomsday Argument; both glorious futures and impending doom are consistent with my existence, their relative probability comes from other reasoning.
“Other reasoning” including whatever prior probabilities were there before.