You write: However, let’s suppose you picked two sequences 000 and 001 and pre-committed to bet if you saw either of those sequences. Then the odds of betting if tails occurs and the observations are independent would become: 1/4 + 1/4 - 1/16 = 7/16. This would lead the probability ratio to become 4/7. Now, the other two probabilities (always different, always the same) remain the same, but the point is that the probability of heads depends on the number of sequences you pre-commit to guess on. If you pre-committed to guess for any sequences, then the probability becomes 1/2.
This makes no sense to me. What do you mean by “the odds of betting”? Betting on what? And why are we trying to assign probabilities to Beauty making bets? As a rational agent, she usually makes the correct bets, rather than randomly choosing a bet. And whatever Beauty is betting on, what is the setup regarding what happens if she makes different betting decisions on Monday and Tuesday?
Part of that was a typo I’ve now fixed—I meant to say “guess” instead of “bet”. She is making a guess related to whether the coin came up heads or tails; I haven’t introduced a payoff scheme.
I wasn’t saying that she randomly chooses a bet/guess—just that if she only guesses conditionally, we can calculate the odds of her choosing to guess. For example, suppose you toss two coins and I pre-commit to guess that both are heads only if the first one comes up heads. Then I only guess in half the cases.
I’m assuming that Beauty is following a deterministic guessing scheme, so this issue doesn’t come up. Instead of thinking about Beauty as a person, we can just as easily make Beauty a computer program.
Also, I edited my comment to now say, “If you pre-committed to guess regardless of the sequence, then the probability becomes 1/2.”
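To make the numbers in the quoted calculation concrete, here is a minimal Monte Carlo sketch of the scheme as I read it: one awakening on heads, two on tails, three independent fair bits observed per awakening, and a guess made only when the observed bits fall in a pre-committed set. The setup and the function names are my own reading of the thread, not something specified by either commenter.

```python
import random

def run_trial(precommit):
    """One run of the experiment: a fair coin is tossed; heads gives one
    awakening, tails gives two. On each awakening Beauty observes three
    independent fair bits and guesses only if the bit string is in her
    pre-committed set. Returns the coin result and whether she guessed
    at least once."""
    coin = random.choice(["heads", "tails"])
    awakenings = 1 if coin == "heads" else 2
    guessed = any(
        "".join(random.choice("01") for _ in range(3)) in precommit
        for _ in range(awakenings)
    )
    return coin, guessed

def guess_rates(precommit, trials=200_000):
    """Estimate P(guess at least once | heads) and P(guess at least once | tails)."""
    counts = {"heads": [0, 0], "tails": [0, 0]}  # [times guessed, total trials]
    for _ in range(trials):
        coin, guessed = run_trial(precommit)
        counts[coin][0] += guessed
        counts[coin][1] += 1
    return {c: g / n for c, (g, n) in counts.items()}

# Pre-committing to {000, 001}: roughly 1/4 on heads and 7/16 on tails,
# so the heads:tails ratio of guessing probabilities is roughly 4/7.
rates = guess_rates({"000", "001"})
print(rates, rates["heads"] / rates["tails"])

# Pre-committing to guess regardless of the sequence: both rates are 1,
# so restricting attention to runs where she guessed leaves P(heads) = 1/2.
print(guess_rates({f"{i:03b}" for i in range(8)}))
```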
I’m still baffled. Why aren’t we just talking about what probabilities Beauty assigns to various possibilities, at various times? Beauty has nothing much else to do; she can afford to think about what the probabilities should be every time, not just when she observes 1, 1, 1, or a coin comes up Heads, or whatever. I suspect that you think her “guessing” (why that word, rather than “assigning a probability”?) only some of the time somehow matters, but I don’t see how...
I’d rather that Beauty not be a computer program. As my original comment discusses, that is not the usual Sleeping Beauty problem. If your answer depends on Beauty being a program, not a person, then it is not an answer to the usual problem.
The point is to clarify what exactly it is that Ksvanhorn calculated. If we had decided at the start that we only cared about cases where Sleeping Beauty experienced <WAKE UP, 1, 1, 1> at least once and we wanted to calculate the probability that the coin would come up heads within this particular scope, then the maths would proceed as per Ksvanhorn’s calculations. Do you disagree?
“Beauty has nothing much else to do; she can afford to think about what the probabilities should be every time”—Well, if she pre-commits to guess whenever she wakes up, experiences any stream of events, and is then interviewed, the probability would end up being 1/2.
I explained in another comment that this is just about picking the reference class, which I believe to be necessary for solving anthropic problems: “For example, in anthropic problems there’s often debate about whether our reference class should include all sentient beings or all humans or all humans with a certain level of intellectual ability. Similarly, the question here is whether our reference class is all agents who encounter a boy born on a Tuesday on at least one day, or all agents who encounter a boy. I see the second as much more useful, unless you’ll only be offered an option if at least one boy was born on a Tuesday.”
Is the reference class all agents or all agents who experience <WAKE UP, 1, 1, 1>?
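For what it’s worth, the boy-born-on-a-Tuesday example is presumably the standard two-child puzzle; a small enumeration (purely illustrative, and not tied to the Sleeping Beauty setup above) shows how much the answer moves with the choice of reference class: conditioning on “at least one boy” gives a probability of two boys of 1/3, while conditioning on “at least one boy born on a Tuesday” gives 13/27.

```python
from fractions import Fraction
from itertools import product

# Each child is a (sex, weekday) pair, all 14 combinations equally likely;
# enumerate every ordered pair of children.
children = list(product("BG", range(7)))
families = list(product(children, repeat=2))

def p_two_boys(condition):
    """P(both children are boys | the family satisfies the given condition)."""
    kept = [f for f in families if condition(f)]
    both_boys = [f for f in kept if all(sex == "B" for sex, _ in f)]
    return Fraction(len(both_boys), len(kept))

# Reference class: at least one boy -> 1/3
print(p_two_boys(lambda f: any(sex == "B" for sex, _ in f)))

# Reference class: at least one boy born on a Tuesday (day 1, arbitrarily) -> 13/27
print(p_two_boys(lambda f: any(sex == "B" and day == 1 for sex, day in f)))
```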
Well, I think the whole “reference class” thing is a mistake. By using FNC, one can see that all non-fantastical problems of everyday life that might appear to involve selection effects for which a “reference class” is needed can in fact be solved correctly using standard probability theory, if one doesn’t ignore any evidence. So it’s only in the fantastical problems that reference classes might appear useful. But given the fatal flaw that the exact reference class matters while there is no basis for choosing a particular one, the whole concept is of no use for fantastical problems either.
FNC?
“But given the fatal flaw that the exact reference class matters while there is no basis for choosing a particular one, the whole concept is of no use for fantastical problems either”—well, I plan to write up a post on this soon, but I don’t think that the reference class is as complex as people make it out to be in most cases. If you’re deciding whether to take action A, and you need to calculate the probability accounting for anthropic effects, you just consider the population who can take action A.
Well, I guess I’ll have to wait for the details, but off-hand it doesn’t seem that this will work. If action A is “have another child”, and the issue is that you don’t want to do that if the child is going to die when the Earth is destroyed soon in a cataclysm, then action A is one that can be taken by a wide variety of organisms, past and present, going back hundreds of millions of years. But many of these you would probably not regard as having an appropriate level of sentience, and some of those you might regard as sentient seem so different from humans that including them in the reference class seems bizarre. Any line drawn will necessarily be vague, leading to vagueness in the probabilities, perhaps by factors of ten or more.
FNC = Full Non-indexical Conditioning, the method I advocate in my paper.