I need more clarification. Sorry. I do think we’re getting somewhere...
The experimenters fix 2 distinct constants, k1 and k2, each in {1,2,...,20}, sedate you, roll a D20, and flip a coin. If the coin comes up tails, they will wake you on days k1 and k2. If the coin comes up heads and the D20 roll is in {k1,k2}, they will wake you on day 1.
Do you agree that P(H|W)=2/22 in this case?
I do.
No; P(H|W) = 1⁄21
Multiple ways to see this:
1)
Under heads, I expect to be woken 1⁄10 of the time.
Under tails, I expect to be woken twice.
Hence, on average, for every waking after a head I am woken 20 times after a tail.
Ergo 1⁄21.
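As a quick check of the frequency argument above, here is a Monte Carlo sketch (the constants k1 = 3, k2 = 17, the seed, and the trial count are arbitrary choices of mine, not part of the setup):

```python
import random

random.seed(0)
k1, k2 = 3, 17          # any two distinct values in 1..20
heads_wakings = 0
tails_wakings = 0
for _ in range(1_000_000):
    heads = random.random() < 0.5
    roll = random.randint(1, 20)
    if heads:
        if roll in (k1, k2):
            heads_wakings += 1   # woken once, on day 1
    else:
        tails_wakings += 2       # woken on day k1 and on day k2

# Fraction of all wakings that follow heads
frac = heads_wakings / (heads_wakings + tails_wakings)
print(frac)   # close to 1/21 ≈ 0.0476
```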
2)
Internally split the game into two single-constant games, one for k1 and one for k2. We can simply play them sequentially (with the same die roll). When I am woken I do not know which of the two games I am playing. We both agree that in the single-constant game P(H|W) = 1⁄21.
It’s reasonably clear that playing two single-constant games in series (with the same die roll and coin flip) reproduces the 2 constant game. The correlation between the roll and flip in the two games doesn’t affect the expectations, and since you have complete uncertainty over which game you’re in (courtesy of the amnesia), the correlation of your current state with a state you have no information about is irrelevant.
P(H|W ∩ game i) = 1⁄21, so P(H|W) = 1⁄21, as the union over all i of (W ∩ game i) is W. At some level this is why I introduced PSB: it seems clearer that this should be the case when the number of wakings is bounded to 1.
3)
Being woken implies that either W1 or W2 (being woken for the first time or the second time, respectively) has occurred. In general, note that the expected count of an event is a probability (and vice versa) if the number of times the event occurs is in {0,1} (trivial using the frequentist definition of probability; under the credence view it’s true for betting reasons).
P(W1 | H) = 1⁄10, P(W2 | H) = 0
P(W1 | T) = 1, P(W2 | T) = 1, from the experimental setup.
Hence P(H|W1) = 1⁄11, P(H|W2) = 0.
You’re woken for the first time in 11⁄20 of experiments and for the second time in 1⁄2 of experiments, so P(W1 | I am woken) = 11⁄21.
P(H | I am woken) = P(H ∩ W1 | I am woken) + P(H ∩ W2 | I am woken) = P(H | W1 ∩ I am woken)·P(W1 | I am woken) + 0 = (1⁄11)·(11⁄21) = 1⁄21.
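The W1/W2 decomposition above can be reproduced in exact arithmetic; a sketch using Python’s `fractions` (the variable names are mine):

```python
from fractions import Fraction as F

# Conditional probabilities of first/second waking, from the setup.
P_W1_H, P_W2_H = F(1, 10), F(0)   # heads: woken at most once
P_W1_T, P_W2_T = F(1), F(1)       # tails: woken on both days
P_H = P_T = F(1, 2)

# P(H|W1) by Bayes; W1 occurs at most once per experiment, so it is
# an ordinary event.
P_H_W1 = (P_W1_H * P_H) / (P_W1_H * P_H + P_W1_T * P_T)   # 1/11

# Expected number of first and second wakings per experiment.
E_W1 = P_W1_H * P_H + P_W1_T * P_T    # 11/20
E_W2 = P_W2_H * P_H + P_W2_T * P_T    # 1/2
P_W1_given_woken = E_W1 / (E_W1 + E_W2)   # 11/21

print(P_H_W1 * P_W1_given_woken)   # 1/21
```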
The issue you’ve raised with this seems to be that you would either:
set P(W1 | I am woken) = 1, or
set P(W1 | T) = P(W2 | T) = 1⁄2 [so P(H|W1) = 1⁄6], and set P(W1 | I am woken) = 6⁄11.
My problem with this is that if P(W1 | I am woken) ≠ 11⁄21, you’re poorly calibrated. Your position appears to be that this is because you’re being “forced to make the bet twice in some circumstances but not others”. Hence what you’re doing is clipping the number of times a bet is made to {0,1}, at which point expected counts of outcomes are probabilities of outcomes. I think such an approach is wrong, because the underlying problem is that the counts of event occurrences conditional on H or T aren’t constrained to be in {0,1} anymore.
This is why I’m not concerned about the “probabilities” being over-unity. Indeed you’d expect them to be over-unity, because the long-run number of wakings exceeds the long-run number of experiments. In the limit you get a well-defined over-unity probability, under the frequentist view. Betting odds aren’t constrained to [0,1] either, so again you wouldn’t expect credence to stay in [0,1]. It is bounded in [0,2] in SB or your experiment, because the maximum number of winning events in a branch is 2.
As I see it, the 1⁄21 answer (or 1⁄3 in SB) is the only plausible answer because it holds when we stack up multiple runs of the experiment in series or equivalently have uncertainty over which constant is being used in PSB. The 1⁄11 (equiv. 1⁄2) answer doesn’t have this property, as is seen from 1⁄21 going to 1⁄11 from nothing but running two experiments of identical expected behaviour in series...
Credence isn’t constrained to be in [0,1]???
It seems to me that you are working very hard to justify your solution. It’s a solution by argument/intuition. Why don’t you just do the math?
I just used Bayes rule. W is an awakening. We want to know P(H|W), because the question is about her subjective probability when (if) she is woken up.
To get P(H|W), we need the following:
P(W|H)=2/20 (if heads, wake up if D20 landed on k1 or k2)
P(H)=1/2 (fair coin)
P(W|T)=1 (if tails, woken up regardless of the result of the die roll)
P(T)=1/2 (fair coin)
Using Bayes rule, we get:
P(H|W) = (2/20)(1/2) / [(2/20)(1/2) + (1)(1/2)] = 1⁄11
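As a sketch, the computation above in exact rational arithmetic (this only checks the arithmetic given the stated conditional probabilities; the dispute below is about the modelling of W, not the sums):

```python
from fractions import Fraction as F

# Bayes rule with W read as "woken at least once".
P_W_H = F(2, 20)
P_W_T = F(1)
P_H = P_T = F(1, 2)

P_H_W = (P_W_H * P_H) / (P_W_H * P_H + P_W_T * P_T)
print(P_H_W)   # 1/11
```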
With your approach, you avoid directly applying Bayes’ theorem, and you argue that it’s ok for credence to be outside of [0,1]. This suggests to me that you are trying to derive a solution that matches your intuition. My suggestion is to let the math speak, and then to figure out why your intuition is wrong.
You and I both agree that Bayes implies 1⁄21 in the single constant case. Consider the 2 constant game as two single constant games in series, with uncertainty over which one you are in (k1 and k2 here denoting the mutually exclusive events “this is the k1 game” and “this is the k2 game”):
P(H | W) = P(H ∩ k1 | W) + P(H ∩ k2 | W) = P(H | k1 ∩ W)·P(k1|W) + P(H | k2 ∩ W)·P(k2|W) = (1⁄21)·(1⁄2) + (1⁄21)·(1⁄2) = 1⁄21
This is the logic that to me drives PSB to SB and the 1⁄3 solution. I worked it through in SB by conditioning on the day (slightly different but not substantially).
I have had a realisation. You work directly with W; I work with subsets of W that can occur at most once in each branch, and apply total probability.
Formally, I think what is going on is this (working with simple SB): we have a sample space S = {H,T}.
“You have been woken” is not an event, in the sense of being a set of experimental outcomes. “You will be woken at least once” is, but these are not the same thing.
“You will be woken at least once” is a nice straightforward event, in the sense of being a set of experimental outcomes: {H,T}. “You have been woken” should be considered formally as the multiset {H,T,T}. Formally, just working through with multisets wherever sets are used as events in probability theory, we recover all of the standard theorems (including Bayes) without issue.
What changes is that, since P(S) = 1 and there are multisets X that strictly contain S, we get P(X) > 1 for such X.
Hence P({H,T,T}) = 3⁄2; P({H}|{H,T,T}) = 1⁄3.
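The simple SB numbers above can be sketched in exact arithmetic, treating a multiset as a list and summing the measure of each element with multiplicity (`mass` is a hypothetical helper of mine, not standard library):

```python
from fractions import Fraction as F

def mass(multiset, p):
    """Measure of a multiset: sum p(outcome) counted with multiplicity."""
    return sum(p[outcome] for outcome in multiset)

p = {"H": F(1, 2), "T": F(1, 2)}
W = ["H", "T", "T"]          # "you have been woken", as a multiset

mass_W = mass(W, p)                  # 3/2
cond = mass(["H"], p) / mass_W       # 1/3
print(mass_W, cond)
```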
In the 2 constant PSB setup you suggest, we have S = {H,T} × {1,...,20} and
W = {(H,k1), (H,k2), (T,1), (T,1), (T,2), (T,2), ..., (T,20), (T,20)}
And P(H|W) = 1⁄21 without issue.
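The 1⁄21 can be checked by the same multiset-measure recipe; a sketch (k1 = 3, k2 = 17 are arbitrary illustrative choices, and `mass` is a hypothetical helper of mine):

```python
from fractions import Fraction as F

def mass(multiset, p):
    """Measure of a multiset: sum p(outcome) counted with multiplicity."""
    return sum(p[outcome] for outcome in multiset)

k1, k2 = 3, 17
# Each atom of S = {H,T} x {1..20} has measure 1/40.
p = {(c, d): F(1, 40) for c in "HT" for d in range(1, 21)}

# Heads: woken once iff the roll is k1 or k2. Tails: every roll leads
# to two wakings, so each (T, d) appears twice in the multiset.
W = [("H", k1), ("H", k2)] + 2 * [("T", d) for d in range(1, 21)]
W_heads = [w for w in W if w[0] == "H"]

cond = mass(W_heads, p) / mass(W, p)
print(cond)   # 1/21
```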
My statement is that this more accurately represents the experimental setup: when you wake, conditioned on all background information, you don’t know how many times you’ve been woken before, yet that count changes the conditional probabilities of H and T. If you merely use background knowledge of “You have been woken at least once”, and squash all of the events “You are woken for the nth time” into a single event by taking their union, then you discard information.
This is closely related to my earlier (intuition) that the problem was something to do with linearity.
In sets, union and intersection are only linear when working on some collection of atomic (pairwise disjoint) sets, but they are generally linear on multisets. [e.g. (A ∪ B) \ B ≠ A in general for sets.]
Observe that the approach I take of splitting “events” down to disjoint things that occur at most once is precisely taking a multiset event apart into well behaved events and then applying probability theory.
What was concerning me is that the true claim that P({H,T}|T) = 1 seemed to discard pertinent information (i.e. the potential for waking on the second day). With W as the multiset {H,T,T}, P(W|T) = 2. You can regard this as the expected number of times you see Tails, or as the extension of probability to multisets.
The difference in approach is that you have to put the double counting of waking given tails in as a boost to payoffs given Tails, which seems odd as from the point of view of you having just been woken you are being offered immediate take-it-or-leave-it odds. This is made clearer by looking at the twins scenario; each person is offered at most one bet.