One thing should be noted: while Adam's argument is influential, especially since it was (to my knowledge) the first to point out that halfers must either reject Bayesian updating upon learning it is Monday, or accept that a fair coin yet to be tossed has a probability other than 1/2, thirders in general disagree with it in some crucial ways. Most notably, Adam argued that there is no new information when waking up in the experiment. In contrast, most thirders, endorsing some version of SIA, would say that waking up in the experiment is evidence favouring Tails, which has more awakenings. Targeting Adam's argument specifically is therefore not very effective.
In your incubator experiment, thirders in general would find no problem: waking up is evidence favouring Tails, so P(T)=2/3; finding out it is Room 1 is evidence favouring Heads, so P(T) decreases to 1/2.
Here is a model that might interest halfers. You participate in this experiment: the experimenter tosses a fair coin. If Heads, nothing happens and you sleep through the night uneventfully. If Tails, they split you down the middle into two halves, completing each half by cloning the missing part onto it. The procedure is accurate enough that memory is preserved in both copies. Imagine yourself waking up the next morning: you can't tell whether anything happened to you, whether either of your halves is the same physical piece as yesterday, or whether there is another physical copy in another room. But regardless, you can participate in the same experiment again. The same thing happens when you find yourself waking up the next day, and so on. As this continues, you will count about an equal number of Heads and Tails among the experiments you have subjective experiences of.
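A sketch of that iterated splitting (hypothetical code, my own; it assumes, as the setup implies, that every copy experiences exactly one fair toss per round, so which half "you" continue as never affects your record):

```python
import random

def follow_one_lineage(rounds=100000):
    """Follow one copy's subjective record through repeated experiments.

    Each round delivers one fair toss to every copy; on Tails you are
    split, but which half you continue as is irrelevant to the record,
    so we do not need to track it.
    """
    record = []
    for _ in range(rounds):
        coin = 'Heads' if random.random() < 0.5 else 'Tails'
        record.append(coin)
    return record

record = follow_one_lineage()
heads_fraction = record.count('Heads') / len(record)
print(heads_fraction)  # ~0.5
```

As expected, the subjective record of any single lineage shows roughly equal Heads and Tails.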
Counting subjective experience does not necessarily lead to Thirderism.
Thirders in general disagree with it in some crucial ways. Most notably, Adam argued that there is no new information when waking up in the experiment. In contrast, most thirders, endorsing some version of SIA, would say that waking up in the experiment is evidence favouring Tails, which has more awakenings. Targeting Adam's argument specifically is therefore not very effective.
I think there is a weird knot of contradictions hidden there. On one hand, Elga's mathematical model doesn't include anything about awakening. But then people rationalize that the update on awakening is what justifies this model's conclusion that the probability of a fair coin landing Tails is always 2/3, instead of noticing that the model simply returns contradictory results.
In your incubator experiment, thirders in general would find no problem: waking up is evidence favouring Tails, so P(T)=2/3; finding out it is Room 1 is evidence favouring Heads, so P(T) decreases to 1/2.
Which would be a mistake (unless we once again shifted to the anthropical motte), because knowing that you are in Room 1 should update you to 2/3 Heads, as I've shown here:
coin_guess = []
for n in range(100000):
    room, coin = incubator()  # incubator() returns your room and the coin result
    beauty_knows_room1 = (room == 1)
    if beauty_knows_room1:
        coin_guess.append(coin == 'Heads')
print(coin_guess.count(True)/len(coin_guess)) # 0.6688515435956072
It's quite curious that "updating on existence" here amounts to not updating on actual evidence. A Thirder who figured out that they are in Room 1 ends up with the same credence as a Halfer who didn't.
To thirders, your simulation is incomplete. It should first include randomly choosing a room and finding it occupied. That will push the probability of Tails to 2/3. Learning that it is Room 1 will then push it back to 1/2.
Code for incubator function includes random choice of a room on Tails

That's not it. In your simulation you give equal chances to Heads and Tails, and then subdivide Tails into two equiprobable outcomes T1 and T2 while keeping all the probability of Heads as H1. It's essentially a simulation based on SSA. Thirders would say that is the wrong model because it only considers cases where the room is occupied: H2 never appeared in your model. Thirders suggest there is new information when waking up in the experiment precisely because waking up rejects H2. So the simulation should divide both Heads and Tails into the equiprobable outcomes H1, H2, T1, T2. Waking up rejects H2, which pushes P(T) to 2/3, and then learning it is Room 1 pushes it back down to 1/2.
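For illustration, the H1/H2/T1/T2 model just described can be sketched as a simulation (hypothetical code in the style of the thread's snippets, not the post's original incubator function):

```python
import random

# Divide both Heads and Tails into equiprobable centred possibilities:
# (coin, room) pairs H1, H2, T1, T2, each with prior 1/4.
def thirder_sample():
    coin = 'Heads' if random.random() < 0.5 else 'Tails'
    room = 1 if random.random() < 0.5 else 2
    return coin, room

awake_tails = []   # guesses of Tails, given that you woke up at all
room1_tails = []   # guesses of Tails, given you also learned it is Room 1
for _ in range(100000):
    coin, room = thirder_sample()
    if coin == 'Heads' and room == 2:
        continue  # H2: on Heads, Room 2 stays empty, so nobody wakes there
    awake_tails.append(coin == 'Tails')
    if room == 1:
        room1_tails.append(coin == 'Tails')

print(sum(awake_tails) / len(awake_tails))  # ~2/3: waking up rejects H2
print(sum(room1_tails) / len(room1_tails))  # ~1/2: Room 1 pushes it back
```

Among the surviving outcomes H1, T1, T2, Tails has weight 2/3; restricting further to Room 1 leaves H1 and T1, giving 1/2.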
It's the other way around. My simulation is based on the experiment as stated. SSA then tries to generalise this principle by assuming that all possible experiments are the same, which is clearly wrong.
The thirder position in incubator-like experiments requires assuming that SBs are randomly sampled, as if God selects a soul from the set of all possible souls and materialises it when an SB is created, and thus thirdism inevitably fails when that's not the case. I'm going to highlight this in the next post, which explores multiple similar but slightly different scenarios.
My simulation is based on the experiment as stated.
No.
Your simulations for the regular Sleeping Beauty problem are good: they acknowledge that multiple Sleeping Beauty awakenings exist in the event of Tails and then weight them in different ways according to different philosophical assumptions about how the weighting should occur.
Your simulation for the incubator version on the other hand, does not acknowledge that there are multiple sleeping beauties in the event of tails, and skips directly to sampling between them according to your personal assumptions.
If you were to do it properly, you would find that it is mathematically equivalent to the regular version, with the same sampling/weighting assumptions available, each giving the same answer as in the regular version.
Note: mathematically equivalent does not mean philosophically equivalent. One could still be a halfer for one and a thirder for the other, based on which assumptions you prefer in which circumstances; it's just that both halfer and thirder assumptions can exist in both cases and will work equivalently.
Your simulation for the incubator version on the other hand, does not acknowledge that there are multiple sleeping beauties in the event of tails, and skips directly to sampling between them according to your personal assumptions.
The function returns your room and the result of the coin toss. That's enough to determine whether the other Beauty exists and, if so, where she is. You can also construct the thirders' scoring rule for the Anthropical Motte without any problem:
coin_guess = []
for n in range(100000):
    room, coin = incubator()
    coin_guess.append(coin == 'Heads')
    if coin == 'Tails':  # count the other Beauty's guess as well
        coin_guess.append(coin == 'Heads')
print(coin_guess.count(True)/len(coin_guess))
In principle I could've added more information to the return value for extra fluff, but the core logic would still be the same.
The incubator code generates a coin toss and a room. If the first coin toss is tails, the room is selected randomly based on a second coin toss, which does not acknowledge that both rooms actually occur in the real experiment, instead baking in your own assumptions about sampling.
Then, your "thirders scoring rule" takes only the first coin toss from the incubator code, throwing out all additional information, to generate a set of observations to be weighted according to thirder assumptions. While this "thirders scoring rule" does correctly reflect thirder assumptions, that does not make the original incubator code compatible with thirder assumptions, since all you used from it was the initial coin toss. You could have just written
coin = "Heads" if random.random() >= 0.5 else "Tails"
in place of
room, coin = incubator()
A better version of the incubator code would be to use exactly the same code as for the “classic” version but just substituting “Room 1“ for “Monday” and “Room 2” for “Tuesday”.
A better version of the incubator code would be to use exactly the same code as for the “classic” version but just substituting “Room 1“ for “Monday” and “Room 2” for “Tuesday”
No, absolutely not. Such modelling would destroy all the asymmetry between the two experiments which I'm talking about in the post.
In the classic version the same person experiences both Monday and Tuesday, even if she doesn't remember it later. In the incubator there are two different people. In the classic version you can toss the coin after the Monday awakening has already happened. In the incubator you can't toss it once the first person has already been created. That's why
P(Heads|Monday)=1/2
but
P(Heads|Room1)=2/3
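The 2/3 figure follows directly from Bayes' theorem under the incubator experiment as stated (my reading of it: on Heads the single Beauty is created in Room 1; on Tails you are equally likely to be the Beauty in either room). A minimal sketch:

```python
# Likelihoods under the incubator experiment as stated (my reading:
# Heads creates one Beauty in Room 1; Tails creates two, so you are
# equally likely to be in either room):
p_heads = 0.5
p_room1_given_heads = 1.0
p_room1_given_tails = 0.5

# P(Room 1) by total probability, then Bayes' theorem:
p_room1 = p_heads * p_room1_given_heads + (1 - p_heads) * p_room1_given_tails
p_heads_given_room1 = p_heads * p_room1_given_heads / p_room1
print(p_heads_given_room1)  # 0.6666666666666666
```

The asymmetry with the classic version comes from the likelihoods: P(Monday|Heads) = P(Monday|Tails) = 1, so learning "Monday" moves nothing, while P(Room 1|Heads) = 1 exceeds P(Room 1|Tails) = 1/2.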
You could have just written
coin = "Heads" if random.random() >= 0.5 else "Tails"
No, I couldn’t. I also need to know which room I am in.
Then, your “thirders scoring rule” takes only the first coin toss from the incubator code, throwing out all additional information, to generate a set of observations to be weighted according to thirder assumptions.
Whatever "additional information" thirders assume there is, it is either represented in the result of the coin toss and the room I am in, or their assumptions are not applicable to the incubator experiment.
Anyway, here is a more explicit implementation with both scoring rules:
import random

def incubator(heads_chance=0.5):
    if random.random() >= heads_chance:  # result of the coin toss
        coin = 'Tails'
        my_room = 1 if random.random() >= 0.5 else 2  # room sample
        other_room = 2 if my_room == 1 else 1
        return {'my_room': my_room, 'other_room': other_room}, coin
    else:
        coin = 'Heads'
        my_room = 1
        return {'my_room': my_room}, coin

# Halfer scoring rule: count only your own guess
coin_guess = []
for n in range(100000):
    rooms, coin = incubator()
    my_room = rooms['my_room']
    for room in rooms.values():
        if room == my_room:
            coin_guess.append(coin == 'Heads')
print(coin_guess.count(True)/len(coin_guess))  # ~1/2

# Thirder scoring rule: count every Beauty's guess
coin_guess = []
for n in range(100000):
    rooms, coin = incubator()
    for room in rooms.values():
        coin_guess.append(coin == 'Heads')
print(coin_guess.count(True)/len(coin_guess))  # ~1/3
The results are all the same. You in particular have only a 50% chance of guessing the result of the coin toss: that's the bailey. But if you also count the other person's guesses (that is, construct the thirders' scoring rule), together you get 2/3 for Tails: that's the motte.
Such modelling would destroy all the asymmetry between the two experiments which I'm talking about in the post.
Exactly. There is no asymmetry (mathematically). I agree in principle that one could make different assumptions in each case, but I think making the same assumptions is probably common without any motte/bailey involved and equivalent assumptions produce mathematically equivalent results.
As this is the key point in my view, I’ll point out how the classic thirder argument is the same/different for the incubator case and relegate brief comments on your specific arguments to a footnote[1].
Here is the classic thirder case as per Wikipedia:
The thirder position argues that the probability of heads is 1⁄3. Adam Elga argued for this position originally[2] as follows: Suppose Sleeping Beauty is told and she comes to fully believe that the coin landed tails. By even a highly restricted principle of indifference, given that the coin lands tails, her credence that it is Monday should equal her credence that it is Tuesday, since being in one situation would be subjectively indistinguishable from the other. In other words, P(Monday | Tails) = P(Tuesday | Tails), and thus
P(Tails and Tuesday) = P(Tails and Monday).
Suppose now that Sleeping Beauty is told upon awakening and comes to fully believe that it is Monday. Guided by the objective chance of heads landing being equal to the chance of tails landing, it should hold that P(Tails | Monday) = P(Heads | Monday), and thus
P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday).
Since these three outcomes are exhaustive and exclusive for one trial (and thus their probabilities must add to 1), the probability of each is then 1⁄3 by the previous two steps in the argument.
Here is the above modified to apply to the incubator version. Most of it still applies straightforwardly, but I've bolded the most questionable step:
The thirder position argues that the probability of heads is 1⁄3. Adam Elga would argue for this position (modified) as follows: Suppose Sleeping Beauty is told and she comes to fully believe that the coin landed tails. By even a highly restricted principle of indifference, given that the coin lands tails, her credence that she is in Room 1 should equal her credence that she is in Room 2, since being in one situation would be subjectively indistinguishable from the other. In other words, P(Room 1 | Tails) = P(Room 2 | Tails), and thus
P(Tails and Room 1) = P(Tails and Room 2).
Suppose now that Sleeping Beauty is told upon awakening and comes to fully believe that she is in Room 1. Guided by the objective chance of heads landing being equal to the chance of tails landing, it should hold that P(Tails | Room 1) = P(Heads | Room 1), and thus
P(Tails and Room 2) = P(Tails and Room 1) = P(Heads and Room 1).
Since these three outcomes are exhaustive and exclusive for one trial (and thus their probabilities must add to 1), the probability of each is then 1⁄3 by the previous two steps in the argument.
Now, since as you point out you can't make the decision to add Room 2 later in the incubator experiment as actually written, this bolded step is more questionable than in the classic version. However, one can still make the argument, and there is no contradiction with the classic version: no motte/bailey. I note that you could easily modify the incubator version to add Room 2 later; in that case, Elga's argument would apply pretty much equivalently to the classic version. Maybe you think changing the timing to make it simultaneous vs. nonsimultaneous should result in different outcomes; that's fine, it's your personal opinion, but it's not irrational for a person to think it doesn't make a difference!
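For what it's worth, the final arithmetic step is identical in both versions of the argument; spelled out explicitly (a trivial check, not from the thread):

```python
from fractions import Fraction

# Elga's two equalities give P(T&Tue) = P(T&Mon) = P(H&Mon) = p, and
# the three outcomes are exhaustive and exclusive, so 3p = 1.
p = Fraction(1, 3)
p_heads = p       # Heads only ever occurs with Monday (or Room 1)
p_tails = 2 * p   # Tails occurs with both days (or both rooms)
assert p_heads + p_tails == 1
print(p_heads, p_tails)  # 1/3 2/3
```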
same person/different people: a concept that exists in the map, not the territory. Physical continuity/discontinuity, on the other hand, does exist in the territory, but its relevance should be argued, not assumed; and certainly, if one wants to consider someone irrational for discounting that relevance, that would need a lot of justification!
timing of various events: exists in the territory, but again its relevance should not be assumed, and it would need justification to consider discounting it as irrational.
P(Heads|Monday)=1/2
but
P(Heads|Room1)=2/3
So you claim; but consistent assumptions could be applied to make them equivalent (such as my above modification of Elga's argument, which still applies equivalently in both cases).
No, I couldn’t. I also need to know which room I am in.
You literally just threw away that information (in the “thirders scoring rule” which is where I suggested you could make that replacement)!
Whatever "additional information" thirders assume there is, it is either represented in the result of the coin toss and the room I am in, or their assumptions are not applicable to the incubator experiment.
Yes, but… you threw away the room information in your “thirders scoring rule”!
Anyway, here is a more explicit implementation with both scoring rules:
That works. Note that your new thirder scoring rule still doesn’t care what room “you” are in, so your initial sampling (which bakes in your personal assumptions) is rendered irrelevant. The classic code also works, and in my view correctly represents the incubator situation with days changed to rooms.
You in particular
Another case of: concept exists in map, not territory (unless we are already talking about some particular Sleeping Beauty instance).[2]
While I’m on the subject of concepts that exist in the map, not the territory, here’s another one:
Probability (at least the concept of some particular subjective probability being “rational”)
In my view, a claim that some subjective probability is rational amounts to something like claiming that that subjective probability will tend to pay off in some way...which is why I consider it to be ambiguous in Sleeping Beauty since the problem is specifically constructed to avoid any clear payoffs to Sleeping Beauty’s beliefs. FWIW though, I do think that modifications that would favour thirderism (such as Radford Neal’s example of Sleeping Beauty deciding to leave the room) tend to seem more natural to me personally than modifications that would favour halferism. But that’s a judgement call and not enough for me to rule halferism out as irrational.
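To make the payoff ambiguity concrete, here is a hypothetical sketch (my own, not from the thread) comparing a once-per-experiment score with a once-per-awakening score for a guess of Heads:

```python
import random

per_experiment, per_awakening = [], []
for _ in range(100000):
    coin = 'Heads' if random.random() < 0.5 else 'Tails'
    awakenings = 1 if coin == 'Heads' else 2
    per_experiment.append(coin == 'Heads')                 # scored once per run
    per_awakening.extend([coin == 'Heads'] * awakenings)   # scored once per waking

print(sum(per_experiment) / len(per_experiment))  # ~1/2: halfer payoff structure
print(sum(per_awakening) / len(per_awakening))    # ~1/3: thirder payoff structure
```

Neither scoring rule is forced by the problem statement itself, which is exactly why I consider the "rational credence" question ambiguous here.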