Unlike Jack, I’m pessimistic about your proposal. I’ve already changed my mind not once but twice.
The interesting aspect is that this doesn’t feel like I’m vacillating. I have gone from relying on a vague and unreliable intuition in favor of 1⁄3 qualified with “it depends”, to being moderately certain that 1⁄2 was unambiguously correct, to having worked out how I was allocating all of the probability mass in the original problem and getting back 1⁄3 as the answer that I cannot help but think is correct. That, plus the meta-observation that no-one, including people I’ve asked directly (including yourself), has a rebuttal to my construction of the table, leaves me with a higher degree of confidence than I previously had in 1⁄3.
It now feels as if I’m justified in ignoring pretty much any argument which is “merely” a verbal appeal to one intuition or the other. Either my formalization corresponds to the problem as verbally stated or it doesn’t; either my math is correct or it isn’t. “Here I stand, I can do no other”—at least until someone shows me my mistake.
So I think I figured this whole thing out. Are people familiar with the type-token distinction and the ambiguities that result from it? If I have five copies of the book Catcher in the Rye and you ask me how many books I have, there is an ambiguity: I could say one or five. One refers to the type; “Catcher in the Rye is a coming-of-age novel” is a sentence about the type. Five refers to the number of tokens; “I tossed Catcher in the Rye onto the bookshelf” is a sentence about a token. The distinction is ubiquitous and leads to occasional confusion, enough that the subject is at the top of my Less Wrong to-do list. The type-token distinction becomes an issue whenever we introduce identical copies, and it dominates my views on personal identity.
In the Sleeping Beauty case, the amnesia means the experience of waking up on Monday and the experience of waking up on Tuesday, while token-distinct, are type-identical. If we decide the right thing to update on isn’t the token experience but the type experience, the calculations are really easy. The type experience “waking up” has P=1 for heads and tails. So the prior never changes. I think there are some really good reasons for worrying about types rather than tokens in this context, but I won’t go into them until I make sure the above makes sense to someone.
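In symbols, the update I have in mind is just Bayes’ theorem applied to the type experience (a minimal sketch; the notation is mine, with W standing for the type experience “waking up”):

$$P(H \mid W) = \frac{P(W \mid H)\,P(H)}{P(W \mid H)\,P(H) + P(W \mid T)\,P(T)} = \frac{1 \cdot \tfrac{1}{2}}{1 \cdot \tfrac{1}{2} + 1 \cdot \tfrac{1}{2}} = \frac{1}{2}$$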
How are you accounting for the fact that—on awakening—Beauty has lost information that she previously had—namely that she no longer knows which day of the week it is?
Maybe it’s just because I haven’t thought about this in a couple of weeks, but you’re going to have to clarify this. When does Beauty know which day of the week it is?
Before consuming the memory-loss drugs she knows her own temporal history. After consuming the drugs, she doesn’t. She is more uncertain—because her memory has been meddled with, and important information has been deleted from it.
Information wasn’t deleted. Conditions changed and she didn’t receive enough information about the change. There is a type (with a single token) that is Beauty before the experiment, and that type includes the property ‘knows what day of the week it is’; then the experiment begins and the day changes. During the experiment there is another type which is also Beauty, and this type has two tokens. This type only has enough information to narrow down the date to one of two days. But she still knows what day of the week it was when the experiment began; it’s just your usual indexical shift (instead of knowing the date now she knows the date then, but it is the same thing).
Her memories were DELETED. That’s the whole point of the amnesia-inducing drug.

Amnesia = memory LOSS: http://dictionary.reference.com/browse/Amnesia

Oh sure, the information contained in the memory of waking up is lost (though that information didn’t contain what day of the week it was, and you said “namely that she no longer knows which day of the week it is”). I still have zero idea of what you’re trying to ask me.
If she had not ever been given the drug she would be likely to know which day of the week it was. She would know how many times she had been woken up, interviewed, etc. It is because all such information has been chemically deleted from her mind that she has the increased uncertainty that she does.
I might have some issues with that characterization but they aren’t worth going into since I still don’t know what this has to do with my discussion of the type-token ambiguity.

It is what was missing from this analysis:

“The type experience “waking up” has P=1 for heads and tails. So the prior never changes.”

Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change.

K.
If she had not ever been given the drug she would be likely to know which day of the week it was. She would know how many times she had been woken up, interviewed, etc. It is because all such information has been chemically deleted from her mind that she has the increased uncertainty that she does.
Yes, counterfactually, if she hadn’t been given the drug on the second awakening she would have knowledge of the day. But she was given the drug. This meant losing the information and knowledge contained in the memory of the first awakening. But it doesn’t mean a loss of the knowledge of what day it is, she obviously never had that. It is because all her new experiences keep getting deleted that she is incapable of updating her priors (which were set prior to the beginning of the experiment). In type-theoretic terms:
If the drugs had not been administered she would not have had type experience “waking up” a second time. She would have had type experience “waking up with the memory of waking up yesterday”. If she had had that type experience then she would know what day it is.
Beauty probably knew what day it was before the experiment started. People often do know what day of the week it is.
You don’t seem to respond to my: “Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change.”
In this case, that is exactly what happens. Had Beauty not been given the drug, her estimates of p(heads) would be: 0.5 on Monday and 0.0 on Tuesday. Since her knowledge of what day it is has been eliminated by a memory-erasing drug, her probability estimate is intermediate between those figures—reflecting her new uncertainty in the face of the chemical deletion of relevant evidence.
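One way to make “intermediate” concrete (a sketch of mine, on the assumption that Beauty weights the two possible days by how often each occurs among her awakenings, so Monday counts twice as heavily as Tuesday):

$$p(\text{heads}) = P(\text{Mon}) \cdot 0.5 + P(\text{Tue}) \cdot 0.0 = \tfrac{2}{3} \cdot 0.5 + \tfrac{1}{3} \cdot 0.0 = \tfrac{1}{3}$$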
Beauty probably knew what day it was before the experiment started.
Yes. And throughout the experiment she knows what day it was before the experiment started. What she doesn’t know is the new day. This is the second or third time I’ve said this. What don’t you understand about an indexical shift?
“Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change.”
The knowledge that Beauty has before the experiment is not deleted. Beauty has a single anticipated experience going into the experiment. That anticipated experience occurs. There is no new information to update on.
You don’t seem to be following what I’m saying at all.
What you said was: “it doesn’t mean a loss of the knowledge of what day it is, she obviously never had that”. Except that she did have that—before the experiment started. Maybe you meant something different—but what readers have to go on is what you say.
Beauty’s memories are deleted. The opinions of an agent can change if they gain information—or if they lose information. Beauty loses information about whether or not she has had a previous awakening and interrogation. She knew that at the start of the experiment, but not during it—so she has lost information that she previously had—it has been deleted by the amnesia-inducing drug. That’s relevant information—and it explains why her priors change.
I’m going to try this one more time.

On Sunday, before the experiment begins, Beauty makes observation O1(a). She knows that O1 was made on a Sunday. She says to herself “I know what day it is now” (an indexical statement pointing to O1(a)). She also predicts the coin will flip heads with P=0.5 and predicts that the next experience she has after going to sleep will be O2.

Then she wakes up and makes observation O2(a). It is Monday, but she doesn’t know this because it could just as easily be Tuesday, since her memory of waking up on Monday will be erased. “I know what day it is now” is now false, not because knowledge was deleted but because of the indexical shift of ‘now’, which no longer refers to O1(a) but to O2(a). She still knows what day it was at O1(a); that knowledge has not been lost. Then she goes back to sleep and her memory of O2(a) is erased. But O2(a) includes no knowledge of what day it is (though combined with other information Beauty could have inferred what day it was, she never had that information).

Beauty wakes up on Tuesday and has observation O2(b). This observation is type-identical to O2(a) and exactly what she anticipated experiencing. If her memory had not been erased she would have had observation O3: waking up along with the memory of having woken up the previous day. This would not have been an experience Beauty would have predicted with P=1, and it would therefore require her to update her belief P(heads) from 0.5 to 0, as she would know it was Tuesday. But she doesn’t get to do that; she just has a token of experience O2. She still knows what day it was at O1(a), no knowledge has been lost. And she still doesn’t know what day it is ‘now’.
[For those following this, note that spatio-temporality is strictly a property of tokens (though we have a linguistic convention of letting types inherit the properties of tokens, as in “the red-breasted woodpecker can be found in North America”; what that really means is that tokens of the type ‘red-breasted woodpecker’ can be found in North America). This, admittedly, might lead to confusing results that need clarification, and I’m still working on that.]
I’ve been following, but I’m still nonplussed as to your use of the type-token distinction in this context. The comment of mine which was the parent for your type-token observation had a specific request: show me the specific mistake in my math, rather than appeal to a verbal presentation of a non-formal, intuitive explanation.
Take a bag with 1 red marble and 9 green marbles. There is a type “green marble” and it has 9 tokens. The experiences of drawing any particular green marble, while token-distinct, are type-identical. It seems that what matters when we compute our credence for the proposition “the next marble I draw will be green” is the tokens, not the types. When you formalize the bag problem accordingly, probability theory gives you answers that seem quite robust from a math point of view.
If you start out ignorant of how many marbles the bag has of each color, you can ask questions like “given that I just took two green marbles in a row, what is my credence in the proposition ‘the next marble I draw will be green’”. You can compute things like the expected number of green marbles left in the bag. In the bag problem, IOW, we are quantifying our uncertainty over tokens, while taking types to be a fixed feature of the situation. (Which of course is only a convention of this kind of exercise: with precise enough instruments we could distinguish all ten individual marbles.)
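For instance, here is a quick sketch in Python of that computation, under a uniform prior over the number of green marbles (the setup and numbers are mine, purely for illustration):

from fractions import Fraction

N = 10  # total marbles in the bag

# Uniform prior over g, the number of green marbles (0..N)
prior = {g: Fraction(1, N + 1) for g in range(N + 1)}

# Likelihood of drawing two greens in a row, without replacement
def two_greens(g):
    return Fraction(g, N) * Fraction(max(g - 1, 0), N - 1)

# Posterior over g given the two green draws
evidence = sum(prior[g] * two_greens(g) for g in prior)
posterior = {g: prior[g] * two_greens(g) / evidence for g in prior}

# Credence that the next draw is also green, and the expected
# number of green marbles left in the bag
p_next_green = sum(posterior[g] * Fraction(max(g - 2, 0), N - 2) for g in posterior)
green_left = sum(posterior[g] * max(g - 2, 0) for g in posterior)

print(p_next_green)  # 3/4
print(green_left)    # 6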
Statements like “information is gained” or “information is lost” are vague and imprecise, with the consequence that a motivated interpretation of the problem statement will support whichever statement we happen to favor. The point of formalizing probability is precisely that we get to replace such vague statements with precisely quantifiable formalizations, which leave no wiggle room for interpretation.
If you have a formalism which shows, in that manner, why the answer to the Sleeping Beauty question is 1⁄2, I would love to see it: I have no attachment any longer to “my opinion” on the topic.
My questions to you, then, are:

a) given your reasons for “worrying about types rather than tokens” in this situation, how do you formally quantify your uncertainty over various propositions, as I do in the spreadsheet I’ve linked to earlier?

b) what justifies “worrying about types rather than tokens” in this situation, where every other discussion of probability “worries about tokens” in the sense I’ve outlined above in reference to the bag of marbles?

c) how do you apply the type-token distinction in other problems, say, in the case of the Tuesday Boy?
show me the specific mistake in my math, rather than appeal to a verbal presentation of a non-formal, intuitive explanation.
My point was that I didn’t think anything was wrong with your math. If you count tokens, the answer you get is 1⁄3. If you count types, the answer you get is 1⁄2 (did you need more math for that?). Similarly, you can design payouts where the right choice is 1⁄3 and payouts where the right choice is 1⁄2.
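To spell out the payout point (a sketch; the two betting schemes are my own illustration): suppose Beauty bets $1 on heads at the stated odds, and she is woken once on heads, twice on tails.

# Expected value per experiment of a $1 bet on heads.
# Scheme 1: the bet is settled at every awakening (tails settles it twice).
# Scheme 2: the bet is settled once per experiment.
def ev(odds, per_awakening):
    if per_awakening:
        return 0.5 * odds * 1 - 0.5 * 1 * 2   # 1 awakening on heads, 2 on tails
    return 0.5 * odds - 0.5 * 1

print(ev(2.0, per_awakening=True))    # 0.0: fair at 2:1 odds, i.e. credence 1/3
print(ev(1.0, per_awakening=False))   # 0.0: fair at even odds, i.e. credence 1/2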
You can compute things like the expected number of green marbles left in the bag. In the bag problem, IOW, we are quantifying our uncertainty over tokens, while taking types to be a fixed feature of the situation.
b) what justifies “worrying about types rather than tokens” in this situation, where every other discussion of probability “worries about tokens” in the sense I’ve outlined above in reference to the bag of marbles?
This was a helpful comment for me. What we’re dealing with is actually a special case of the type-token ambiguity: the tokens are actually indistinguishable. Say I flip a coin. If tails, I put six red marbles into a bag which already contains three red marbles; if heads, I do nothing to the bag with three red marbles. I draw a marble and tell Beauty “red”. And then I ask Beauty her credence for the coin landing heads. I think that is basically isomorphic to the Sleeping Beauty problem. In the original she is woken up twice if tails, but that’s just like having more red marbles to choose from; the experiences are indistinguishable, just like the marbles.
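A quick simulation sketch of that marble version (the two counting conventions are my own reading of the type and token answers):

import random

trials = 100000
runs_heads = 0
red_tokens = heads_tokens = 0

for _ in range(trials):
    heads = random.random() < 0.5
    reds = 3 if heads else 9   # tails: six marbles added to the three
    # Every marble is red, so the report "red" has likelihood 1 either way;
    # per run (the type view), heads keeps its prior:
    if heads:
        runs_heads += 1
    # Counting marble tokens instead (the token view):
    red_tokens += reds
    if heads:
        heads_tokens += reds

print(runs_heads / trials)        # ~0.5, the type answer
print(heads_tokens / red_tokens)  # ~0.25: with these numbers the token answer
                                  # is 1/4; with 1 vs 2 awakenings it would be
                                  # the familiar 1/3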
Statements like “information is gained” or “information is lost” are vague and imprecise,
I don’t really think they are. That’s my major problem with the 1⁄3 answer. No one has ever shown me the unexpected experience Beauty must have to update from 0.5. But if you feel that way I’ll try other methods.
c) how do you apply the type-token distinction in other problems, say, in the case of the Tuesday Boy?
Offhand, there is no reason to worry about types, as the possible answers to the questions “Do you have exactly two children?” and “Is one of them a boy born on a Tuesday?” are all distinguishable. But I haven’t thought really hard about that problem; maybe there is something I’m missing. My approach does suggest a reason why the Self-Indication Assumption is wrong: the necessary features of an observer are indistinguishable. So it returns 0.5 for the Presumptuous Philosopher problem.
I’ll come back with an answer to (a). Bug me about it if I don’t. There is admittedly a problem which I haven’t worked out: I’m not sure how to relate the experience-type to the day of the week (time is a property of tokens). Basically, the type by itself doesn’t seem to tell us anything about the day (just like picking the red marble doesn’t tell us whether or not it was added after the coin flip). And maybe that’s a reason to reject my approach. I don’t know.
“No knowledge has been lost”?!?

Memories are knowledge—they are knowledge about past perceptions. They have been lost—because they have been chemically deleted by the amnesia-inducing drug. If they had not been lost, Beauty’s probability estimates would be very different at each interview—so evidently the lost information was important in influencing Beauty’s probability estimates.
That should be all you need to know to establish that the deletion of Beauty’s memories changes her priors, and thereby alters her subjective probability estimates. Beauty awakens, not knowing if she has previously been interviewed—because of her memory loss. She knew whether she had previously been interviewed at the start of the experiment—she hadn’t. So: that illustrates which memories have been deleted, and why her uncertainty has increased.
Yes. The memories have been lost (and the knowledge that accompanies them). The knowledge of what day of the week it is has not been lost because she never had this… as I’ve said four times. I’m just going to keep referring you back to my previous comments because I’ve addressed all this already.
You seem to have got stuck on this “day of the week” business :-(
The point is that Beauty has lost knowledge that she once had—and that is why her priors change. That that knowledge is “what day of the week it currently is” seems to me like a fine way of thinking about what information Beauty loses. However, it clearly bugs you—so try thinking about the lost knowledge another way: Beauty starts off knowing with a high degree of certainty whether or not she has previously been interviewed—but then she loses this information as the experiment progresses—and that is why her priors change.
This example, like the last one, is indexed to a specific time. You don’t lose knowledge about conditions at t1 just because it is now t2 and the conditions are different.
Beauty loses information about whether she has previously attended interviews because her memories of them are chemically deleted by an amnesia-inducing drug—not because it is later on.
Makes sense to me.

Cool. Now, I haven’t quite thought through all this, so it’ll be a little vague. It isn’t anywhere close to being an analytic, formalized argument. I’m just going to dump a bunch of examples that invite intuitions. Basically the notion is: all information is type, not token.

Consider, to begin with, the Catcher in the Rye example. The sentence about the type was about the information contained in the book. This isn’t a coincidence. The most abundant source of types in the history of the world is pure information: not just every piece of text ever written, but every single computer program or file is a type (with its backups and copies as tokens). Our entire information-theoretic understanding of the universe involves this notion of writing the universe like a computer program (with the possibility of running multiple simulations); Kolmogorov complexity is a fact about types, not tokens (of course this is confusing, since when we think of tokens we often attribute to them the features of their type, but the difference is there).

Persons are types (at least in part; I think our concept of personhood confuses types and tokens). That’s why most people here think they could survive by being uploaded. When Dennett switches between his two brains it seems like there is only one person, because there is only one person-type, though two person-tokens. I forget who it was, but someone here has argued, in regard to decision theory, that when we act we should take into account all the simulations of us that may some day be run and act for them as well. This is merely decision theory representing the fact that what matters about persons is the type.
So if agents are types, and in particular if information is types… well, then type experiences are what we update on; they’re the ones that contain information. There is no information in tokens beyond their type. Right? Of course, this is just an intuition that needs to be formalized. But is the intuition clear?
I’m sorry this isn’t better formulated. The complexity justifies a top level post which I don’t have time for until next week.
Entertainingly, I feel justified in ignoring your argument and most of the others for the same reason you feel justified in ignoring other arguments.
I got into a discussion about the SB problem a month ago after Mallah mentioned it as related to the red door/blue doors problem. After a while I realized I could get either 1⁄2 or 1⁄3 as an answer, despite my original intuition saying 1⁄2.
I confirmed both 1⁄2 and 1⁄3 were defensible by writing a computer program to count relative frequencies two different ways. Once I did that, I decided not to take seriously any claims that the answer had to be one or the other, since how could a simple argument overrule the result of both my simple arithmetic and a computer simulation?
I was thinking about that earlier.

A higher level of understanding of an initially mysterious question should translate into knowing why people may disagree, and still insist on answers that you yourself have discarded. You explain away their disagreement as an inferential distance.
Neither of the answers you have arrived at is correct, from my perspective, and I can explain why. So I feel justified in ignoring your argument for ignoring my argument. :)
That a simulation program should compute 1⁄2 for “how many times on average the coin comes up heads per time it is flipped” is simply P(x) in my formalization. It’s a correct but entirely uninteresting answer to something other than the problem’s question.
That your program should compute 1⁄3 for “how many times on average the coin comes up heads per time Beauty is awoken” is also a correct answer to a slightly more subtly mistaken question. If you look at the “Halfer variant” page of my spreadsheet, you will see a probability distribution that also corresponds to the same “facts” that yield the 1⁄3 answer, and yet applying the laws of probability to that distribution gives Beauty a credence of 1⁄2. The question your program computes an answer to is not the question “what is the marginal probability of x=Heads, conditioning on z=Woken”.
Whereas, from the tables representing the joint probability distribution, I think I now ought to be able to write a program which can recover either answer: the Thirder answer by inputting the “right” model or the Halfer answer by inputting the “wrong” model. In the Halfer model, we basically have to fail to sample on Heads/Tuesday. Commenting out one code line might be enough.
ETA: maybe not as simple as that, now that I have a first cut of the program written; we’d need to count awakenings on Monday twice, which makes no sense at all. It does look as if our programs are in fact computing the same thing to get 1⁄3.
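For concreteness, here is how I would render the two tables as code (the cell values are my reconstruction; the spreadsheet itself isn’t reproduced here):

# Thirder table: coin and day independent; Beauty sleeps through Heads/Tuesday
thirder = {
    ("H", "Mon", "Woken"): 0.25, ("H", "Tue", "Sleep"): 0.25,
    ("T", "Mon", "Woken"): 0.25, ("T", "Tue", "Woken"): 0.25,
}
# Halfer variant: all mass on woken cells, arranged so conditioning changes nothing
halfer = {
    ("H", "Mon", "Woken"): 0.50,
    ("T", "Mon", "Woken"): 0.25, ("T", "Tue", "Woken"): 0.25,
}

def p_heads_given_woken(table):
    woken = sum(p for (c, d, w), p in table.items() if w == "Woken")
    heads = sum(p for (c, d, w), p in table.items() if w == "Woken" and c == "H")
    return heads / woken

print(p_heads_given_woken(thirder))  # 1/3
print(p_heads_given_woken(halfer))   # 1/2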
Which specific formulation of the Sleeping Beauty problem did you use to work things out? Maybe we’re referring to descriptions of the problem that use different wording; I’ve yet to read a description that’s convinced me that 1⁄2 is an answer to the wrong question. For example, here’s what the wiki’s description asks:
Beauty wakes up in the experiment and is asked, “With what subjective probability do you believe that the coin landed tails?”
Personally, I believe that using the word ‘subjective’ doesn’t add anything here (it just sounds like a cue to think Bayesian-ishly to me, which doesn’t change the actual answer). So I read the question as asking for the probability of the coin landing tails given the experiment’s setup. As it’s asking for a probability, I see it as wholly legitimate to answer it along the lines of ‘how many times on average the coin comes up heads per X,’ where X is one of the two choices you mentioned.
If you ignore the specification that it is Beauty’s subjective probability under discussion, the problem becomes ill-defined—and multiple answers become defensible—depending on whose perspective we take.
The word ‘subjective’ before the word ‘probability’ is empty verbiage to me, so (as I see it) it doesn’t matter whether you or I have subjectivity in mind. The problem’s ill-defined either way; ‘the specification that it is Beauty’s subjective probability’ makes no difference to me.
The perspective makes a difference:

“In other words, only in a third of the cases would heads precede her awakening. So the right answer for her to give is 1⁄3. This is the correct answer from Beauty’s perspective. Yet to the experimenter the correct probability is 1⁄2.”

http://en.wikipedia.org/wiki/Sleeping_Beauty_problem
I think it’s not the change in perspective or subjective identity making a difference, but instead it’s a change in precisely which probability is being asked about. The Wikipedia page unhelpfully conflates the two changes.
It says that the experimenter must see a probability of 1⁄2 and Beauty must see a probability of 1⁄3, but that just ain’t so; there is nothing stopping Beauty from caring about the proportion of coin flips that turn out to be heads (which is 1⁄2), and there is nothing stopping the experimenter from caring about the proportion of wakings for which the coin is heads (which is 1⁄3). You can change which probability you care about without changing your subjective identity and vice versa.
Let’s say I’m Sleeping Beauty. I would interpret the question as being about my estimate of a probability (‘credence’) associated with a coin-flipping process. Having interpreted the question as being about that process, I would answer 1⁄2: who I am would have nothing to do with the question’s correct answer, since who I am has no effect on the simple process of flipping a fair coin, and I am given no new information after the coin flip about the coin’s state.
“What is your credence now for the proposition that our coin landed heads?”
That’s fairly clearly the PROBABILITY NOW of the coin having landed heads—and not the PROPORTION that turn out AT SOME POINT IN THE FUTURE to have landed heads.
Perspective can make a difference—because different observers have different levels of knowledge about the situation. In this case, Beauty doesn’t know whether it is Tuesday or not—but she does know that if she is being asked on Tuesday, then the coin came down tails—and p(heads) is about 0.
In the original problem post, Beauty is asked a specific question, though
It’s not specific enough. It only asks for Beauty’s credence of a coin landing heads—it doesn’t tell her to choose between the credence of a coin landing heads given that it is flipped and the credence of a coin landing heads given a single waking. The fact that it’s Beauty being asked does not, in and of itself, mean the question must be asking the latter probability. It is wholly reasonable for Beauty to interpret the question as being about a coin-flipping process for which the associated probability is 1⁄2.
That’s fairly clearly the PROBABILITY NOW of the coin having landed heads—and not the PROPORTION that turn out AT SOME POINT IN THE FUTURE to have landed heads.
The addition of the word ‘now’ doesn’t magically ban you from considering a probability as a limiting relative frequency.
Perspective can make a difference—because different observers have different levels of knowledge about the situation. In this case, Beauty doesn’t know whether it is Tuesday or not
Agree.
- but she does know that if she is being asked on Tuesday, then the coin came down tails—and p(heads) is about 0.
It’s not clear to me how this conditional can be informative from Beauty’s perspective, as she doesn’t know whether it’s Tuesday or not. The only new knowledge she gets is that she’s woken up; but she has an equal probability (i.e. 1) of getting evidence of waking up if the coin’s heads or if the coin’s tails. So Beauty has no more knowledge than she did on Sunday.
She has LESS knowledge than she had on Sunday in one critical area—because now she doesn’t know what day of the week it is. She may not have learned much—but she has definitely forgotten something—and forgetting things changes your estimates of their likelihood just as much as learning about them does.
She has LESS knowledge than she had on Sunday in one critical area—because now she doesn’t know what day of the week it is. She may not have learned much—but she has definitely forgotten something -
That’s true.
and forgetting things changes your estimates of their likelihood just as much as learning about them does.
I’m not as sure about this. It’s not clear to me how it changes the likelihoods if I sketch Beauty’s situation at time 1 and time 2 as
A coin will be flipped and I will be woken up on Monday, and perhaps Tuesday. It is Sunday.
I have been woken up, so a coin has been flipped. It is Monday or Tuesday but I do not know which.
as opposed to just
A coin will be flipped and I will be woken up on Monday, and perhaps Tuesday.
I have been woken up, so a coin has been flipped. It is Monday or Tuesday but I do not know which.
(Edit to clarify—the 2nd pair of statements is meant to represent roughly how I was thinking about the setup when writing my earlier comment. That is, it’s evident that I didn’t account for Beauty forgetting what day of the week it is in the way timtyler expected, but at the same time I don’t believe that made any material difference.)
I read it as “What is your credence”, which is supposed to be synonymous with “subjective probability”, which—and this is significant—I take to entail that Beauty must condition on having been woken (because she conditions on every piece of information known to her).
In other words, I take the question to be precisely “What is the probability you assign to the coin having come up heads, taking into account your uncertainty as to what day it is.”
Ahhhh, I think I understand a bit better now. Am I right in thinking that your objection is not that you disapprove of relative frequency arguments in themselves, but that you believe the wrong relative frequency/frequencies is/are being used?
Right up until your reply prompted me to write a program to check your argument, I wasn’t thinking in terms of relative frequencies at all, but in terms of probability distributions.
I haven’t learned the rules for relative frequencies yet (by which I mean thing like “(don’t) include counts of variables that have a correlation of 1 in your denominator”), so I really have no idea.
Here is my program—which by the way agrees with neq1’s comment here, insofar as the “magic trick” which will recover 1⁄2 as the answer consists of commenting out the TTW line.
However, this seems perfectly nonsensical when transposed to my spreadsheet: zeroing out the TTW cell means I end up with a total probability mass less than 1. So, I can’t accept at the moment that neq1’s suggestion accords with the laws of probability—I’d need to learn what changes to make to my table and why I should make them.
from random import randint

flips = 1000
HEADS = 0
TAILS = 1

# individual cells
HMW = HTW = HMD = HTD = 0.0
TMW = TTW = TMD = TTD = 0.0

def run_experiment():
    global HMW, HTW, HMD, HTD, TMW, TTW, TMD, TTD
    coin = randint(HEADS, TAILS)
    if coin == HEADS:
        # wake Beauty on Monday
        HMW += 1
        # drug Beauty on Tuesday
        HTD += 1
    if coin == TAILS:
        # wake Beauty on Monday
        TMW += 1
        # wake Beauty on Tuesday too
        TTW += 1

for i in range(flips):
    run_experiment()

print "Total samples where heads divided by total samples ~P(H):", (HMW+HTW+HMD+HTD)/(HMW+HTW+HMD+HTD+TMW+TTW+TMD+TTD)
print "Total samples where woken F(W):", HMW+HTW+TMW+TTW
print "Total samples where woken and heads F(W&H):", HMW+HTW
print "P(W&H)=P(W)P(H|W), so P(H|W)=lim F(W&H)/F(W)"
print "Total samples where woken and heads divided by samples where woken F(H|W):", (HMW+HTW)/(HMW+HTW+TMW+TTW)
Replying again since I’ve now looked at the spreadsheet.
Using my intuition (which says the answer is 1⁄2), I would expect P(Heads, Tuesday, Not woken) + P(Tails, Tuesday, Not woken) > 0, since I know it’s possible for Beauty to not be woken on Tuesday. But the ‘halfer “variant”’ sheet says P(H, T, N) + P(T, T, N) = 0 + 0 = 0, so that sheet’s way of getting 1⁄2 must differ from how my intuition works.
(ETA—Unless I’m misunderstanding the spreadsheet, which is always possible.)
Your program looks good here; your code looks a lot like mine. I ran it and got ~1/2 for P(H) and ~1/3 for F(H|W). I’ll try and compare it to your spreadsheet.
Even in the limit not all relative frequencies are probabilities. In fact, I’m quite sure that in the limit ntails/wakings is not a probability. That’s because you don’t have independent samples of wakings.
Basically, the 2 wakings on tails should be thought of as one waking. You’re just counting the same thing twice. When you include counts of variables that have a correlation of 1 in your denominator, it’s not clear what you are getting back. The thirders are using a relative frequency that doesn’t converge to a probability.
Basically, the 2 wakings on tails should be thought of as one waking. You’re just counting the same thing twice.
This is true if we want the ratio of tails to wakings. However...
When you include counts of variables that have a correlation of 1 in your denominator, it’s not clear what you are getting back. The thirders are using a relative frequency that doesn’t converge to a probability
Despite the perfect correlation between some of the variables, one can still get a probability back out—but it won’t be the probability one expects.
Maybe one day I decide I want to know the probability that a randomly selected household on my street has a TV. I print up a bunch of surveys and put them in people’s mailboxes. However, it turns out that because I am very absent-minded (and unlucky), I accidentally put two surveys in the mailboxes of people with a TV, and only one in the mailboxes of people without TVs. My neighbors, because they enjoy filling out surveys so much, dutifully fill out every survey and send them all back to me. Now the proportion of surveys that say ‘yes, I have a TV’ is not the probability I expected (the probability of a household having a TV) - but it is nonetheless a probability, just a different one (the probability of any given survey saying, ‘I have a TV’).
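To put a number on that (the algebra is mine, just to illustrate): if a fraction p of households own a TV, each TV household returns two surveys and each non-TV household returns one, so the proportion of ‘yes’ surveys is

$$\frac{2p}{2p + (1 - p)} = \frac{2p}{1 + p}$$

which for p = 1/2 gives 2/3 rather than 1/2: a legitimate probability, just the probability that a random survey says yes rather than that a random household has a TV.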
That’s a good example. There is a big difference though (it’s subtle). With sleeping beauty, the question is about her probability at a waking. At a waking, there are no duplicate surveys. The duplicates occur at the end.
That is a difference, but it seems independent from the point I intended the example to make. Namely, that a relative frequency can still represent a probability even if its denominator includes duplicates—it will just be a different probability (hence why one can get 1⁄3 instead of 1⁄2 for SB).
This is strange. It sounds like you have been making progress towards settling on an answer, after discussion with others. That would suggest to me that discussion can move us towards consensus.
I like your approach a lot. It’s the first time I’ve seen the thirder argument defended with actual probability statements. Personally, I think there shouldn’t be any probability mass on ‘not woken’, but that is something worth thinking about and discussing.
One thing that I think is odd: thirders know she has nothing to update on when she is woken, because they admit she will give the same answer regardless of whether it’s heads or tails. If she really had new information that is correlated with the outcome, her credence would move towards heads when heads, and towards tails when tails.
Consider my cancer intuition pump example. Everyone starts out thinking there is a 50% chance they have cancer. Once woken, regardless of whether they have cancer or not, they all shift to 90%. Did they really learn anything about their disease state by being woken? If they did, those with cancer would have shifted their credence up a bit, and those without would have shifted down. That’s what updating is.
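A sketch of the pump in code (assuming, as in the reply below, 9 wakings with the disease and 1 without):

import random

trials = 100000
wakings = cancer_wakings = 0

for _ in range(trials):
    has_cancer = random.random() < 0.5   # everyone starts at 50%
    n = 9 if has_cancer else 1           # 9 wakings if diseased, 1 if not
    wakings += n
    if has_cancer:
        cancer_wakings += n

# ~0.9: the per-waking frequency of cancer. Every patient, sick or healthy,
# is woken and reports this same 90% figure, so the report cannot
# discriminate between the two disease states.
print(cancer_wakings / wakings)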
In your example the experimenter has learned whether you have cancer. And she reflects that knowledge in the structure of the experiment: you are woken up 9 times if you have the disease.
Set aside the amnesia effects of the drug for a moment, and consider the experimental setup as a contorted way of imparting the information to the patient. Then you’d agree that with full memory, the patient would have something to update on? As soon as the second day. So there is, normally, an information flow in this setup.
What the amnesia does is selectively impair the patient’s ability to condition on available information. It does that in a way which is clearly pathological, and results in the counter-intuitive reply to the question “conditioning on a) your having woken up and b) your inability to tell what day it is, what is your credence?” We have no everyday intuitions about the inferential consequences of amnesia.
Knowing about the amnesia, we can argue that Beauty “shouldn’t” condition on being woken up. But if she does, she’ll get that strange result. If she does have cancer, she is more likely to be woken up multiple times than once, and being woken up at all does have some evidential weight.
All this, though, being merely verbal aids as I try to wrap my head around the consequences of the math. And therefore to be taken more circumspectly than the math itself.
If she does condition on being woken up, I think she still gets 1⁄2. I hate to keep repeating arguments, but what she knows when she is woken up is that she has been woken up at least once. If you just apply Bayes rule, you get 1⁄2.
If conditioning causes her to change her probability, it should do so in such a way that makes her more accurate. But as we see in the cancer problem, people with cancer give the same answer as people without.
Then you’d agree that with full memory, the patient would have something to update on?
Yes, but then we wouldn’t be talking about her credence on an awakening. We’d be talking about her credence on first waking and second waking. We’d treat them separately. With amnesia, 2 wakings are the same as 1. It’s really just one experience.
I’m not sure what more I can say without starting to repeat myself, too. All I can say at this point, having formalized my reasoning as both a Python program and an analytical table giving out the full joint distribution, is “Where did I make a mistake?”
Where’s the bug in the Python code? How do I change my joint distribution?
I like the halfer variant version of your table. I still need to think about your distributions more, though. I’m not sure it makes sense to have a variable ‘woken that day’ for this problem.
Unlike Jack, I’m pessimistic about your proposal. I’ve already changed my mind not once but twice.
The interesting aspect is that this doesn’t feel like I’m vacillating. I have gone from relying on a vague and unreliable intuition in favor of 1⁄3 qualified with “it depends”, to being moderately certain that 1⁄2 was unambiguously correct, to having worked out how I was allocating all of the probability mass in the original problem and getting back 1⁄3 as the answer that I cannot help but think is correct. That, plus the meta-observation that no-one, including people I’ve asked directly (including yourself), has a rebuttal to my construction of the table, is leaving me with a higher degree of confidence than I previously had in 1⁄3.
It now feels as if I’m justified to ignore pretty much any argument which is “merely” a verbal appeal to one intuition or the other. Either my formalization corresponds to the problem as verbally stated or it doesn’t; either my math is correct or it isn’t. “Here I stand, I can no other”—at least until someone shows me my mistake.
So I think I figured this whole thing out. Are people familiar with the type-token distinction and resulting ambiguities? If I have five copies of the book Catcher in the Rye and you ask me how many books I have there is an ambiguity. I could say one or five. One refers to the type, “Catcher in the Rye is a coming of age novel” is a sentence about the type. Five refers to the number of tokens, “I tossed Catcher in the Rye onto the bookshelf” is a sentence about the token. The distinction is ubiquitous and leads to occasional confusion, enough that the subject is at the top of my Less Wrong to-do list. The type token distinction becomes an issue whenever we introduce identical copies and the distinction dominates my views on personal identity.
In the Sleeping Beauty case, the amnesia means the experience of waking up on Monday and the experience of waking up on Tuesday, while token-distinct are type-identical. If we decide the right thing to update on isn’t the token experience but the type experience: well the calculations are really easy. The type experience “waking up” has P=1 for heads and tails. So the prior never changes. I think there are some really good reasons for worrying about types rather than tokens in this context but won’t go into until I make sure the above makes sense to someone.
How are you accounting for the fact that—on awakening—beauty has lost information that she previously had—namely that she no longer knows which day of the week it is?
Maybe it’s just because I haven’t thought about this in a couple of weeks but you’re going to have to clarify this. When does beauty know which day of the week it is?
Before consuming the memory-loss drugs she knows her own temporal history. After consuming the drugs, she doesn’t. She is more uncertain—because her memory has been meddled with, and important information has been deleted from it.
Information wasn’t deleted. Conditions changed and she didn’t receive enough information about the change. There is a type (with a single token) that is Beauty before the experiment and that type includes a property ‘knows what day of the week it is’, then the experiment begins and the day changes. During the experiment there is another type which is also Beauty, this type has two tokens. This type only has enough information to narrow down the date to one of two days. But she still knows what day of the week it was when the experiment began, it’s just your usual indexical shift (instead of knowing the date now she knows the date then but it is the same thing).
Her memories were DELETED. That’s the whole point of the amnesia-inducing drug.
Amnesia = memory LOSS: http://dictionary.reference.com/browse/Amnesia
Oh sure, the information contained in the memory of waking up is lost (though that information didn’t contain what day of the week it was and you said “namely that she no longer knows which day of the week it is”). I still have zero idea of what you’re trying to ask me.
If she had not ever been given the drug she would be likely to know which day of the week it was. She would know how many times she had been woken up, interviewed, etc. It is because all such information has been chemically deleted from her mind that she has the increased uncertainty that she does.
I might have some issues with that characterization but they aren’t worth going into since I still don’t know what this has to do with my discussion of the type-token ambiguity.
It is what was missing from this analysis:
“The type experience “waking up” has P=1 for heads and tails. So the prior never changes.”
Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change.
K.
Yes, counterfactually if she hadn’t been given the drug on the second awakening she would have knowledge of the day. But she was given the drug. This meant a loss of the information and knowledge of the memory of the first awakening. But it doesn’t mean a loss of the knowledge of what day it is, she obviously never had that. It is because all her new experiences keep getting deleted that she is incapable of updating her priors (which were set prior to the beginning of the experiment). In type-theoretic terms:
If the drugs had not been administered she would not have had type experience “waking up” a second time. She would have had type experience “waking up with the memory of waking up yesterday”. If she had had that type experience then she would know what day it is.
Beauty probably knew what day it was before the experiment started. People often do know what day of the week it is.
You don’t seem to respond to my: “Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change.”
In this case, that is exactly what happens. Had Beauty not been given the drug, her estimates of p(heads) would be: 0.5 on Monday and 0.0 on Tuesday. Since her knowledge of what day it is has been eliminated by a memory-erasing drug, her probability estimate is intermediate between those figures—reflecting her new uncertainty in the face of the chemical deletion of relevant evidence.
Yes. And throughout the experiment she knows what day it was before the experiment started. What she doesn’t know is the new day. This is the second or third time I’ve said this. What don’t you understand about an indexical shift?
The knowledge that Beauty has before the experiment is not deleted. Beauty has a single anticipated experience going into the experiment. That anticipated experience occurs. There is no new information to update on.
You don’t seem to be following what I’m saying at all.
What you said was: “it doesn’t mean a loss of the knowledge of what day it is, she obviously never had that”. Except that she did have that—before the experiment started. Maybe you meant something different—but what readers have to go on is what you say.
Beauty’s memories are deleted. The opinions of an agent can change if they gain information—or if they lose information. Beauty loses information about whether or not she has had a previous awakening and interrogation. She knew that at the start of the experiment, but not during it—so she has lost information that she previously had—it has been deleted by the amnesia-inducing drug. That’s relevant information—and it explains why her priors change.
I’m going to try this one more time.
On Sunday, before the experiment begins Beauty makes observation O1(a). She knows that O1 was made on a Sunday. She says to herself “I know what day it is now” (an indexical statement pointing to O1(a)) She also predicts the coin will flip heads with P=0.5 and predicts the next experience she has after going to sleep will be O2. Then she wakes up and makes observation O2(a). It is Monday but she doesn’t know this because it could just as easily be Tuesday since her memory of waking up on Monday will be erased. “I know what day it is now” is now false, not because knowledge was deleted but because of the indexical shift of ‘now’ which no longer refers to O1(a) but to O2(a). She still knows what day it was at O1(a), that knowledge has not been lost. Then she goes back to sleep and her memory of O2(a) is erased. But O2(a) includes no knowledge of what day it is (thought combined with other information Beauty could have inferred what day it was, she never had that information). Beauty wakes up on Tuesday and has observation O2(b). This observation is type-identical to O2(a) and exactly what she anticipated experiencing. If her memory had not been erased she would have had observation O3-- waking up along with the memory of having woken up the previous day. This would not have been an experience Beauty would have predicted with P=1 and therefore would require her to update her belief P(heads) from 0.5 to 0 as she would know it was Tuesday. But she doesn’t get to do that she just has a token of experience O2. She still knows what day it was at O1(a), no knowledge has been lost. And she still doesn’t know what day it is ‘now’.
[For those following this, note that spatio-temporality is a strictly property of tokens (though we have a linguistic convention of letting types inherit the properties of tokens like “the red-breasted woodpecker can be found in North America”… what that really means is that tokens of they type ‘red-breasted woodpecker’ can be found in North America). This, admittedly, might lead to confusing results that need clarification and I’m still working on that.]
I’ve been following, but I’m still nonplussed as to your use of the type-token distinction in this context. The comment of mine which was the parent for your type-token observation had a specific request: show me the specific mistake in my math, rather than appeal to a verbal presentation of a non-formal, intuitive explanation.
Take a bag with 1 red marble and 9 green marbles. There is a type “green marble” and it has 9 tokens. The experiences of drawing any particular green marble, while token-distinct are type-identical. It seems that what matters when we compute our credence for the proposition “the next marble I draw will be green” is the tokens, not the types. When you formalize the bag problem accordingly, probability theory gives you answers that seem quite robust from a math point of view.
If you start out ignorant of how many marbles the bag has of each color, you can ask questions like “given that I just took two green marbles in a row, what is my credence in the proposition ‘the next marble I draw will be green’”. You can compute things like the expected number of green marbles left in the bag. In the bag problem, IOW, we are quantifying our uncertainty over tokens, while taking types to be a fixed feature of the situation. (Which of course is only a convention of this kind of exercise: with precise enough instruments we could distinguish all ten individual marbles.)
Statements like “information is gained” or “information is lost” are vague and imprecise, with the consequence that a motivated interpretation of the problem statement will support whichever statement we happen to favor. The point of formalizing probability is precisely that we get to replace such vague statements with precisely quantifiable formalizations, which leave no wiggle room for interpretation.
If you have a formalism which shows, in that manner, why the answer to the Sleeping Beauty question is 1⁄2, I would love to see it: I have no attachment any longer to “my opinion” on the topic.
My questions to you, then, are: a) given your reasons for “worrying about types rather than tokens” in this situation, how do you formally quantify your uncertainty over various propositions, as I do in the spreadsheet I’ve linked to earlier? b) what justifies “worrying about types rather than tokens” in this situation, where every other discussion of probability “worries about tokens” in the sense I’ve outlined above in reference to the bag of marbles? c) how do you apply the type-token distinction in other problems, say, in the case of the Tuesday Boy?
My point was that I didn’t think anything was wrong with your math. If you count tokens the answer you get is 1⁄3. If you count types the answer you get is 1⁄2 (did you need more math for that?). Similarly, you can design payouts where the right choice is 1⁄3 and payouts where the right choice is 1⁄2.
This was a helpful comment for me. What we’re dealing with is actually a special case of the type-token ambiguity: the tokens are actually indistinguishable. Say I flip a coin. I, If tails I put six red marbles in a bag which already contains three red marbles bag, if heads do nothing to the bag with three red marbles. I draw a marble and tell Beauty “red”. And then I ask Beauty her credence for the coin landing heads. I think that is basically isomorphic to the Sleep Beauty problem. In the original she is woken up twice if heads, but thats just like having more red marbles to choose from, the experiences are indistinguishable just like the marbles.
I don’t really think they are. That’s my major problem with the 1⁄3 answer. No one has ever shown me the unexpected experience Beauty must have to update from 0.5. But if you feel that way I’ll try other methods.
Off hand there is no reason to worry about types, as the possible answers to the questions “Do you have exactly two children?” and “Is one of them a boy born on a Tuesday?” are all distinguishable. But I haven’t thought really hard about that problem, maybe there is something I’m missing. My approach does suggest a reason for why the Self-Indication Assumption is wrong: the necessary features of an observer are indistinguishable. So it returns 0.5 for the Presumptuous Philosopher problem.
I’ll come back with an answer to (a). Bug me about it if I don’t. There is admittedly a problem which I haven’t worked out: I’m not sure how to relate the experience-type to the day of the week (time is a property of tokens). Basically, the type by itself doesn’t seem to tell us anything about the day (just like picking the red marble doesn’t tell us whether or not it was added after the coin flip. And maybe that’s a reason to reject my approach. I don’t know.
“No knowledge has been lost”?!?
Memories are knowledge—they are knowledge about past perceptions. They have been lost—because they have been chemically deleted by the amnesia-inducing drug. If they had not been lost, Beauty’s probability estimates would be very different at each interview—so evidently the lost information was important in influencing Beauty’s probability estimates.
That should be all you need to know to establish that the deletion of Beauty’s memories changes her priors, and thereby alters her subjective probability estimates. Beauty awakens, not knowing if she has previously been interviewed—because of her memory loss. She knew whether she had previously been interviewed at the start of the experiment—she hadn’t. So: that illustrates which memories have been deleted, and why her uncertainty has increased.
Yes. The memories have been lost (and the knowledge that accompanies them). The knowledge of what day of the week it is has not been lost because she never had this… as I’ve said four times. I’m just going to keep referring you back to my previous comments because I’ve addressed all this already.
You seem to have got stuck on this “day of the week” business :-(
The point is that beauty has lost knowledge that she once had—and that is why her priors change. That that knowledge is “what day of the week it currently is” seems like a fine way of thinking about what information beauty loses to me. However, it clearly bugs you—so try thinking about the lost knowledge another way: beauty starts off knowing with a high degree of certainty whether or not she has previously been interviewed—but then she loses this information as the experiment progresses—and that is why her priors change.
This example, like the last one, is indexed to a specific time. You don’t lose knowledge about conditions at t1 just because it is now t2 and the conditions are different.
Beauty loses information about whether she has previously attended interviews because her memories of them are chemically deleted by an amnesia-inducing drug—not because it is later on.
Makes sense to me.
Cool. Now I haven’t quite thought through all this so it’ll be a little vague. It isn’t anywhere close to being an analytic, formalized argument. I’m just going to dump a bunch of examples that invite intuitions. Basically the notion is: all information is type, not token. Consider, to begin with the Catcher in the Rye example. The sentence about the type was about the information contained in the book. This isn’t a coincidence. The most abundant source of types in the history of the world is pure information: not just every piece of text every written but every single computer program or file is a type (with it’s backups and copies as tokens). Our entire information-theoretic understanding of the universe involves this notion of writing the universe like a computer program (with the possibility of running multiple simulations), k-complexity is a fact about types not tokens (of course this is confusing since when we think of tokens we often attribute them the features of the their type, but the difference is there). Persons are types (at least in part, I think our concept of personhood confuses types and tokens). That’s why most people here think they could survive by being uploaded. When Dennett swtiches between his two brains it seems like there is only one person because there is only one person-type, though two person-tokens. I forget who it was, but someone here has argued in regard to decision theory, that we when we act we should take into account all the simulations of us that may some day be run and act for them as well. This is merely decision theory representing the fact that what matters about persons is the type.
So if agents are types, and in particular if information is types… well then they type experiences are what we update on, they’re the ones that contain information. There is no information to tokens beyond their type. RIght? Of course, this is just an intuition that needs to be formalized. But is the intuition clear?
I’m sorry this isn’t better formulated. The complexity justifies a top level post which I don’t have time for until next week.
Entertainingly, I feel justified in ignoring your argument and most of the others for the same reason you feel justified in ignoring other arguments.
I got into a discussion about the SB problem a month ago after Mallah mentioned it as related to the red door/blue doors problem. After a while I realized I could get either of 1⁄2 or 1⁄3 as an answer, despite my original intuition saying 1⁄2.
I confirmed both 1⁄2 and 1⁄3 were defensible by writing a computer program to count relative frequencies two different ways. Once I did that, I decided not to take seriously any claims that the answer had to be one or the other, since how could a simple argument overrule the result of both my simple arithmetic and a computer simulation?
I was thinking about that earlier.
A higher level of understanding of an initially mysterious question should translate into knowing why people may disagree, and still insist on answers that you yourself have discarded. You explain away their disagreement as an inferential distance.
Neither of the answers you have arrived at is correct, from my perspective, and I can explain why. So I feel justified in ignoring your argument for ignoring my argument. :)
That a simulation program should compute 1⁄2 for “how many times on average the coin comes up heads per time it is flipped” is simply P(x) in my formalization. It’s a correct but entirely uninteresting answer to something other than the problem’s question.
That your program should compute 1⁄3 for “how many times on average the coin comes up heads per time Beauty is awoken” is also a correct answer to a slightly more subtly mistaken question. If you look at the “Halfer variant” page of my spreadsheet, you will see a probability distribution that also corresponds to the same “facts” that yield the 1⁄3 answer, and yet applying the laws of probability to that distribution gives Beauty a credence of 1⁄2. The question your program computes an answer to is not the question “what is the marginal probability of x=Heads, conditioning on z=Woken”.
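To make “the marginal probability of x=Heads, conditioning on z=Woken” concrete, here is one way to compute it from a joint table. The 0.25 entries below are my guess at the Thirder model, since the spreadsheet itself isn’t reproduced here:

```python
# Hypothetical joint distribution over (coin, day, woken); the 0.25 entries
# are a guess at the Thirder table, not the actual spreadsheet values.
joint = {
    ("Heads", "Mon", "Woken"):     0.25,
    ("Heads", "Tue", "Not woken"): 0.25,
    ("Tails", "Mon", "Woken"):     0.25,
    ("Tails", "Tue", "Woken"):     0.25,
}
p_woken = sum(p for (coin, day, woken), p in joint.items()
              if woken == "Woken")
p_heads_and_woken = sum(p for (coin, day, woken), p in joint.items()
                        if coin == "Heads" and woken == "Woken")
print("P(Heads | Woken) =", p_heads_and_woken / p_woken)   # 0.333...
```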
Whereas, from the tables representing the joint probability distribution, I think I now ought to be able to write a program which can recover either answer: the Thirder answer by inputting the “right” model or the Halfer answer by inputting the “wrong” model. In the Halfer model, we basically have to fail to sample on Heads/Tuesday. Commenting out one code line might be enough.
ETA: maybe not as simple as that, now that I have a first cut of the program written; we’d need to count awakenings on Monday twice, which makes no sense at all. It does look as if our programs are in fact computing the same thing to get 1⁄3.
Which specific formulation of the Sleeping Beauty problem did you use to work things out? Maybe we’re referring to descriptions of the problem that use different wording; I’ve yet to read a description that’s convinced me that 1⁄2 is an answer to the wrong question. Consider, for example, what the wiki’s description asks.
Personally, I believe that using the word ‘subjective’ doesn’t add anything here (it just sounds like a cue to think Bayesian-ishly to me, which doesn’t change the actual answer). So I read the question as asking for the probability of the coin landing tails given the experiment’s setup. As it’s asking for a probability, I see it as wholly legitimate to answer it along the lines of ‘how many times on average the coin comes up heads per X,’ where X is one of the two choices you mentioned.
If you ignore the specification that it is Beauty’s subjective probability under discussion, the problem becomes ill-defined—and multiple answers become defensible—depending on whose perspective we take.
The word ‘subjective’ before the word ‘probability’ is empty verbiage to me, so (as I see it) it doesn’t matter whether you or I have subjectivity in mind. The problem’s ill-defined either way; ‘the specification that it is Beauty’s subjective probability’ makes no difference to me.
The perspective makes a difference:
“In other words, only in a third of the cases would heads precede her awakening. So the right answer for her to give is 1⁄3. This is the correct answer from Beauty’s perspective. Yet to the experimenter the correct probability is 1⁄2.”
http://en.wikipedia.org/wiki/Sleeping_Beauty_problem
I think it’s not the change in perspective or subjective identity making a difference, but instead it’s a change in precisely which probability is being asked about. The Wikipedia page unhelpfully conflates the two changes.
It says that the experimenter must see a probability of 1⁄2 and Beauty must see a probability of 1⁄3, but that just ain’t so; there is nothing stopping Beauty from caring about the proportion of coin flips that turn out to be heads (which is 1⁄2), and there is nothing stopping the experimenter from caring about the proportion of wakings for which the coin is heads (which is 1⁄3). You can change which probability you care about without changing your subjective identity and vice versa.
Let’s say I’m Sleeping Beauty. I would interpret the question as being about my estimate of a probability (‘credence’) associated with a coin-flipping process. Having interpreted the question as being about that process, I would answer 1⁄2: who I am would have nothing to do with the question’s correct answer, since who I am has no effect on the simple process of flipping a fair coin, and I am given no new information after the coin flip about the coin’s state.
In the original problem post, Beauty is asked a specific question, though—namely:
“What is your credence now for the proposition that our coin landed heads?”
That’s fairly clearly the PROBABILITY NOW of the coin having landed heads—and not the PROPORTION that turn out AT SOME POINT IN THE FUTURE to have landed heads.
Perspective can make a difference, because different observers have different levels of knowledge about the situation. In this case, Beauty doesn’t know whether it is Tuesday or not, but she does know that if she is being asked on Tuesday, then the coin came down tails, and p(heads) is 0.
It’s not specific enough. It only asks for Beauty’s credence of a coin landing heads—it doesn’t tell her to choose between the credence of a coin landing heads given that it is flipped and the credence of a coin landing heads given a single waking. The fact that it’s Beauty being asked does not, in and of itself, mean the question must be asking the latter probability. It is wholly reasonable for Beauty to interpret the question as being about a coin-flipping process for which the associated probability is 1⁄2.
The addition of the word ‘now’ doesn’t magically ban you from considering a probability as a limiting relative frequency.
Agree.
It’s not clear to me how this conditional can be informative from Beauty’s perspective, as she doesn’t know whether it’s Tuesday or not. The only new knowledge she gets is that she’s woken up; but she has an equal probability (i.e. 1) of getting evidence of waking up if the coin’s heads or if the coin’s tails. So Beauty has no more knowledge than she did on Sunday.
She has LESS knowledge than she had on Sunday in one critical area, because now she doesn’t know what day of the week it is. She may not have learned much, but she has definitely forgotten something, and forgetting things changes your estimates of their likelihood just as much as learning about them does.
That’s true.
I’m not as sure about this. It’s not clear to me how it changes the likelihoods if I sketch Beauty’s situation at time 1 and time 2 as
A coin will be flipped and I will be woken up on Monday, and perhaps Tuesday. It is Sunday.
I have been woken up, so a coin has been flipped. It is Monday or Tuesday but I do not know which.
as opposed to just
A coin will be flipped and I will be woken up on Monday, and perhaps Tuesday.
I have been woken up, so a coin has been flipped. It is Monday or Tuesday but I do not know which.
(Edit to clarify—the 2nd pair of statements is meant to represent roughly how I was thinking about the setup when writing my earlier comment. That is, it’s evident that I didn’t account for Beauty forgetting what day of the week it is in the way timtyler expected, but at the same time I don’t believe that made any material difference.)
I read it as “What is your credence”, which is supposed to be synonymous with “subjective probability”, which—and this is significant—I take to entail that Beauty must condition on having been woken (because she conditions on every piece of information known to her).
In other words, I take the question to be precisely “What is the probability you assign to the coin having come up heads, taking into account your uncertainty as to what day it is.”
Ahhhh, I think I understand a bit better now. Am I right in thinking that your objection is not that you disapprove of relative frequency arguments in themselves, but that you believe the wrong relative frequency/frequencies is/are being used?
Right up until your reply prompted me to write a program to check your argument, I wasn’t thinking in terms of relative frequencies at all, but in terms of probability distributions.
I haven’t learned the rules for relative frequencies yet (by which I mean things like “(don’t) include counts of variables that have a correlation of 1 in your denominator”), so I really have no idea.
Here is my program—which by the way agrees with neq1′s comment here, insofar as the “magic trick” which will recover 1⁄2 as the answer consists of commenting out the TTW line.
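(The program itself isn’t reproduced here; a minimal sketch of the counting logic, with the TTW line marked, might look like this:)

```python
import random

def heads_per_waking(trials=100000):
    # HMW: heads/Monday wakings; TMW: tails/Monday; TTW: tails/Tuesday
    HMW = TMW = TTW = 0
    for _ in range(trials):
        if random.random() < 0.5:   # heads: woken on Monday only
            HMW += 1
        else:                       # tails: woken on Monday and Tuesday
            TMW += 1
            TTW += 1                # the "TTW line": comment out to get 1/2
    return HMW / (HMW + TMW + TTW)  # ~1/3 as written

print(heads_per_waking())
```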
However, this seems perfectly nonsensical when transposed to my spreadsheet: zeroing out the TTW cell means I end up with a total probability mass less than 1. So I can’t accept at the moment that neq1′s suggestion accords with the laws of probability; I’d need to learn what changes to make to my table and why I should make them.
Replying again since I’ve now looked at the spreadsheet.
Using my intuition (which says the answer is 1⁄2), I would expect P(Heads, Tuesday, Not woken) + P(Tails, Tuesday, Not woken) > 0, since I know it’s possible for Beauty to not be woken on Tuesday. But the ‘halfer “variant”’ sheet says P(H, T, N) + P(T, T, N) = 0 + 0 = 0, so that sheet’s way of getting 1⁄2 must differ from how my intuition works.
(ETA—Unless I’m misunderstanding the spreadsheet, which is always possible.)
Yeah, that “Halfer variant” was my best attempt at making sense of the 1⁄2 answer, but it’s not very convincing even to me anymore.
That program is simple enough that you can easily compute expectations of your 8 counts analytically.
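(For instance, assuming the standard setup, over N trials the four nonzero counts each have expectation N/2: Heads/Monday/Woken, Heads/Tuesday/Not-woken, Tails/Monday/Woken, and Tails/Tuesday/Woken; the other four combinations are zero. So heads-per-waking converges to (N/2)/(3N/2) = 1⁄3, while heads-per-flip converges to (N/2)/N = 1⁄2.)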
Your program looks good here; your code looks a lot like mine, and I ran it and got ~1/2 for P(H) and ~1/3 for F(H|W). I’ll try to compare it to your spreadsheet.
Well, perhaps because relative frequencies aren’t always probabilities?
Of course. But if I simulate the experiment more and more times, the relative frequencies converge on the probabilities.
Even in the limit not all relative frequencies are probabilities. In fact, I’m quite sure that in the limit ntails/wakings is not a probability. That’s because you don’t have independent samples of wakings.
But if there is a probability to be found (and I think there is) the corresponding relative frequency converges on it almost surely in the limit.
I don’t understand.
I tried to explain it here: http://lesswrong.com/lw/28u/conditioning_on_observers/1zy8
Basically, the two wakings on tails should be thought of as one waking. You’re just counting the same thing twice. When you include counts of variables that have a correlation of 1 in your denominator, it’s not clear what you are getting back. The thirders are using a relative frequency that doesn’t converge to a probability.
This is true if we want the ratio of tails to wakings. However...
Despite the perfect correlation between some of the variables, one can still get a probability back out—but it won’t be the probability one expects.
Maybe one day I decide I want to know the probability that a randomly selected household on my street has a TV. I print up a bunch of surveys and put them in people’s mailboxes. However, it turns out that because I am very absent-minded (and unlucky), I accidentally put two surveys in the mailboxes of people with a TV, and only one in the mailboxes of people without TVs. My neighbors, because they enjoy filling out surveys so much, dutifully fill out every survey and send them all back to me. Now the proportion of surveys that say ‘yes, I have a TV’ is not the probability I expected (the probability of a household having a TV) - but it is nonetheless a probability, just a different one (the probability of any given survey saying, ‘I have a TV’).
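(A quick sketch of the survey example, with a hypothetical street where half the households own a TV. Per household the answer comes out 1⁄2; per survey it comes out 2⁄3, the same kind of shift that turns 1⁄2 into 1⁄3 in Sleeping Beauty:)

```python
import random

def survey(households=100000, p_tv=0.5):   # p_tv is a hypothetical rate
    tv_households = surveys = tv_surveys = 0
    for _ in range(households):
        has_tv = random.random() < p_tv
        n = 2 if has_tv else 1       # two surveys by mistake if they have a TV
        tv_households += has_tv
        surveys += n
        tv_surveys += n if has_tv else 0
    print("P(household has TV)         =", tv_households / households)  # ~1/2
    print("P(survey says 'I have a TV') =", tv_surveys / surveys)       # ~2/3

survey()
```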
That’s a good example. There is a big difference though (it’s subtle). With sleeping beauty, the question is about her probability at a waking. At a waking, there are no duplicate surveys. The duplicates occur at the end.
That is a difference, but it seems independent from the point I intended the example to make. Namely, that a relative frequency can still represent a probability even if its denominator includes duplicates—it will just be a different probability (hence why one can get 1⁄3 instead of 1⁄2 for SB).
Ok, yes, sometimes relative frequencies with duplicates can be probabilities, I agree.
Morendil,
This is strange. It sounds like you have been making progress towards settling on an answer, after discussion with others. That would suggest to me that discussion can move us towards consensus.
I like your approach a lot. It’s the first time I’ve seen the thirder argument defended with actual probability statements. Personally, I think there shouldn’t be any probability mass on ‘not woken’, but that is something worth thinking about and discussing.
One thing that I think is odd: thirders know she has nothing to update on when she is woken, because they admit she will give the same answer regardless of whether it’s heads or tails. If she really had new information that was correlated with the outcome, her credence would move towards heads when heads, and towards tails when tails.
Consider my cancer intuition pump example. Everyone starts out thinking there is a 50% chance they have cancer. Once woken, regardless of whether they have cancer or not, they all shift to 90%. Did they really learn anything about their disease state by being woken? If they did, those with cancer would have shifted their credence up a bit, and those without would have shifted down. That’s what updating is.
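(A quick sketch of this intuition pump, using the setup discussed in the replies: nine wakings if cancer, one if not. Per patient the frequency of cancer is 1⁄2; per waking it is 9⁄10:)

```python
import random

def cancer_pump(patients=100000):
    cancer_wakings = total_wakings = 0
    for _ in range(patients):
        cancer = random.random() < 0.5   # 50% prior for each patient
        n = 9 if cancer else 1           # woken 9 times if cancer, else once
        total_wakings += n
        if cancer:
            cancer_wakings += n
    print("cancer per patient:", 0.5)                             # by construction
    print("cancer per waking: ", cancer_wakings / total_wakings)  # ~0.9

cancer_pump()
```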
In your example the experimenter has learned whether you have cancer. And she reflects that knowledge in the structure of the experiment: you are woken up 9 times if you have the disease.
Set aside the amnesia effects of the drug for a moment, and consider the experimental setup as a contorted way of imparting the information to the patient. Then you’d agree that with full memory, the patient would have something to update on? As soon as the second day. So there is, normally, an information flow in this setup.
What the amnesia does is selectively impair the patient’s ability to condition on available information. It does that in a way which is clearly pathological, and results in the counter-intuitive reply to the question “conditioning on a) your having woken up and b) your inability to tell what day it is, what is your credence?” We have no everyday intuitions about the inferential consequences of amnesia.
Knowing about the amnesia, we can argue that Beauty “shouldn’t” condition on being woken up. But if she does, she’ll get that strange result. If she does have cancer, she is more likely to be woken up multiple times than once, and being woken up at all does have some evidential weight.
All this, though, being merely verbal aids as I try to wrap my head around the consequences of the math. And therefore to be taken more circumspectly than the math itself.
If she does condition on being woken up, I think she still gets 1⁄2. I hate to keep repeating arguments, but what she knows when she is woken up is that she has been woken up at least once. If you just apply Bayes rule, you get 1⁄2.
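(Spelled out: P(Heads | woken at least once) = P(woken at least once | Heads) × P(Heads) / P(woken at least once) = (1 × 1⁄2) / 1 = 1⁄2, since Beauty is woken at least once under either outcome.)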
If conditioning causes her to change her probability, it should do so in such a way that makes her more accurate. But as we see in the cancer problem, people with cancer give the same answer as people without.
Yes, but then we wouldn’t be talking about her credence on an awakening. We’d be talking about her credence on first waking and second waking. We’d treat them separately. With amnesia, 2 wakings are the same as 1. It’s really just one experience.
Apply it to what terms?
I’m not sure what more I can say without starting to repeat myself, too. All I can say at this point, having formalized my reasoning as both a Python program and an analytical table giving out the full joint distribution, is “Where did I make a mistake?”
Where’s the bug in the Python code? How do I change my joint distribution?
I like the halfer variant version of your table. I still need to think about your distributions more, though. I’m not sure it makes sense to have a variable ‘woken that day’ for this problem.
Congratulations on getting to that point, I figure.