I’ve started at your latest post and recursively tried to find where you made a mistake.
I think you’d benefit more if you read them in the right order starting from here.
Philosophers answer “Why not?” to the question of centered worlds because nothing breaks and we want to consider the questions of ‘when are we now?’ and ‘where are we now?’.
Sure, we want a lot of things. But apparently we can’t always have everything we want. To preserve the truth of our statements, we need to follow the math wherever it leads and not push it where we would like it to go. And where the math goes—that is what we should want.
Am I understanding you correctly that you reject P(today is Monday) as a valid probability in general (not just in sleeping beauty)?
This post refers to several alternative problems where P(today is Monday) is a coherent probability, such as the Single-Awakening and No-Coin-Toss problems, which were introduced in the previous post. And here I explain the core principle: when only one day is observed in one run of the experiment, you can coherently define what “today” means—the day from this iteration of the experiment. A random day. Monday xor Tuesday.
This is how wrong models try to treat Monday and Tuesday in Sleeping Beauty. As if they happen at random. But they do not. There is an order between them, and so they can’t be treated this way. Today can’t be Monday xor Tuesday, because on Tails both Monday and Tuesday do happen.
As a matter of fact, there is another situation where you can coherently talk about “today”, which I initially missed. “Today” can mean “any day”. So, for example, in Technicolor Sleeping Beauty from the next post, you can have a coherent expectation to see red with 50% probability and blue with 50% probability on the day of your awakening, because it’s the same for every day. But you still can’t talk about “probability that the coin is Heads today”, because on Monday and Tuesday these probabilities are different.
So in practice, the limitation only applies to Sleeping-Beauty-type problems, where there are multiple awakenings with memory loss in between per iteration of the experiment, and no consistent probabilities for every awakening. But generally, I think it’s always helpful to understand what exactly you mean by “today” in any probability theory problem.
axiomatically deciding that 1⁄3 is the wrong probability for sleeping beauty
I do not decide anything axiomatically. But I notice that the existing axioms of probability theory do not allow a predictable update in favor of Tails in 100% of iterations of the experiment, nor do they allow a fair coin toss to have an unconditional probability for Heads equal to 1⁄3.
And then I notice that the justification people came up with for such situations, about a “new type of evidence” that a person receives, is based on nothing but some philosopher wanting it to be this way. He didn’t come up with any new math, didn’t prove any theorems. He simply didn’t immediately notice any contradictions in his reasoning. And when a counterexample was brought up, he simply doubled down. Suffice to say, that’s absolutely not how anything is supposed to work.
if everything else seems to work, is it not much simpler to accept that 1⁄3 is the correct answer and then you don’t have to give up considering whether today is Monday?
If everything actually worked, then the situation would be quite different. However, my previous post explores how every attempt to model the Sleeping Beauty problem based on the framework of centred possible worlds fails one way or another.
You can also clearly see it in the Statistical Analysis section of this post. I don’t see how this argument can be refuted, frankly. If you treat Tails&Monday and Tails&Tuesday as different elementary outcomes, then you can’t possibly keep their correct order, while it’s in the definition of the experiment that on Tails the Monday awakening is always followed by the Tuesday awakening and that the Beauty is fully aware of it. Events that happen in sequence can’t be mutually exclusive, and vice versa. I’m even formally proving it in the comments here.
And so, we can just accept that Tails&Monday and Tails&Tuesday are the same outcome of the probability space and suddenly everything adds up to normality. No paradox, no issues with statistical analysis, no suboptimal bets, no unjustified updates and no ungrounded philosophical handwaving. Seems like the best deal to me!
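For illustration, here is a minimal sketch of this model in Python, with Tails&Monday and Tails&Tuesday folded into a single outcome of the sample space:

```python
# Sketch of the proposed sample space: two mutually exclusive outcomes,
# with both Tails awakenings belonging to the same outcome.
omega = {
    "Heads&Monday": 0.5,          # Heads: one awakening, on Monday
    "Tails&Monday&Tuesday": 0.5,  # Tails: both awakenings, in order
}

# The event "there is a Monday awakening" includes both outcomes;
# the event "there is a Tuesday awakening" includes only the Tails one.
p_monday = sum(p for outcome, p in omega.items() if "Monday" in outcome)
p_tuesday = sum(p for outcome, p in omega.items() if "Tuesday" in outcome)
print(p_monday, p_tuesday)  # 1.0 0.5
```

These are exactly the values P(Monday) = 1 and P(Tuesday) = 1⁄2 that appear later in this thread.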
If everything actually worked, then the situation would be quite different. However, my previous post explores how every attempt to model the Sleeping Beauty problem based on the framework of centred possible worlds fails one way or another.
I’ve read the relevant part of your previous post and I have an idea that might help.
Consider the following problem, “Forgetful Brandon”: Adam flips a coin and does NOT show it to Brandon, but shouts “YAY!” with 50% probability if the coin is HEADS (he does not shout if the coin is TAILS). (Brandon knows Adam’s behaviour.) However, Brandon is forgetful, and if Adam doesn’t shout, he doesn’t do any Bayesian calculation and goes off to have an ice cream instead.
Adam doesn’t shout. What should Brandon’s credence of HEADS be after this?
I hope you agree that Brandon not actually doing the Bayesian calculation is irrelevant to the question. We should still do the Bayesian calculation if we are curious about the correct probability. Anytime Brandon updates, he predictably updates in the direction of HEADS—but again, do we care about this? Should we point out a failure of conservation of expected evidence? Again, I say NO: what evidence is actually updated on in the thought experiment isn’t relevant to the correct theoretical Bayesian calculation. We could also imagine a thought experiment with a person who does Bayesian calculations wrong every time, but that would still be irrelevant to the correct credence. If you agree, I don’t see why you object to Sleeping Beauty not doing the calculation in case she is not awakened. (Which is the only objection you wrote under the “Frequency Argument” model.)
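For concreteness, the Bayesian calculation that Brandon skips is simple enough to sketch in Python:

```python
# Forgetful Brandon: P(HEADS | Adam stayed silent), by Bayes' theorem.
p_heads = 0.5                   # prior: fair coin
p_silent_given_heads = 0.5      # Adam shouts "YAY!" only half the time on HEADS
p_silent_given_tails = 1.0      # Adam never shouts on TAILS

p_silent = (p_heads * p_silent_given_heads
            + (1 - p_heads) * p_silent_given_tails)           # 0.75
p_heads_given_silent = p_heads * p_silent_given_heads / p_silent
print(p_heads_given_silent)  # 1/3: silence is evidence for TAILS
```

So an ideal reasoner who observes the silence updates from 1⁄2 down to 1⁄3, whether or not Brandon bothers to do so.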
EDIT: I see later you refer back to another post supposedly addressing a related argument; however, as that would be the fifth step of my recursion, I will postpone inspecting it to tomorrow. But obviously, you can’t give the same response to Forgetful Brandon, as in this case Brandon does observe the non-shout, he just doesn’t update on it. You also declare that P(Awake|Heads) is not 1⁄2 and give “Beauty is awakened on Heads no matter what” as the reason. You often make this mistake in the text, but here it’s too important not to mention: “Awake” does not mean “Beauty is awakened”; it means “Beauty is awake” (don’t forget the centeredness!), and, of course, Beauty is not awake if it is Tuesday and the coin is Heads.
EDIT2: I’m also curious what you would say about the problem with the following modification (“Uninformed Sleeping Beauty”): Initially the full rules of the experiment are NOT explained to Beauty, only that she will have to sleep in the lab, that she will get a drug on Monday night which will make her forget her day, and that she may or may not be awakened on Monday/Tuesday.
However, when she awakens, the full rules are explained to her, i.e. that she will not get awakened on Tuesday if the coin is HEADS.
Note that in this case you can’t object that the prior distribution gives non-zero probability to Tuesday&Heads as Beauty unquestionably has 1⁄4 credence in that before they explain the full rules to her.
EDIT3: Missed that Beauty might think it’s Wednesday too in the previous case before being told the full rules, so let’s consider instead the following (“Misinformed Sleeping Beauty”): Initially the full rules of the experiment are NOT explained to Beauty, only that she will have to sleep in the lab and that she will get a drug on Monday night which will make her forget her day. Furthermore, she is told the falsehood that she will be awakened on Monday AND Tuesday whatever happens!
However, when she awakens, the full rules are explained to her, i.e. that she won’t get/wouldn’t have gotten awakened on Tuesday if the coin is HEADS.
Note that in this case you can’t object that the prior distribution gives non-zero probability to Tuesday&Heads as Beauty unquestionably has 1⁄4 credence in that before they explain the actual rules to her.
I’ll start by addressing the actual crux of our disagreement.
You often make this mistake in the text, but here it’s too important not to mention: “Awake” does not mean “Beauty is awakened”; it means “Beauty is awake” (don’t forget the centeredness!), and, of course, Beauty is not awake if it is Tuesday and the coin is Heads.
As I’ve written in this post, you can’t just say the magic word “centredness” and think that you’ve solved the problem. If you want a model that can have an event that changes its truth predicate with the passage of time during the same iteration of the probability experiment, you need to formally construct such a model, rewriting probability theory from scratch, because our current probability theory doesn’t allow that.
In probability theory, one outcome of a sample space is realized per iteration of the experiment. And so, for this iteration of the experiment, every event which includes this outcome is considered True. All the “centred” models, therefore, behave as if Sleeping Beauty consists of two outcomes of the probability experiment. As if Monday and Tuesday happen at random and, to determine whether the Beauty has another awakening, the coin is tossed anew. And because of that, they contradict the conditions of the experiment, according to which the Tails&Tuesday awakening always happens after Tails&Monday. This is shown in the Statistical Analysis section. It’s a model for a random awakening, not for the current awakening, because the current awakening is not random.
So no, I do not make this mistake in the text. This is the correct way to talk about Sleeping Beauty. The event “The Beauty is awakened in this experiment” is properly defined. The event “The Beauty is awake on this particular day” is not, unless you find some new clever way to define it—feel free to try.
Consider the following problem: “Forgetful Brandon”
I must say, this problem is very unhelpful to this discussion. But sure, let’s analyze it regardless.
I hope you agree that Brandon not actually doing the Bayesian calculation is irrelevant to the question.
I suppose? Such questions are usually about ideal rational agents, so yes, it shouldn’t matter what a specific non-ideal agent does. But then why even add this extra complication to the question if it’s irrelevant?
Anytime Brandon updates he predictably updates in the direction of HEADS
Well, that’s his problem, honestly. I thought we agreed that what he does is irrelevant to the question.
Also, his behavior here is not as bad as what you want the Beauty to do—at least Brandon doesn’t update in favor of Heads on literally every iteration of the experiment.
should we point out a failure of conservation of expected evidence?
I mean, if we want to explain Brandon’s failure at rationality, we should. The reason why Brandon’s behaviour is not rational is exactly that: he fails at conservation of expected evidence. There are two possible signals that he may receive: “Yay”, and “no Yay and getting ice cream”. These signals are differently correlated with the outcome of the coin toss. If he behaved rationally, he would update on both of them in opposite directions, thereby following the conservation of expected evidence.
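A quick numeric check of that claim: weighting the two posteriors by the probabilities of the two signals recovers the prior exactly, as conservation of expected evidence requires. A sketch:

```python
# Conservation of expected evidence for the two signals Brandon may receive.
p_heads = 0.5
p_yay = p_heads * 0.5            # "Yay": only on HEADS, half the time -> 0.25
p_silent = 1 - p_yay             # "no Yay and ice cream" -> 0.75

posterior_given_yay = 1.0                              # a shout proves HEADS
posterior_given_silent = (p_heads * 0.5) / p_silent    # Bayes: 1/3

expected_posterior = (p_yay * posterior_given_yay
                      + p_silent * posterior_given_silent)
print(expected_posterior)  # 0.5, equal to the prior
```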
In principle, it’s possible to construct a better example, where Brandon doesn’t update not because of his personal flaws in rationality but due to the specifics of the experiment. For example, if he couldn’t be sure when exactly Adam is supposed to shout. Say, Adam intended to shout one minute after he saw the result of the coin toss, but Brandon doesn’t know it; according to his information, Adam shouts “Yay” within an interval of three minutes since the coin was tossed. And so he is still waiting, not having updated, after just one minute.
But then, it won’t be irrelevant to the question as you seem to want it for some reason.
I don’t see why you object to Sleeping Beauty not doing the calculation in case she is not awakened. (Which is the only objection you wrote under the “Frequency Argument” model)
I do not object to the fact that the Beauty doesn’t do calculation in case she is not awakened—she literally can’t do it due to the setting of the experiment.
I object to the Beauty predictably updating in favor of Tails when she awakens in every iteration of the experiment, which is a blatant contradiction of conservation of expected evidence. The updating model as a whole describes the Observer Sleeping Beauty problem, where the observer can legitimately not see that the Beauty is awake, and therefore an update on awakening is lawful.
Which is the only objection you wrote under the “Frequency Argument” model
See also Towards the Correct Model, where I point to the core mathematical flaw of the Frequency Argument—ignoring the fact that it works only when P(Heads|Awake) = 1⁄2, which is wrong for Sleeping Beauty. And, of course, the Updating Model fails the Statistical Analysis, as does every other “centred” model.
Uninformed Sleeping Beauty
When the Beauty doesn’t know the actual setting of the experiment, she has a different model, fitting her uninformed state of knowledge. When she is told what is actually going on, she discards it and starts using the correct model from this post.
Metapoint: You write a lot of things in your comments with which I usually disagree; however, I think faster replies are more useful in this kind of conversation than complete replies, so at first I’m only going to reply to the points I consider the most important at the time. If you disagree and believe writing complete replies is more useful, do note that my experience in that case is that, after a while, instead of writing a comment containing a reply to the list of points the other party brought up, I simply drop out of the conversation, and I can’t guarantee that this won’t happen here.
My whole previous comment was meant to address the part of your comment I quoted. Here it is again:
If everything actually worked then the situation would be quite different. However, my previous post explores how every attempt to model the Sleeping Beauty problem, based on the framework of centred possible worlds fail one way or another.
With my previous comment I meant to show you that if you don’t start out with “centered worlds don’t work”, you CAN make it work (very important: here, I haven’t yet said that this is how it works or how it ought to work, merely that it CAN work without some axiom of probability getting hurt).
Still, I struggle to see what your objection is, apart from your intuition that “NO! It can’t work!”
When the Beauty doesn’t know the actual setting of the experiment, she has a different model, fitting her uninformed state of knowledge. When she is told what is actually going on, she discards it and starts using the correct model from this post.
Again, I understand that in the theory you built up this is how it would work; that’s not what I want to argue (yet). I want to argue how it CAN work in another way with credences/centeredness/bayesianism. To counterargue, you would have to show that NO, it can’t work that way. You would have to show that, for some reason, because of some axiom of probability or something, we can’t model Beauty’s credences with probability the moment they learn the relevant info after waking up.
In probability theory, one outcome of a sample space is realized per an iteration of experiment.
Discard the concept of experiment as it might confuse you. If you want to understand how centered world/credence/bayesian epistemology works (to then see that it DOES work), experiment isn’t a good word, because it might lock you into a third-person view, where of course, centeredness does not work (of course, after you understood that bayesianism CAN work, we can reintroduce the word with some nuance).
Your statistical analysis, of course, also assumes the third-person/not-centered view, so of course it won’t help you; but again, we should first talk about whether centeredness CAN work or not. Assuming that it can’t and deriving stuff from that does not prove that it can’t work.
So no, I do not make this mistake in the text. This is the correct way to talk about Sleeping Beauty. The event “The Beauty is awakened in this experiment” is properly defined. The event “The Beauty is awake on this particular day” is not, unless you find some new clever way to define it—feel free to try.
The clever way isn’t that clever to be honest. It’s literally just: don’t assume that it does not work and try it.
I meant to show you that if you don’t start out with “centered worlds don’t work”, you CAN make it work
The clever way isn’t that clever to be honest. It’s literally just: don’t assume that it does not work and try it.
I didn’t start out believing that “centred worlds don’t work”. I suspect you got this impression mostly because you were reading the posts in the wrong order. I started by trying the existing models, noticed that they behave weirdly if we assume that they are describing Sleeping Beauty, and then noticed that they are actually talking about different problems—for which their behavior is completely normal.
And then, while trying to understand what is going on, I stumbled upon the notion of centred possible worlds and their complete lack of mathematical justification, and it opened my eyes. And then I was immediately able to construct the correct model, which completely resolves the paradox, adds up to normality and has no issues whatsoever.
But in hindsight, if I had started from the assumption that centred possible worlds do not work, that would have been the smart thing to do, and it would have saved me a lot of time.
With my previous comment I meant to show you that if you don’t start out with “centered worlds don’t work”, you CAN make it work (very important: here, I haven’t yet said that this is how it works or how it ought to work, merely that it CAN work without some axiom of probability getting hurt).
Well, you didn’t. All this time you’ve just been insisting on privileged treatment for them: “Can work until proven otherwise”. Now, that’s not how math works. If you come up with some new concept, be so kind as to prove that it is a coherent mathematical entity and establish its properties. I’m more than willing to listen to such attempts. The problem is—there are none. People just seem to think that saying “first-person perspective” allows them to build a sample space from non-mutually-exclusive outcomes.
Still, I struggle to see what your objection is, apart from your intuition that “NO! It can’t work!”
It’s like you didn’t even read my posts or my comments.
By definition, a sample space can be constructed only from elementary outcomes, which have to be mutually exclusive. Tails&Monday and Tails&Tuesday are not mutually exclusive—they happen to the same person in the same iteration of the probability experiment during the same outcome of the coin toss. The “centredness” framework attempts to treat them as elementary outcomes regardless. Therefore, it contradicts the definition of a sample space.
This is what the statistical analysis clearly demonstrates. If a mathematical probabilistic model fits some real-world process, then the outcomes it produces have to have the same statistical properties as the outcomes of the real-world process. All “centred” models produce outcomes with different properties compared to what actually running the Sleeping Beauty experiment would produce. Therefore, they do not correctly fit the Sleeping Beauty experiment.
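This difference is easy to demonstrate. As a minimal sketch, the following simulation runs the actual experiment and checks the property that a model sampling awakenings as independent, mutually exclusive outcomes cannot reproduce: every Tails&Monday awakening is deterministically followed by Tails&Tuesday.

```python
import random

def run_sleeping_beauty(n_runs: int, seed: int = 0) -> list:
    """Record the ordered sequence of awakenings over n_runs of the experiment."""
    rng = random.Random(seed)
    awakenings = []
    for _ in range(n_runs):
        if rng.random() < 0.5:                   # Heads: Monday awakening only
            awakenings.append(("Heads", "Monday"))
        else:                                    # Tails: Monday, then Tuesday
            awakenings.append(("Tails", "Monday"))
            awakenings.append(("Tails", "Tuesday"))
    return awakenings

seq = run_sleeping_beauty(100_000)
tails_mondays = [i for i, a in enumerate(seq) if a == ("Tails", "Monday")]
followed = sum(1 for i in tails_mondays if seq[i + 1] == ("Tails", "Tuesday"))
print(followed / len(tails_mondays))  # 1.0 in the real process; a model drawing
# awakenings independently at 1/3 each would give about 1/3 here instead
```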
I want to argue how it CAN work in another way with credences/centeredness/bayesianism.
If you want to understand how centered world/credence/bayesian epistemology works
Don’t mix Bayesianism and credences with this “centredness” nonsense. Bayesianism is not in trouble—I’ve been appealing to Bayes’ theorem a lot throughout my posts, and it’s been working just fine. Likewise, credence in an event is simply probability conditional on all the evidence—I’m exploring all manner of conditional probabilities in my model. Bayesianism and credences are not some “other way”. It is the exact same way. It’s probability theory. “Centredness” is not.
experiment isn’t a good word, because it might lock you into a third-person view
Your statistical analysis, of course, also assumes the third-person
I don’t understand what you mean by “third-person view” here, and I suspect neither do you.
The statistical test is very much about Beauty’s perspective—only awakenings that she experiences are noted down, not all the states of the experiment. Heads&Tuesday isn’t added to the list, which would be the case if we were talking about the third-person perspective.
On the other hand, when you were talking about justifying an update on awakening, you were treating the situation from the observer’s perspective—someone who has a non-zero probability for the Heads&Tuesday outcome and could realistically not observe the Beauty being awakened and, therefore, updates upon seeing her indeed awakened.
“Centred” models do not try to talk about Beauty’s perspective. They treat different awakened states of the Beauty as if they were different people, existing independently of each other, thereby contradicting the conditions of the setting, according to which all the awakenings happen to the same person. Unless, of course, there is some justification why treating Beauty’s awakened states this way is acceptable. The only thing resembling such a justification that I’ve encountered is vaguely pointing towards the amnesia that the Beauty is experiencing, which I deal with in the section Effects of Amnesia. If there is something else, I’m open to considering it, but the initial burden of proof is on the “centredness” enthusiasts.
Now, that’s not how math works. If you come up with some new concept, be so kind as to prove that it is a coherent mathematical entity and establish its properties.
This whole conversation isn’t about math. It is about philosophy. Math is proving theorems in various formal systems. If you are a layman, I imagine you might find it confusing that you can encounter mathematicians who seem to have conversations about math in common English. I can assure you that every mathematician in that conversation is able to translate their comments into the simple language of the given formal system they are working in, they are just simply so much of an expert that they can transmit and receive the given information more efficiently by speaking on a higher level of abstraction.
It is not possible to translate the conversation that we’re having to a simple formal system as it’s about how we should/can model some aspect of reality (which is famously dirty and complicated) with some specific mathematical object.
To be more concrete: I want to show you that we can model (and later that we should indeed) a person’s beliefs at some given point in time with probability spaces.
This is inherently a philosophical and not a mathematical problem and I don’t see how you don’t understand this concept and would appreciate if you could elaborate on this point as much as possible.
You keep insisting that
By definition, a sample space can be constructed only from elementary outcomes, which have to be mutually exclusive. Tails&Monday and Tails&Tuesday are not mutually exclusive—they happen to the same person in the same iteration of the probability experiment during the same outcome of the coin toss. The “centredness” framework attempts to treat them as elementary outcomes regardless. Therefore, it contradicts the definition of a sample space.
If we are being maximally precise, then NO: the math of probability spaces prescribes a few formal statements which (this is very important), in some cases, can be used to model experiments and events happening or not happening in reality, but the mathematical objects themselves have no concept of ‘experiment’ or ‘time’ or anything like those. I won’t copy it here, but you can look these up on the net yourself if you want; here is one such source. Don’t be confused by the wiki sometimes using English words; rest assured, any mathematician could translate it to any sufficiently expressive, simple formal system using variable names like a1, x3564789, etc. (If you really think it would help you and you don’t believe what I’m saying otherwise, I can translate it to first-order logic for you.)
Now that we hopefully cleared up that we are not arguing about math, it’s time for more interesting parts:
Can a probability space model a person’s beliefs at a certain point in time?
Yes, it can!
First, I would like to show you that your solution does NOT model a person’s belief at a certain time:
1. People have certain credences in the statement “Today is Monday.”
   Do note that the above statement is fully about reality and not about math in any way, and so it leans on our knowledge about humans and their minds.
   You can test it in various ways: e.g. asking people “hey, sorry to bother you, is today Monday?”, or setting up an ice cream stand which is only open on Monday in one direction from the lab and another in the opposite direction which is only open on Tuesday, making this fact known to the subjects of an experiment who are then asked to get you ice cream, and observing where they go, etc.
2. In particular, Beauty, when awoken, has a certain credence in the statement “Today is Monday.”
   This follows from 1.
3. Your model does not model Beauty’s credences in the statement “Today is Monday”.
   You can see this in various ways, and your model is pretty weird, but because I believe you will agree with this, I won’t elaborate here unless asked later.
4. Therefore, your solution does NOT model a person’s belief at a certain time.
   This follows from 2 and 3.
Before I go further, I think I will ask you whether everything is clear and whether you agree with everything I wrote so far.
This whole conversation isn’t about math. It is about philosophy.
The tragedy of the whole situation is that people keep thinking that.
Everything is “about philosophy” until you find a better way to formalize it. Here we have a better way to formalize the issue, which you keep ignoring. Let me spell it out for you once more:
If a mathematical probabilistic model fits some real-world process, then the outcomes it produces have to have the same statistical properties as the outcomes of the real-world process.
If we agree on this philosophical statement, then we reduced the disagreement to a mathematical question, which I’ve already resolved in the post. If you disagree, then bring up some kind of philosophical argument which we will be able to explore.
If you are a layman
I’m not. And frankly, it baffles me that you think that you need to explain that it’s possible to talk about math using natural language, to a person who has been doing it for multiple posts in a row.
mathematical objects itself have no concept of ‘experiment’ or ‘time’ or anything like those.
The more I post about anthropics, the clearer it becomes that I should’ve started with posting about probability theory 101. My naive hope that the average LessWrong reader is well familiar with the basics and just confused about more complicated cases has been crushed beyond salvation.
Can a probability space model a person’s beliefs at a certain point in time?
This question is vague, in a manner similar to what I’ve seen in Lewis’s paper. Let’s specify it so that we both understand what we are talking about.
Did you mean to ask 1 or 2:
1. Can a probability space at all model some person’s belief in some circumstance at some specific point in time?
2. Can a probability space always model any person’s belief in any circumstances at any unspecified point in time?
The way I understand it, we agree on 1 but disagree on 2. There are definitely situations where you can correctly model uncertainty about time via probability theory. As a matter of fact, that’s most of the cases. You won’t be able to resolve our disagreement by pointing to such situations—we agree on them.
But you seem to have generalized this to mean that probability theory always has to be able to do it. And I disagree. A probability space can model only aspects of reality that can be expressed in terms of it. If you want to express uncertainty between “today is Monday” and “today is Tuesday”, you need a probability space for which Monday and Tuesday are mutually exclusive outcomes, and it’s possible to design a specific setting—like the one in Sleeping Beauty—where they are not: where on the same trial both Monday and Tuesday are realized and the participant is well aware of it.
In particular, Beauty, when awoken, has a certain credence in the statement “Today is Monday.”
No she does not. And it’s easy to see if you actually try to formally specify what is meant here by “today” and what is meant by “today” in regular scenarios. Consider me calling your bluff about being ready to translate to first order logic at any moment.
1. Only one awakening, regardless of the outcome of the coin.
2. Two awakenings with memory loss, regardless of the outcome of the coin.
3. Regular Sleeping Beauty
Your goal is to formally define “today” using first-order logic, so that a person participating in such experiments could coherently talk about the event “today the coin is Heads”.
My claim is: it’s very easy to do so in 1. It’s harder, but still doable, in 2. And it’s not possible to do so in 3 without contradicting the math of probability theory.
setting up an ice cream stand which is only open on Monday in one direction from the lab and another in the opposite direction which is only open on Tuesday, making this fact known to the subjects of an experiment who are then asked to get you ice cream, and observing where they go
This is not a question simply about probability/credence. It also involves utilities, and it’s implicitly assumed that the participant prefers to walk a shorter distance rather than a longer one. Essentially, you propose a betting scheme where:
P(Monday)U(Monday) = P(Tuesday)U(Tuesday)
According to my model P(Monday) = 1, P(Tuesday) = 1⁄2, so:
2U(Monday) = U(Tuesday), therefore the odds are 2:1. As you can see, my model deals with such situations without any problem.
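A minimal numeric sketch of that calculation:

```python
# Indifference point for the ice cream walk, under the model above:
# P(Monday) * U(Monday) = P(Tuesday) * U(Tuesday)
p_monday, p_tuesday = 1.0, 0.5
utility_ratio = p_monday / p_tuesday   # U(Tuesday) / U(Monday) at indifference
print(f"{utility_ratio:.0f}:1")        # 2:1 odds
```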
No she does not. And it’s easy to see if you actually try to formally specify what is meant here by “today” and what is meant by “today” in regular scenarios. Consider me calling your bluff about being ready to translate to first order logic at any moment.
I said that I can translate the math of probability spaces to first-order logic, and I explicitly said that our conversation can NOT be translated to first-order logic, as proof that it is not about math but rather about philosophy. Please reread that part of my previous comment.
And frankly, it baffles me that you think that you need to explain that it’s possible to talk about math using natural language, to a person who has been doing it for multiple posts in a row.
That is not what I explained and I suggest you reread that part. Here it is again:
This whole conversation isn’t about math. It is about philosophy. Math is proving theorems in various formal systems. If you are a layman, I imagine you might find it confusing that you can encounter mathematicians who seem to have conversations about math in common English. I can assure you that every mathematician in that conversation is able to translate their comments into the simple language of the given formal system they are working in, they are just simply so much of an expert that they can transmit and receive the given information more efficiently by speaking on a higher level of abstraction.
It is not possible to translate the conversation that we’re having to a simple formal system as it’s about how we should/can model some aspect of reality (which is famously dirty and complicated) with some specific mathematical object.
The structure of my argument here is the following:
1. Math is about concepts in formal systems; therefore, an argument about math can be expressed in some simple, formal language.
2. We are having an argument which can’t be translated to a formal system.
3. Therefore, we are not arguing about math.
The more I post about anthropics the clearer it becomes that I should’ve started with posting about probability theory 101. My naive hope that the average LessWrong reader is familiar with the basics and just confused about the more complicated cases has been crushed beyond salvation.
Ah yes, clearly, the problem is that I don’t understand basic probability theory. (I’m a bit sad that this conversation happened to take place with my pseudonymous account.) In my previous comment, I explicitly tried to preempt your confusion about the English word ‘experiment’ with my paragraph (the part of it that you, for some reason, did not quote), specifically linking a wiki which contains only the mathematical part of ‘probability’ and not the philosophical interpretations that are commonly paired with it, but alas, it didn’t matter.
>In particular, Beauty, when awoken, has a certain credence in the statement “Today is Monday.”
No she does not. And it’s easy to see if you actually try to formally specify what is meant here by “today” and what is meant by “today” in regular scenarios. Consider me calling your bluff about being ready to translate to first order logic at any moment.
If you are not ready to accept that people have various levels of belief in the statement “Today is Monday” at all times, then I don’t think this conversation can go anywhere, to be honest. This is an extremely basic fact about reality.
EDIT: gears, in the first part you selected I’m answering an accusation of bluffing in a matter-of-fact way; how is that too combative? Also, feel free to chime in at any point, it is an open forum after all.
Meta: the notion of writing probability 101 wasn’t addressed to you specifically. It was a release of my accumulated frustration from not-particularly-productive arguments with several different people, which again and again led to the realization that the crux of disagreement lies in the very basics; you are only one of those people.
You are confusing to talk to, with your habit of raising seemingly unrelated points and then immediately dropping them. And yet you didn’t deserve the full emotional blow that you apparently received, and I’m sorry about it.
Writing a probability 101 still seems to me like a constructive solution to such situations. It would provide an opportunity to resolve these kinds of disagreements as soon as they arise, instead of having to backtrack to them from a very specific topic. I may still add it to my todo list.
Ah yes, clearly, the problem is that I don’t understand basic probability theory. (I’m a bit sad that this conversation happened to take place with my pseudonymous account.) In my previous comment, I explicitly tried to preempt your confusion about the English word ‘experiment’ with my paragraph (the part of it that you, for some reason, did not quote), specifically linking a wiki which contains only the mathematical part of ‘probability’ and not the philosophical interpretations that are commonly paired with it, but alas, it didn’t matter.
I figured that either you don’t know what “probability experiment” is or you are being confusing on purpose. I prefer to err in the direction of good faith, so the former was my initial hypothesis.
Now, considering that you admit that you were perfectly aware of what I was talking about, to the point where you specifically tried to cherry-pick around it, the latter became more likely. Please don’t do it anymore. Communication is hard as it is. If you know what a well-established thing is, but believe it’s wrong—just say so.
Nevertheless, from this exchange, I believe, I now understand that you think that “probability experiment” isn’t a mathematical concept, but a philosophical one. I could just accept this for the sake of the argument, and we would be in a situation where we have a philosophical consensus about an issue, to the point where it’s part of the standard probability theory course that is taught to students, and you are trying to argue against it, which would put quite a burden of proof on your shoulders.
But, as a matter of fact, I don’t see anything preventing us from formally defining “probability experiment”. We already have a probability space. Now we just need a variable going from 1 to infinity for the iteration of the probability experiment, and a function which takes the sample space and the value of this variable as input and returns the one outcome that is realized in this particular iteration.
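To make this concrete, here is a minimal sketch of that construction in Python (the function names and the seeding scheme are my own, purely illustrative):

```python
import random

# A toy "probability experiment": a probability space (sample space plus
# probabilities) and a realization function indexed by the iteration number.
sample_space = ["Heads", "Tails"]
probabilities = {"Heads": 0.5, "Tails": 0.5}

def realize(iteration, seed=0):
    """Return the one outcome realized in the given iteration of the experiment."""
    rng = random.Random(seed * 1_000_003 + iteration)
    return rng.choices(sample_space,
                       weights=[probabilities[o] for o in sample_space])[0]

# The same iteration always yields the same realized outcome:
assert realize(7) == realize(7)

# While across many iterations the frequencies approach the probabilities:
outcomes = [realize(i) for i in range(10_000)]
print(outcomes.count("Heads") / len(outcomes))  # close to 0.5
```

The point is only that nothing stops us from writing the iteration index and the realization function down explicitly; the probability space itself is untouched.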
I said that I can translate the math of probability spaces to first order logic, and I explicitly said that our conversation can NOT be translated to first order logic as proof that it is not about math
Sorry, I misunderstood you.
Also a reminder that you still haven’t addressed this:
If a mathematical probabilistic model fits some real world process, then the outcomes it produces have to have the same statistical properties as the outcomes of the real world process.
If we agree on this philosophical statement, then we reduced the disagreement to a mathematical question, which I’ve already resolved in the post. If you disagree, then bring up some kind of philosophical argument which we will be able to explore.
Anyway, are you claiming that it’s impossible to formalize what “today” in “today the coin is Heads” means even in No-Coin-Toss problem? Why are you so certain that people have to have credence in this statement then? Would you then be proven wrong if I indeed formally specify what “Today” means?
Now, do you see, why this method doesn’t work for Two Awakenings Either Way and Sleeping Beauty problems?
If you are not ready to accept that people have various levels of belief in the statement “Today is Monday” at all times, then I don’t think this conversation can go anywhere, to be honest. This is an extremely basic fact about reality.
In reality people may have all kinds of confused beliefs and ill-defined concepts in their heads. But the question of the Sleeping Beauty problem is about what the ideal rational agent is supposed to believe. When I say “Beauty does not have such credence” I mean that an ideal rational agent ought not to; the probability of such an event is ill-defined.
As you may have noticed, I’ve successfully explained the difference in real-life beliefs about optimal actions in the ice-cream stand scenario without using such ill-defined probabilities.
I hope it’s okay if I chime in (or butt in). I’ve been vaguely trying to follow along with this series, albeit without trying too hard to think through whether I agree or disagree with the math. This is the first time that what you’ve written has caused me to go “what?!?”
First of all, that can’t possibly be right. Second of all, it goes against everything you’ve been saying for the entire series. Or maybe I’m misunderstanding what you meant. Let me try rephrasing.
(One meta note on this whole series that makes it hard for me to follow sometimes: you use abbreviations like “Monday” as shorthand for “a Monday awakening happens” and expect people to mentally keep track that this is definitely not shorthand for “today is Monday” … I can barely keep track of whether heads means one awakening or two… maybe you should have labeled the two sides of the coin ONE and TWO instead of heads and tails)
Suppose someone who has never heard of the experiment happens to call sleeping beauty on her cell phone during the experiment and ask her “hey, my watch died and now I don’t know what day it is; could you tell me whether today is Monday or Tuesday?”
(This is probably a breach of protocol and they should have confiscated her phone until the end, but let’s ignore that.)
Are you saying that she has no good way to reason mathematically about that question? Suppose they told her “I’ll pay you a hundred bucks if it turns out you’re right, and it costs you nothing to be wrong, please just give me your best guess”. Are you saying there’s no way for her to make a good guess? If you’re not saying that, then since probabilities are more basic than utilities, shouldn’t she also have a credence?
In fact, let’s try a somewhat ad-hoc and mostly unprincipled way to formalize this. Let’s say there’s a one percent chance per day that her friend forgets what day it is and decides to call her to ask. (One percent sounds like a lot but her friend is pretty weird) Then there’s a 2% chance of it happening if there are two awakenings, and one percent if there’s only one awakening. If there are two awakenings then Monday and Tuesday are equally likely; if there’s only one awakening then it’s definitely Monday. Thus, given that her friend is on the phone, today is more likely to be Monday than Tuesday.
Okay, maybe that’s cheating… I sneaked in a Rare Event. Suppose we make it more common? Suppose her friend forgets what day it is 10% off the time. The logic still goes through: given that her friend is calling, today is more likely to be Monday than Tuesday.
Okay, 10% is still too rare. Let’s try 100%. This seems a bit confusing now. From her friend’s perspective, Monday is just as good as Tuesday for coming down with amnesia. But from sleeping beauty’s perspective, GIVEN THAT the experiment is not over yet, today is more likely to be Monday than Tuesday. This is true even though she might be woken up both days.
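In fact, a quick Monte Carlo sketch (toy code, my own names) seems to bear this out for any call rate: counting the awakening-days on which a call occurs, Monday comes out twice as likely as Tuesday:

```python
import random

def simulate(p_call, n=100_000, seed=1):
    """Count awakening-days on which the forgetful friend happens to call."""
    rng = random.Random(seed)
    monday_calls = tuesday_calls = 0
    for _ in range(n):
        tails = rng.random() < 0.5           # Tails -> two awakenings
        days = ["Mon", "Tue"] if tails else ["Mon"]
        for day in days:
            if rng.random() < p_call:        # friend calls on this day
                if day == "Mon":
                    monday_calls += 1
                else:
                    tuesday_calls += 1
    return monday_calls, tuesday_calls

for p in (0.01, 0.10, 1.00):
    mon, tue = simulate(p)
    print(p, round(mon / (mon + tue), 3))    # ≈ 0.667 for every p
```

The 2:1 ratio survives because the Monday awakening happens in every iteration while the Tuesday one happens only on Tails, whatever the call rate is.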
I understand that it all may be somewhat counterintuitive. I’ll try to answer whatever questions you have. If you think you have some way to formally define what “Today” means in Sleeping Beauty—feel free to try.
Second of all, it goes against everything you’ve been saying for the entire series.
Seems very much in accordance with what I’ve been saying.
Throughout the series I keep repeating the point that all we need to solve anthropics is to follow probability theory where it leads and then there will be no paradoxes. This is exactly what I’m doing here. There is no formal way to define “Today is Monday” in Sleeping Beauty and so I simply accept this, as the math tells me to, and then the “paradox” immediately resolves.
Suppose someone who has never heard of the experiment happens to call sleeping beauty on her cell phone during the experiment and ask her “hey, my watch died and now I don’t know what day it is; could you tell me whether today is Monday or Tuesday?” (This is probably a breach of protocol and they should have confiscated her phone until the end, but let’s ignore that.)
Are you saying that she has no good way to reason mathematically about that question? Suppose they told her “I’ll pay you a hundred bucks if it turns out you’re right, and it costs you nothing to be wrong, please just give me your best guess”. Are you saying there’s no way for her to make a good guess? If you’re not saying that, then since probabilities are more basic than utilities, shouldn’t she also have a credence?
Good question. First of all, as we are talking about betting, I recommend you read the next post, where I explore it in more detail, especially if you are not fluent in expected utility calculations.
Secondly, we can’t ignore the breach of the protocol. You see, if anything breaks the symmetry between awakenings, the experiment changes in a substantial manner. See Rare Event Sleeping Beauty, where the probability that the coin is Heads can actually be 1⁄3.
But we can make a similar situation without breaking the symmetry. Suppose that on every awakening a researcher comes to the room and offers the Beauty a bet on which day it currently is. At what odds should the Beauty take the bet?
This is essentially the same betting scheme as the ice-cream stand, which I deal with at the end of the previous comment.
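A toy simulation of that scheme (illustrative code with assumed payoffs: the Beauty risks 1 at every awakening and wins her stake times some ratio if the day turns out to be Monday) shows that 2:1 odds, i.e. winning half the stake, make the bet break even:

```python
import random

def bet_value(win_ratio, n=100_000, seed=2):
    """Average profit per iteration from betting 'today is Monday' at every
    awakening: win `win_ratio` units on Monday, lose 1 unit otherwise."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        tails = rng.random() < 0.5
        days = ["Mon", "Tue"] if tails else ["Mon"]
        for day in days:
            total += win_ratio if day == "Mon" else -1.0
    return total / n

print(bet_value(0.5))   # ≈ 0: at 2:1 odds on Monday the bet breaks even
```

Per iteration the expected value is 0.5·r + 0.5·(r − 1) = r − 0.5, so r = 0.5 is the break-even point, in accordance with the 2:1 odds from my model.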
I tried to formalize the three cases you list in the previous comment. The first one was indeed easy. The second one looks “obvious” from symmetry considerations but actually formalizing it seems harder than expected. I don’t know how to do it. I don’t yet see why the second should be possible while the third is impossible.
The second one looks “obvious” from symmetry considerations but actually formalizing it seems harder than expected.
Exactly! I’m glad that you actually engaged with the problem.
The first step is to realize that here “today” can’t mean “Monday xor Tuesday” because such an event never happens. On every iteration of the experiment both Monday and Tuesday are realized. So we can’t say that the participant knows that they are awakened on Monday xor Tuesday.
Can we say that the participant knows that they are awakened on Monday or Tuesday? Sure. As a matter of fact:
P(Monday or Tuesday) = 1
P(Heads|Monday or Tuesday) = P(Heads) = 1⁄2
This works: here the probability that the coin is Heads in this iteration of the experiment happens to be the same as what our intuition tells us P(Heads|Today) is supposed to be. However, we still can’t define “Today is Monday”:
P(Monday|Monday or Tuesday) = P(Monday) = 1
Which doesn’t fit our intuition.
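These frequencies are easy to sanity-check numerically (an illustrative sketch, not part of the formalism itself):

```python
import random

rng = random.Random(3)
n = 100_000
any_day = heads_given_any = monday_given_any = 0
for _ in range(n):
    heads = rng.random() < 0.5
    monday_happens = True          # a Monday awakening occurs in every iteration
    tuesday_happens = not heads    # a Tuesday awakening occurs only on Tails
    if monday_happens or tuesday_happens:  # always true: P(Monday or Tuesday) = 1
        any_day += 1
        heads_given_any += heads
        monday_given_any += monday_happens

print(any_day / n)                           # 1.0
print(round(heads_given_any / any_day, 2))   # ≈ 0.5 = P(Heads | Monday or Tuesday)
print(monday_given_any / any_day)            # 1.0 = P(Monday | Monday or Tuesday)
```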
How can this be? How can we have a seemingly well-defined probability for “Today the coin is Heads” but not for “Today is Monday”? Either “Today” is well-defined or it’s not, right? Take some time to think about it.
What do we actually mean when we say that on an awakening the participant is supposed to believe that the coin is Heads with 50% probability? Is it really about this day in particular? Or is it about something else?
The answer is: we actually mean that on any day of the experiment, be it Monday or Tuesday, the participant is supposed to believe that the coin is Heads with 50% probability. We cannot formally specify “Today” in this problem, but there is a clever, almost cheating, way to specify “Anyday” without breaking anything.
This is not easy. It requires a way to define P(A|B) when P(B) itself is undefined, which is unconventional. Moreover, it requires symmetry: P(Heads|Monday) has to be equal to P(Heads|Tuesday); only then do we have a coherent P(Heads|Anyday).
This makes me uncomfortable. From the perspective of sleeping beauty, who just woke up, the statement “today is Monday” is either true or false (she just doesn’t know which one). Yet you claim she can’t meaningfully assign it a probability. This feels wrong, and yet, if I try to claim that the probability is, say, 2⁄3, then you will ask me “in what sample space?” and I don’t know the answer.
What seems clear is that the sample space is not the usual sleeping beauty sample space; it has to run metaphorically “skew” to it somehow.
If the question were “did the coin land on heads” then it’s clear that this question is of the form “what world am I in?”. Namely, “am I in a world where the coin landed on heads, or not?”
Likewise if we ask “does a Tuesday awakening happen?”… that maps easily to question about the coin, so it’s safe.
But there should be a way to ask about today as well, I think. Let’s try something naive first and see where it breaks.
P(today is Monday | heads) = 100% is fine.
(Or is that tails? I keep forgetting.)
P(today is Monday | tails) = 50% is fine too.
(Or maybe it’s not? Maybe this is where I’m going wrong? Needs a bit of work but I suspect I could formalize that one if I had to.)
But if those are both fine, we should be able to combine them, like so:
heads and tails are mutually exclusive and one of them must happen, so:
P(today is Monday) =
P(heads) • P(today is Monday | heads) +
P(tails) • P(today is Monday | tails) =
0.5 + 0.25 = 0.75
Okay, I was expecting to get 2⁄3 here. Odd. More to the point, this felt like cheating and I can’t put my finger on why.
maybe need to think more later
This makes me uncomfortable. From the perspective of sleeping beauty, who just woke up, the statement “today is Monday” is either true or false (she just doesn’t know which one). Yet you claim she can’t meaningfully assign it a probability. This feels wrong, and yet, if I try to claim that the probability is, say, 2⁄3, then you will ask me “in what sample space?” and I don’t know the answer.
Where does the feeling of wrongness come from? Were you under the impression that probability theory promised to always assign some measure to any statement in natural language? It just so happens that most of the time we can construct an appropriate probability space. But the actual rule is about whether or not we can construct a probability space, not whether or not something is a statement in natural language.
Is it really so surprising that a person who is experiencing amnesia and the repetition of the same experience, while being fully aware of the procedure, can’t meaningfully assign credence to “this is the first time I have this experience”? Don’t you think there has to be some kind of problem with the Beauty’s knowledge state? The situation where, due to memory erasure, the Beauty loses the ability to coherently reason about some statements makes much more sense than the alternative proposed by thirdism—according to which the Beauty becomes more confident in the state of the coin than she would’ve been if she didn’t have her memory erased.
Another intuition pump is that “today is Monday” is not actually True xor False from the perspective of the Beauty. From her perspective it’s True xor (True and False). This is because on Tails, the Beauty isn’t reasoning just for some one awakening—she is reasoning for both of them at the same time. When she awakens the first time the statement “today is Monday” is True, and when she awakens the second time the same statement is False. So the statement “today is Monday” doesn’t have a stable truth value throughout the whole iteration of the probability experiment. Suppose that the Beauty really does not want to make false statements. Deciding to say out loud “Today is Monday” leads to making a false statement in 100% of the iterations of the experiment in which the coin is Tails.
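To illustrate this last point with a toy simulation (my own code, not from the posts):

```python
import random

# If the Beauty commits to asserting "today is Monday" at every awakening,
# then every Tails iteration contains a false assertion (made on Tuesday).
rng = random.Random(7)
n = 100_000
tails_iters = iters_with_false_statement = 0
for _ in range(n):
    tails = rng.random() < 0.5
    days = ["Mon", "Tue"] if tails else ["Mon"]
    tails_iters += tails
    if any(day != "Mon" for day in days):    # she said it on a Tuesday awakening
        iters_with_false_statement += 1

print(iters_with_false_statement / tails_iters)   # 1.0
```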
P(today is Monday | heads) = 100% is fine. (Or is that tails? I keep forgetting.) P(today is Monday | tails) = 50% is fine too. (Or maybe it’s not? Maybe this is where I’m going wrong? Needs a bit of work but I suspect I could formalize that one if I had to.) But if those are both fine, we should be able to combine them, like so: heads and tails are mutually exclusive and one of them must happen, so: P(today is Monday) = P(heads) • P(today is Monday | heads) + P(tails) • P(today is Monday | tails) = 0.5 + 0.25 = 0.75 Okay, I was expecting to get 2⁄3 here. Odd. More to the point, this felt like cheating and I can’t put my finger on why. maybe need to think more later
Here you are describing Lewis’s model, which is appropriate for the Single Awakening Problem. There the Beauty is awakened on Monday if the coin is Heads, and if the coin is Tails, she is awakened either on Monday or on Tuesday (not both). It’s easy to see that 75% of awakenings in such an experiment indeed happen on Monday.
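This is easy to check numerically; here is an illustrative sketch of the Single Awakening setup:

```python
import random

# Single Awakening problem: Heads -> wake on Monday;
# Tails -> wake exactly once, on Monday or Tuesday chosen at random.
rng = random.Random(4)
n = 100_000
monday_awakenings = 0
for _ in range(n):
    if rng.random() < 0.5:                   # Heads
        day = "Mon"
    else:                                    # Tails
        day = rng.choice(["Mon", "Tue"])
    monday_awakenings += (day == "Mon")

print(monday_awakenings / n)   # ≈ 0.75 = P(heads)*1 + P(tails)*0.5
```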
It’s very good that you notice this feeling of cheating. This is a very important virtue. This is what helped me construct the correct model and solve the problem in the first place—I couldn’t accept any other model; they all were somewhat off.
I think you feel this way because you’ve started solving the problem from the wrong end: arguing with the math instead of accepting it. You noticed that you can’t define “Today is Monday” normally, so you just assumed as an axiom that you can.
But as soon as you assume that “Today is Monday” is a coherent event with a stable truth value throughout the experiment, you inevitably start talking about a different problem, where it’s indeed the case: a problem where there is only one awakening in any iteration of the probability experiment, and so you can formally construct a sample space where “Today is Monday” is an elementary, mutually exclusive outcome. There is no way around it. Either you model the problem as it is, and then “Today is Monday” is not a coherent event, or you assume that it is coherent, and then you are modelling some other problem.
Ah, so I’ve reinvented the Lewis model. And I suppose that means I’ve inherited its problem where being told that today is Monday makes me think the coin is most likely heads. Oops. And I was just about to claim that there are no contradictions. Sigh.
Okay, I’m starting to understand your claim. To assign a number to P(today is Monday) we basically have two choices. We could just Make Stuff Up and say that it’s 53% or whatever. Or we could at least attempt to do Actual Math. And if our attempt at actual math is coherent enough, then there’s an implicit probability model lurking there, which we can then try to reverse engineer, similar to how you found the Lewis model lurking just beneath the surface of my attempt at math. And once the model is in hand, we can start deriving consequences from it, and lo and behold, before long we have a contradiction, like the Lewis model claiming we can predict the result of a coin flip that hasn’t even happened yet just because we know today is Monday.
And I see now why I personally find the Lewis model so tempting… I was trying to find “small” perturbations of the experiment where “today is Monday” clearly has a well-defined probability. But I kept trying to use Rare Events to do it, and these change the problem even if the Rare Event is not Observed. (Like, “supposing that my house gets hit by a tornado tomorrow, what is the probability that today is Monday” is fine. Come to think of it, that doesn’t follow the Lewis model. Whatever, it’s still fine.)
As for why I find this uncomfortable: I knew that not any string of English words gets a probability, but I was naïve enough to think that all statements that are either true or false get one. And in particular I was hoping that this sequence of posts, which kept saying “don’t worry about anthropics, just be careful with the basics and you’ll get the right answer”, would show how to answer all possible variations of these “sleep study” questions… instead it turns out that it answers half the questions (the half that ask about the coin) while the other half is shown to be hopeless… and the reason why it’s hopeless really does seem to have an anthropics flavor to it.
I knew that not any string of English words gets a probability, but I was naïve enough to think that all statements that are either true or false get one.
Well, I think this one is actually correct. But, as I said in the previous comment, the statement “Today is Monday” doesn’t actually have a coherent truth value throughout the probability experiment. It’s not either True or False. It’s either True or True and False at the same time!
I was hoping that this sequence of posts, which kept saying “don’t worry about anthropics, just be careful with the basics and you’ll get the right answer”, would show how to answer all possible variations of these “sleep study” questions… instead it turns out that it answers half the questions (the half that ask about the coin) while the other half is shown to be hopeless… and the reason why it’s hopeless really does seem to have an anthropics flavor to it.
We can answer every coherently formulated question. Everything that is formally defined has an answer. Being careful with the basics allows us to understand which questions are coherent and which are not. This is the same principle as with every probability theory problem.
Consider Sleeping-Beauty experiment without memory loss. There, the event Monday xor Tuesday also can’t be said to always happen. And likewise “Today is Monday” also doesn’t have a stable truth value throughout the whole experiment.
Once again, we can’t express Beauty’s uncertainty between the two days using probability theory. We are just not paying attention to it because by the conditions of the experiment, the Beauty is never in such state of uncertainty. If she remembers a previous awakening then it’s Tuesday, if she doesn’t—then it’s Monday.
All the pieces of the issue are already present. The addition of memory loss just makes it obvious that there is a problem with our intuition.
Re: no coherent “stable” truth value: indeed. But still… if she wonders out loud “what day is it?” at the very moment she says that, it has an answer. An experimenter who overhears her knows the answer. It seems to me that the way you “resolve” this tension is that the two of them are technically asking different questions, even though they are using the same words. But still… how surprised should she be if she were to learn that today is Monday? It seems that taking your stance to its conclusion, the answer would be “zero surprise: she knew for sure she would wake up on Monday so no need to be surprised it happened”
And even if she were to learn that the coin landed tails, so she knows that this is just one of a total of two awakenings, she should have zero surprise upon learning the day of the week, since she now knows both awakenings must happen. Which seems to violate conservation of expected evidence, except you already said that there are no coherent probabilities here for that particular question, so that’s fine too.
This makes sense, but I’m not used to it. For instance, I’m used to these questions having the same answer:
P(today is Monday)?
P(today is Monday | the sleep lab gets hit by a tornado)
Yet here, the second question is fine (assuming tornadoes are rare enough that we can ignore the chance of two on consecutive days) while the first makes no sense because we can’t even define “today”
It makes sense but it’s very disorienting, like incompleteness theorem level of disorientation or even more
indeed. But still… if she wonders out loud “what day is it?” at the very moment she says that, it has an answer.
There is no “but”. As long as the Beauty is unable to distinguish between Monday and Tuesday awakenings, as long as the decision process which leads her to say the phrase “what day is it” works the same way, from her perspective there is no one “very moment she says that”. On Tails, there are two different moments when she says this, and the answer is different for them. So there is no answer for her.
It seems to me that the way you “resolve” this tension is that the two of them are technically asking different questions, even though they are using the same words
Yes, you are correct. From the position of the experimenter, who knows which day it is, or who is hired to work only on one random day, this is a coherent question with an actual answer. The words we use are the same, but the mathematical formalism is different.
For an experimenter who knows that it’s Monday the probability that today is Monday is simply:
P(Monday|Monday) = 1
For an experimenter who is hired to work only on one random day it is:
P(Monday|Monday xor Tuesday) = 1⁄2
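For illustration, the second experimenter’s situation can be simulated directly (toy code):

```python
import random

# An experimenter hired to work on exactly one of the two days, chosen at random:
# for them, "today" really is Monday xor Tuesday, each with probability 1/2.
rng = random.Random(6)
n = 100_000
monday = sum(rng.choice(["Mon", "Tue"]) == "Mon" for _ in range(n))
print(monday / n)   # ≈ 0.5 = P(Monday | Monday xor Tuesday)
```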
But still… how surprised should she be if she were to learn that today is Monday? It seems that taking your stance to its conclusion, the answer would be “zero surprise: she knew for sure she would wake up on Monday so no need to be surprised it happened”
And even if she were to learn that the coin landed tails, so she knows that this is just one of a total of two awakenings, she should have zero surprise upon learning the day of the week, since she now knows both awakenings must happen.
Completely correct. Beauty knew that she would be awakened on Monday either way and so she is not surprised. This is a standard thing with non-mutually-exclusive events. Consider this:
A coin is tossed and you are put to sleep. On Heads there will be a red ball in your room. On Tails there will be a red and a blue ball in your room. How surprised should you be to find a red ball in your room?
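In code, the toy scenario looks like this (illustrative):

```python
import random

# Heads -> a red ball in the room; Tails -> a red and a blue ball.
rng = random.Random(5)
n = 10_000
red_seen = 0
for _ in range(n):
    heads = rng.random() < 0.5
    balls = {"red"} if heads else {"red", "blue"}
    red_seen += ("red" in balls)

print(red_seen / n)   # 1.0: a red ball is guaranteed either way, so zero surprise
```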
Which seems to violate conservation of expected evidence, except you already said that the there’s no coherent probabilities here for that particular question, so that’s fine too.
The appearance of a violation of conservation of expected evidence comes from the belief that awakening on Monday and on Tuesday are mutually exclusive, while they are, in fact, sequential.
This makes sense, but I’m not used to it. For instance, I’m used to these questions having the same answer:
P(today is Monday)?
P(today is Monday | the sleep lab gets hit by a tornado)
Yet here, the second question is fine (assuming tornadoes are rare enough that we can ignore the chance of two on consecutive days) while the first makes no sense because we can’t even define “today”
It makes sense but it’s very disorienting, like incompleteness theorem level of disorientation or even more
I completely understand. It is counterintuitive because evolution didn’t prepare us to deal with situations where the same experience is repeated while our memory is erased. As I write in the post:
If I forget what the current day of the week is in my regular life, well, it’s only natural to start from a 1⁄7 prior per day and work from there. I can do it because the causal process that leads to me forgetting such information can be roughly modeled as a low probability occurrence which can happen to me on any day.
It wouldn’t be the case if I was guaranteed to also forget the current day of the week on the next 6 days as well, after I forgot it on the first one. This would be a different causal process, with different properties—causation between forgettings—and it has to be modeled differently. But we do not actually encounter such situations in everyday life, and so our intuition is caught completely flat-footed by them.
The whole paradox arises from this issue with our intuition, and just like with the incompleteness theorem (thanks for the flattering comparison, btw), what we need to do now is to re-calibrate our intuitions, making them more accustomed to the truth preserved by the math, instead of trying to fight it.
Consider that in the real world Tuesday always happens after Monday. Do you agree or disagree: It is incorrect to model a real world agent’s knowledge about today being Monday with probability?
I think I talk about something like what you point to here:
If I forget what the current day of the week is in my regular life, well, it’s only natural to start from a 1⁄7 prior per day and work from there. I can do it because the causal process that leads to me forgetting such information can be roughly modeled as a low probability occurrence which can happen to me on any day.
It wouldn’t be the case if I was guaranteed to also forget the current day of the week on the next 6 days as well, after I forgot it on the first one. This would be a different causal process, with different properties—causation between forgettings—and it has to be modeled differently. But we do not actually encounter such situations in everyday life, and so our intuition is caught completely flat-footed by them.
I do not decide anything axiomatically. But I notice that the existing axioms of probability theory do not allow a predictable update in favor of Tails in 100% of iterations of the experiment, nor do they allow a fair coin toss to have an unconditional probability for Heads equal to 1⁄3.
And then I notice that the justification that people came up with for such situations, about a “new type of evidence” that a person receives, is based on nothing but some philosopher wanting it to be this way. He didn’t come up with any new math, didn’t prove any theorems. He simply didn’t immediately notice any contradictions in his reasoning. And when an example was brought up, he simply doubled down. Suffice to say, that’s absolutely not how anything is supposed to work.
If everything actually worked, the situation would be quite different. However, my previous post explores how every attempt to model the Sleeping Beauty problem based on the framework of centred possible worlds fails one way or another.
You can also clearly see it in the Statistical Analysis section of this post. I don’t see how this argument can be refuted, frankly. If you treat Tails&Monday and Tails&Tuesday as different elementary outcomes, then you can’t possibly keep their correct order, and it’s in the definition of the experiment that on Tails the Monday awakening is always followed by the Tuesday awakening and that the Beauty is fully aware of it. Events that happen in sequence can’t be mutually exclusive, and vice versa. I even formally prove it in the comments here.
And so, we can just accept that Tails&Monday and Tails&Tuesday are the same outcome of the probability space, and suddenly everything adds up to normality. No paradox, no issues with statistical analysis, no suboptimal bets, no unjustified updates and no ungrounded philosophical handwaving. Seems like the best deal to me!
I’ve read the relevant part of your previous post and I have an idea that might help.
Consider the following problem, “Forgetful Brandon”: Adam flips a coin and does NOT show it to Brandon, but shouts YAY! with 50% probability if the coin is HEADS (he does not shout if the coin is TAILS). Brandon knows Adam’s behaviour. However, Brandon is forgetful, and if Adam doesn’t shout, he doesn’t do any Bayesian calculation and goes off to have an ice cream instead.
Adam doesn’t shout. What should Brandon’s credence of HEADS be after this?
I hope you agree that Brandon not actually doing the Bayesian calculation is irrelevant to the question. We should still do the Bayesian calculation if we are curious about the correct probability. Anytime Brandon updates, he predictably updates in the direction of HEADS. But again: do we care about this? Should we point out a failure of conservation of expected evidence? Again, I say NO: what evidence is actually updated on in the thought experiment isn’t relevant to the correct theoretical Bayesian calculation. We could also imagine a thought experiment with a person who does Bayesian calculations wrong every time, but that would still be irrelevant to the correct credence. If you agree, I don’t see why you object to Sleeping Beauty not doing the calculation in case she is not awakened. (Which is the only objection you wrote under the “Frequency Argument” model.)
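For what it’s worth, the Bayesian calculation the comment says we should still do is one line of arithmetic. A minimal sketch, using exactly the probabilities stated in the problem: silence is evidence for TAILS, and the correct credence in HEADS after a non-shout comes out to 1⁄3.

```python
# Forgetful Brandon: P(shout | HEADS) = 0.5, P(shout | TAILS) = 0, prior P(HEADS) = 0.5.
p_heads = 0.5
p_shout_given_heads = 0.5
p_shout_given_tails = 0.0

# Bayes' theorem on the observation "Adam doesn't shout".
p_silence = ((1 - p_shout_given_heads) * p_heads
             + (1 - p_shout_given_tails) * (1 - p_heads))
p_heads_given_silence = (1 - p_shout_given_heads) * p_heads / p_silence
print(p_heads_given_silence)  # 1/3: silence shifts credence towards TAILS
```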
EDIT: I see later you refer back to another post supposedly addressing a related argument; as that would be the fifth step of my recursion, I will postpone inspecting it until tomorrow. But obviously, you can’t give the same response to Forgetful Brandon, as in this case Brandon does observe the non-shout, he just doesn’t update on it. You also declare P(Awake|Heads) not to be 1⁄2 and give “Beauty is awakened on Heads no matter what” as the reason. You often make this mistake in the text, but here it’s too important not to mention: “Awake” does not mean “Beauty is awakened”, it means “Beauty is awake” (don’t forget the centeredness!), and, of course, Beauty is not awake if it is Tuesday and the coin is Heads.
EDIT2: I’m also curious what you would say about the problem with the following modification (“Uninformed Sleeping Beauty”): initially the full rules of the experiment are NOT explained to Beauty, only that she will have to sleep in the lab, that she will get a drug on Monday night which will make her forget her day, and that she may or may not be awakened on Monday/Tuesday.
However, when she awakens, the full rules are explained to her, i.e. that she will not get awakened on Tuesday if the coin is HEADS.
Note that in this case you can’t object that the prior distribution gives non-zero probability to Tuesday&Heads, as Beauty unquestionably has 1⁄4 credence in that before they explain the full rules to her.
EDIT3: I missed that Beauty might think it’s Wednesday too in the previous case before being told the full rules, so let’s consider instead the following (“Misinformed Sleeping Beauty”): initially the full rules of the experiment are NOT explained to Beauty, only that she will have to sleep in the lab and that she will get a drug on Monday night which will make her forget her day. Furthermore, she is told the falsehood that she will be awakened on Monday AND Tuesday whatever happens!
However, when she awakens, the full rules are explained to her, i.e. that she won’t get/wouldn’t have gotten awakened on Tuesday if the coin is HEADS.
Note that in this case you can’t object that the prior distribution gives non-zero probability to Tuesday&Heads, as Beauty unquestionably has 1⁄4 credence in that before they explain the actual rules to her.
I’ll start by addressing the actual crux of our disagreement.
As I’ve written in this post, you can’t just say the magic word “centredness” and think that you’ve solved the problem. If you want a model that can have an event that changes its truth predicate with the passage of time during the same iteration of the probability experiment, you need to formally construct such a model, rewriting all of probability theory from scratch, because our current probability theory doesn’t allow that.
In probability theory, one outcome of a sample space is realized per iteration of the experiment. And so, for this iteration of the experiment, every event which includes this outcome is considered True. All the “centred” models, therefore, behave as if Sleeping Beauty consists of two outcomes of the probability experiment. As if Monday and Tuesday happen at random, and as if, to determine whether the Beauty has another awakening, the coin is tossed anew. And because of it they contradict the conditions of the experiment, according to which the Tails&Tuesday awakening always happens after Tails&Monday, which is shown in the Statistical Analysis section. It’s a model for a random awakening, not for the current awakening, because the current awakening is not random.
So no, I do not make this mistake in the text. This is the correct way to talk about Sleeping Beauty. The event “the Beauty is awakened in this experiment” is properly defined. The event “the Beauty is awake on this particular day” is not, unless you find some new clever way to define it—feel free to try.
I must say, this problem is very unhelpful to this discussion. But sure, let’s analyze it regardless.
I suppose? Such questions are usually about ideal rational agents, so yes, it shouldn’t matter what a specific non-ideal agent does. But then why even add this extra complication to the question if it’s irrelevant?
Well, that’s his problem, honestly. I thought we agreed that what he does is irrelevant to the question.
Also, his behavior here is not as bad as what you want the Beauty to do—at least Brandon doesn’t update in favor of Heads on literally every iteration of the experiment.
I mean, if we want to explain Brandon’s failure at rationality—we should. The reason why Brandon’s behaviour is not rational is exactly that: he fails at conservation of expected evidence. There are two possible signals that he may receive: “Yay”, and “no yay and getting ice cream”. These signals are differently correlated with the outcome of the coin toss. If he behaved rationally, he would update on both of them, in opposite directions, therefore following the conservation of expected evidence.
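The conservation-of-expected-evidence claim above can be checked numerically in the Forgetful Brandon setup (my own sketch): weighting the two possible posteriors by the probabilities of their signals recovers the prior exactly.

```python
p_heads = 0.5
p_shout = 0.5 * p_heads              # "Yay" happens only on HEADS, half the time
posterior_after_shout = 1.0          # "Yay" proves HEADS outright
posterior_after_silence = (0.5 * p_heads) / (1 - p_shout)  # Bayes: 0.25 / 0.75 = 1/3

# Expected posterior over the two signals equals the prior: no free evidence.
expected_posterior = (p_shout * posterior_after_shout
                      + (1 - p_shout) * posterior_after_silence)
print(expected_posterior)  # 0.5, equal to the prior
```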
In principle, it’s possible to construct a better example, where Brandon doesn’t update not because of his personal flaws in rationality, but due to the specifics of the experiment. For example, if he couldn’t be sure when exactly Adam is supposed to shout. Say, Adam intended to shout one minute after he saw the result of the coin toss, but Brandon doesn’t know it; according to his information, Adam shouts “Yay” within an interval of three minutes since the coin was tossed. And so he is still waiting, not yet updating, after just one minute.
But then it won’t be irrelevant to the question, as you seem to want it to be for some reason.
I do not object to the fact that the Beauty doesn’t do the calculation in case she is not awakened—she literally can’t do it due to the setting of the experiment.
I object to the Beauty predictably updating in favor of Tails when she awakens in every iteration of the experiment, which is a blatant contradiction of conservation of expected evidence. The updating model, as a whole, describes the Observer Sleeping Beauty problem, where the observer can legitimately not see that the Beauty is awake, and therefore the update on awakening is lawful.
See also Towards the Correct Model, where I point to the core mathematical flaw of the Frequency Argument—ignoring the fact that it works only when P(Heads|Awake) = 1⁄2, which is wrong for Sleeping Beauty. And, of course, the Updating Model fails the Statistical Analysis, as does every other “centred” model.
When the Beauty doesn’t know the actual setting of the experiment, she has a different model, fitting her uninformed state of knowledge. When she is told what is actually going on, she discards it and starts using the correct model from this post.
Metapoint: you write a lot of things in your comments with which I usually disagree; however, I think faster replies are more useful in this kind of conversation than complete replies, so at first I’m only going to reply to the points I consider the most important at the time. If you disagree and believe writing complete replies is more useful, do note it (however, my experience for that case is that, after a while, instead of writing a comment containing a reply to the list of points the other party brought up, I simply drop out of the conversation, and I can’t guarantee that this won’t happen here).
My whole previous comment was meant to address the part of your comment I quoted. Here it is again:
With my previous comment I meant to show you that if you don’t start out with “centered worlds don’t work”, you CAN make it work (very important: here, I haven’t yet said that this is how it works or how it ought to work, merely that it CAN work without some axiom of probability getting hurt).
Still, I struggle to see what your objection is, apart from your intuition that “NO! It can’t work!”
Again, I understand that in the theory you built up this is how it would work; that’s not what I want to argue (yet). I want to argue that it CAN work in another way with credences/centeredness/bayesianism. To counterargue, you would have to show that NO, it can’t work that way. You would have to show that, because of some axiom of probability or the like, we can’t model Beauty’s credences with probability the moment they learn the relevant info after waking up.
Discard the concept of experiment, as it might confuse you. If you want to understand how centered worlds/credences/bayesian epistemology work (to then see that they DO work), “experiment” isn’t a good word, because it might lock you into a third-person view, where, of course, centeredness does not work (after you have understood that bayesianism CAN work, we can reintroduce the word with some nuance).
Your statistical analysis, of course, also assumes the third-person/non-centered view, so of course it won’t help you. But again, we should first talk about whether centeredness CAN work or not. Assuming that it can’t and deriving stuff from that does not prove that it can’t work.
The clever way isn’t that clever to be honest. It’s literally just: don’t assume that it does not work and try it.
I didn’t start by believing that “centred worlds don’t work”. I suspect you got this impression mostly because you were reading the posts in the wrong order. I started by trying the existing models, noticed that they behave weirdly if we assume that they are describing Sleeping Beauty, and then noticed that they are actually talking about different problems—for which their behavior is completely normal.
And then, while trying to understand what is going on, I stumbled upon the notion of centred possible worlds and their complete lack of mathematical justification, and it opened my eyes. And then I was immediately able to construct the correct model, which completely resolves the paradox, adds up to normality and has no issues whatsoever.
But in hindsight, if I had started from the assumption that centred possible worlds do not work, that would have been the smart thing to do, and it would have saved me a lot of time.
Well, you didn’t. All this time you’ve just been insisting on a privileged treatment for them: “can work until proven otherwise”. Now, that’s not how math works. If you come up with some new concept, be so kind as to prove that it is a coherent mathematical entity and establish its properties. I’m more than willing to listen to such attempts. The problem is—there are none. People just seem to think that saying “first person perspective” allows them to build a sample space from non-mutually-exclusive outcomes.
It’s like you didn’t even read my posts or my comments.
By the definition of a sample space, it can be constructed only from elementary outcomes, which have to be mutually exclusive. Tails&Monday and Tails&Tuesday are not mutually exclusive—they happen to the same person in the same iteration of the probability experiment, during the same outcome of the coin toss. The “centredness” framework attempts to treat them as elementary outcomes regardless. Therefore, it contradicts the definition of a sample space.
This is what the statistical analysis clearly demonstrates. If a mathematical probabilistic model fits some real-world process, then the outcomes it produces have to have the same statistical properties as the outcomes of the real-world process. All “centred” models produce outcomes with different properties compared to what actually running the Sleeping Beauty experiment produces. Therefore, they do not correctly fit the Sleeping Beauty experiment.
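A minimal sketch of the statistical test being described (my own implementation with made-up function names, so treat it as an illustration, not the post’s code): generate awakenings first by actually running the experiment, then by sampling the three “centred” outcomes independently, and compare one ordering property of the two streams.

```python
import random

random.seed(1)
OUTCOMES = [("Heads", "Mon"), ("Tails", "Mon"), ("Tails", "Tue")]

def run_experiment():
    """One actual run: Heads gives one awakening, Tails gives two, in order."""
    if random.random() < 0.5:
        return [("Heads", "Mon")]
    return [("Tails", "Mon"), ("Tails", "Tue")]

def centred_sample():
    """A 'centred' model that treats the three awakenings as mutually
    exclusive elementary outcomes and samples one of them at random."""
    return [random.choice(OUTCOMES)]

real = [a for _ in range(20_000) for a in run_experiment()]
fake = [a for _ in range(20_000) for a in centred_sample()]

def followed_by_tue(seq):
    """How often is a Tails&Monday awakening immediately followed by Tails&Tuesday?"""
    positions = [i for i, a in enumerate(seq[:-1]) if a == ("Tails", "Mon")]
    return sum(seq[i + 1] == ("Tails", "Tue") for i in positions) / len(positions)

print(followed_by_tue(real))  # 1.0: the order is guaranteed by the setup
print(followed_by_tue(fake))  # ≈ 0.33: independent sampling destroys the order
```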
Don’t mix bayesianism and credences up with this “centredness” nonsense. Bayesianism is not in trouble—I’ve been appealing to Bayes’ theorem a lot throughout my posts, and it’s been working just fine. Likewise, credence in an event is simply probability conditional on all the evidence—I’m exploring all manner of conditional probabilities in my model. Bayesianism and credences are not some “other way”; they are the exact same way. It’s probability theory. “Centredness” is not.
I don’t understand what you mean by “third-person view” here, and I suspect neither do you.
The statistical test is very much about Beauty’s perspective—only awakenings that she experiences are noted down, not all the states of the experiment. Heads&Tuesday isn’t added to the list, which would be the case if we were talking about the third-person perspective.
On the other hand, when you were talking about justifying an update on awakening, you were treating the situation from the observer’s perspective—someone who has a non-zero probability for the Heads&Tuesday outcome and could realistically not observe the Beauty being awakened, and who, therefore, updates when he sees her indeed awake.
“Centred” models do not try to talk about Beauty’s perspective. They treat the different awakened states of the Beauty as if they are different people, existing independently of each other, therefore contradicting the conditions of the setting, according to which all the awakenings happen to the same person. Unless, of course, there is some justification why treating Beauty’s awakened states this way is acceptable. The only thing resembling such a justification that I’ve encountered is vaguely pointing towards the amnesia that the Beauty is experiencing, which I deal with in the section Effects of Amnesia. If there is something else—I’m open to considering it, but the initial burden of proof is on the “centredness” enthusiasts.
This whole conversation isn’t about math. It is about philosophy. Math is proving theorems in various formal systems. If you are a layman, I imagine you might find it confusing that you can encounter mathematicians who seem to have conversations about math in common English. I can assure you that every mathematician in that conversation is able to translate their comments into the simple language of the given formal system they are working in, they are just simply so much of an expert that they can transmit and receive the given information more efficiently by speaking on a higher level of abstraction.
It is not possible to translate the conversation that we’re having to a simple formal system as it’s about how we should/can model some aspect of reality (which is famously dirty and complicated) with some specific mathematical object.
To be more concrete: I want to show you that we can model (and later that we should indeed) a person’s beliefs at some given point in time with probability spaces.
This is inherently a philosophical, not a mathematical, problem. I don’t see how you don’t understand this concept, and I would appreciate it if you could elaborate on this point as much as possible.
You keep insisting that
If we are being maximally precise, then NO: the math of probability spaces prescribes a few formal statements which (this is very important), in some cases, can be used to model experiments and events happening or not happening in reality, but the mathematical objects themselves have no concept of ‘experiment’ or ‘time’ or anything like those. I won’t copy it here, but you can look these up on the net yourself if you want: here is one such source. Don’t be confused by the wiki sometimes using English words; rest assured, any mathematician could translate it into any sufficiently expressive, simple formal system using variable names like a1, x3564789, etc. (If you really think it would help you and you don’t believe what I’m saying otherwise, I can translate it into first-order logic for you.)
Now that we hopefully cleared up that we are not arguing about math, it’s time for more interesting parts:
Can a probability space model a person’s beliefs at a certain point in time?
Yes, it can!
First, I would like to show you that your solution does NOT model a person’s belief at a certain time:
1. People have certain credences in the statement “Today is Monday.”
Do note that the above statement is fully about reality and not about math in any way, and so it leans on our knowledge about humans and their minds. You can test it in various ways: e.g. asking people “hey, sorry to bother you, is today Monday?”, or setting up an ice cream stand which is only open on Monday in one direction from the lab and another in the opposite direction which is only open on Tuesday, making this fact known to the subjects of an experiment who are then asked to bring you ice cream, and observing where they go, etc.
2. In particular, Beauty, when awoken, has a certain credence in the statement “Today is Monday.”
This follows from 1.
3. Your model does not model Beauty’s credences in the statement “Today is Monday”.
You can see this in various ways, and your model is pretty weird, but because I believe you will agree with this, I won’t elaborate here unless asked later.
4. Therefore, your solution does NOT model a person’s belief at a certain time.
This follows from 2 and 3.
Before I go further, I think I will ask you whether everything is clear and whether you agree with everything I wrote so far.
The tragedy of the whole situation is that people keep thinking that.
Everything is “about philosophy” until you find a better way to formalize it. Here we have a better way to formalize the issue, which you keep ignoring. Let me spell it out for you once more:
If a mathematical probabilistic model fits some real-world process, then the outcomes it produces have to have the same statistical properties as the outcomes of the real-world process.
If we agree on this philosophical statement, then we reduced the disagreement to a mathematical question, which I’ve already resolved in the post. If you disagree, then bring up some kind of philosophical argument which we will be able to explore.
I’m not. And frankly, it baffles me that you think you need to explain that it’s possible to talk about math using natural language to a person who has been doing exactly that for multiple posts in a row.
https://en.wikipedia.org/wiki/Experiment_(probability_theory)
The more I post about anthropics, the clearer it becomes that I should’ve started with posting about probability theory 101. My naive hopes that the average LessWrong reader is well familiar with the basics and just confused about more complicated cases are crushed beyond salvation.
This question is vague in a manner similar to what I’ve seen in Lewis’s paper. Let’s specify it, so that we both understand what we are talking about.
Did you mean to ask 1 or 2:
1. Can a probability space at all model some person’s belief in some circumstance at some specific point in time?
2. Can a probability space always model any person’s belief in any circumstances at any unspecified point in time?
The way I understand it, we agree on 1 but disagree on 2. There are definitely situations where you can correctly model uncertainty about time via probability theory. As a matter of fact, it’s most of the cases. You won’t be able to resolve our disagreement by pointing to such situations—we agree on them.
But you seem to have generalized this to mean that probability theory always has to be able to do it. And I disagree. A probability space can model only aspects of reality that can be expressed in terms of it. If you want to express uncertainty between “today is Monday” and “today is Tuesday”, you need a probability space for which Monday and Tuesday are mutually exclusive outcomes. And it’s possible to design a specific setting—like the one in Sleeping Beauty—where they are not: where on the same trial both Monday and Tuesday are realized and the participant is well aware of it.
No, she does not. And it’s easy to see if you actually try to formally specify what is meant here by “today” and what is meant by “today” in regular scenarios. Consider me calling your bluff about being ready to translate to first-order logic at any moment.
Let’s make it three different situations:
1. The No-Coin-Toss problem.
2. Two awakenings with memory loss, regardless of the outcome of the coin.
3. Regular Sleeping Beauty.
Your goal is to formally define “today” using first-order logic, so that a person participating in such experiments could coherently talk about the event “today the coin is Heads”.
My claim is: it’s very easy to do so in 1, it’s harder but still doable in 2, and it’s not possible to do so in 3 without contradicting the math of probability theory.
This is not simply a question about probability/credence. It also involves utilities, and it’s implicitly assumed that the participant prefers to walk a shorter distance rather than a longer one. Essentially, you propose a betting scheme where:
P(Monday)U(Monday) = P(Tuesday)U(Tuesday)
According to my model, P(Monday) = 1 and P(Tuesday) = 1⁄2, so:
2U(Monday) = U(Tuesday), and therefore the odds are 2:1. As you can see, the model deals with such situations without any problem.
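As a sanity check on the 2:1 odds, here is a minimal sketch of mine (not from the post): the Beauty guesses “today is Monday” on every awakening, winning 1 unit on each Monday and losing 2 units on each Tuesday. Per iteration of the experiment the bet breaks even, which is exactly what indifference odds mean.

```python
import random

random.seed(2)
n = 200_000
total = 0.0
for _ in range(n):
    total += 1                 # a Monday awakening occurs in every iteration: win 1
    if random.random() < 0.5:  # Tails: a Tuesday awakening also occurs
        total -= 2             # the "Monday" guess is wrong on Tuesday: lose 2

print(round(total / n, 2))     # ≈ 0.0: break-even at 2:1 odds
```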
I said that I can translate the math of probability spaces to first-order logic, and I explicitly said that our conversation can NOT be translated to first-order logic, as proof that it is not about math but about philosophy. Please reread that part of my previous comment.
That is not what I explained and I suggest you reread that part. Here it is again:
The structure of my argument here is the following:
1. Math is about concepts in formal systems, therefore an argument about math can be expressed in some simple, formal language.
2. We are having an argument which can’t be translated to a formal system.
3. Therefore, we are not arguing about math.
Ah yes, clearly the problem is that I don’t understand basic probability theory. (I’m a bit sad that this conversation happened to take place with my pseudonymous account.) In my previous comment, I explicitly prepared to preempt your confusion about seeing the English word ‘experiment’ with my paragraph (the part of it that you, for some reason, did not quote), specifically linking a wiki which only contains the mathematical part of ‘probability’ and not the philosophical interpretations that are commonly paired with it, but alas, it didn’t matter.
If you are not ready to accept that people have various levels of belief in the statement “Today is Monday” at all times, then I don’t think this conversation can go anywhere, to be honest. This is an extremely basic fact about reality.
EDIT: gears, in the first part you selected I’m answering an accusation of bluffing in a matter-of-fact way; how is that too combative? Also, feel free to chime in at any point, it is an open forum after all.
Meta: the notion of writing a probability 101 post wasn’t addressed to you specifically. It was a release of my accumulated frustration from not-particularly-productive arguments with several different people, which again and again led to the realization that the crux of disagreement lies in the most basic things; you are only one of those people.
You are confusing to talk to, with your manner of raising seemingly unrelated points and then immediately dropping them. And yet you didn’t deserve the full emotional blow that you apparently received, and I’m sorry about it.
Writing a probability 101 post seems to me a constructive solution to such situations, anyway. It would provide an opportunity to resolve these kinds of disagreements as soon as they arise, instead of having to backtrack to them from a very specific topic. I may still add it to my to-do list.
I figured that either you don’t know what a “probability experiment” is or you are being confusing on purpose. I prefer to err in the direction of good faith, so the former was my initial hypothesis.
Now, considering that you admit you were perfectly aware of what I was talking about, to the point where you specifically tried to cherry-pick around it, the latter has become more likely. Please don’t do it anymore. Communication is hard as it is. If you know what a well-established thing is but believe it’s wrong—just say so.
Nevertheless, from this exchange I believe I now understand that you think “probability experiment” isn’t a mathematical concept but a philosophical one. I could just accept this for the sake of the argument, and we would be in a situation where we have a philosophical consensus about an issue, to the point where it’s part of a standard probability theory course taught to students, and you are trying to argue against it, which would put quite some burden of proof on your shoulders.
But, as a matter of fact, I don’t see anything preventing us from formally defining “probability experiment”. We already have a probability space. Now we just need a variable going from 1 to infinity for the iteration of the probability experiment, and a function which takes the sample space and the value of this variable as input and returns the one outcome that is realized in this particular iteration.
Sorry, I misunderstood you.
Also, a reminder that you still haven’t addressed this:
Anyway, are you claiming that it’s impossible to formalize what “today” in “today the coin is Heads” means even in the No-Coin-Toss problem? Why are you so certain that people have to have credence in this statement, then? Would you then be proven wrong if I indeed formally specified what “today” means?
Because, as I said, it’s quite easy.
Today = Monday xor Tuesday
P(Today) = P(Monday xor Tuesday) = 1
P(Heads|Today) = P(Heads|Monday xor Tuesday) = P(Heads) = 1⁄3
Likewise we can talk about “Today is Monday”:
P(Monday|Today) = P(Monday|Monday xor Tuesday) = P(Monday) = 1⁄2
Now, do you see, why this method doesn’t work for Two Awakenings Either Way and Sleeping Beauty problems?
In reality, people may have all kinds of confused beliefs and ill-defined concepts in their heads. But the question of the Sleeping Beauty problem is about what an ideal rational agent is supposed to believe. When I say “Beauty does not have such credence”, I mean that an ideal rational agent ought not to. The probability of such an event is ill-defined.
As you may have noticed, I successfully explained the difference in real-life beliefs about optimal actions in the ice-cream-stand scenario without using such ill-defined probabilities.
I hope it’s okay if I chime in (or butt in). I’ve been vaguely trying to follow along with this series, albeit without trying too hard to think through whether I agree or disagree with the math. This is the first time that what you’ve written has caused me to go “what?!?”
First of all, that can’t possibly be right. Second of all, it goes against everything you’ve been saying for the entire series. Or maybe I’m misunderstanding what you meant. Let me try rephrasing.
(One meta note on this whole series that makes it hard for me to follow sometimes: you use abbreviations like “Monday” as shorthand for “a Monday awakening happens” and expect people to mentally keep track that this is definitely not shorthand for “today is Monday”… I can barely keep track of whether heads means one awakening or two… maybe you should have labeled the two sides of the coin ONE and TWO instead of heads and tails.)
Suppose someone who has never heard of the experiment happens to call sleeping beauty on her cell phone during the experiment and ask her “hey, my watch died and now I don’t know what day it is; could you tell me whether today is Monday or Tuesday?” (This is probably a breach of protocol and they should have confiscated her phone until the end, but let’s ignore that.).
Are you saying that she has no good way to reason mathematically about that question? Suppose they told her “I’ll pay you a hundred bucks if it turns out you’re right, and it costs you nothing to be wrong, please just give me your best guess”. Are you saying there’s no way for her to make a good guess? If you’re not saying that, then since probabilities are more basic than utilities, shouldn’t she also have a credence?
In fact, let’s try a somewhat ad-hoc and mostly unprincipled way to formalize this. Let’s say there’s a one percent chance per day that her friend forgets what day it is and decides to call her to ask. (One percent sounds like a lot but her friend is pretty weird) Then there’s a 2% chance of it happening if there are two awakenings, and one percent if there’s only one awakening. If there are two awakenings then Monday and Tuesday are equally likely; if there’s only one awakening then it’s definitely Monday. Thus, given that her friend is on the phone, today is more likely to be Monday than Tuesday.
Okay, maybe that’s cheating… I sneaked in a Rare Event. Suppose we make it more common? Suppose her friend forgets what day it is 10% of the time. The logic still goes through: given that her friend is calling, today is more likely to be Monday than Tuesday.
Okay, 10% is still too rare. Let’s try 100%. This seems a bit confusing now. From her friend’s perspective, Monday is just as good as Tuesday for coming down with amnesia. But from Sleeping Beauty’s perspective, GIVEN THAT the experiment is not over yet, today is more likely to be Monday than Tuesday. This is true even though she might be woken up both days.
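The intuition in this series of variations can be checked directly. A sketch of mine (the call probability is a parameter I made up): condition on a call arriving on a day Beauty is awake within the experiment, and count how often that day is Monday. The answer comes out to 2⁄3 and does not depend on how often the friend calls.

```python
import random

random.seed(3)
P_CALL = 0.10  # per-day chance the forgetful friend calls; try 0.01 or 1.0 too
monday_calls = tuesday_calls = 0

for _ in range(400_000):
    heads = random.random() < 0.5
    awake_days = ["Mon"] if heads else ["Mon", "Tue"]  # Heads: one awakening; Tails: two
    for day in awake_days:
        if random.random() < P_CALL:   # the friend happens to call on this day
            if day == "Mon":
                monday_calls += 1
            else:
                tuesday_calls += 1

# Monday is awake in every iteration, Tuesday only in half of them,
# so calls land on Monday twice as often: P(Monday | call) = 2/3.
print(round(monday_calls / (monday_calls + tuesday_calls), 2))  # ≈ 0.67
```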
Or is everything I just wrote nonsensical?
I understand that it all may be somewhat counterintuitive. I’ll try to answer whatever questions you have. If you think you have some way to formally define what “Today” means in Sleeping Beauty—feel free to try.
Seems very much in accordance with what I’ve been saying.
Throughout the series I keep repeating the point that all we need to solve anthropics is to follow probability theory where it leads and then there will be no paradoxes. This is exactly what I’m doing here. There is no formal way to define “Today is Monday” in Sleeping Beauty and so I simply accept this, as the math tells me to, and then the “paradox” immediately resolves.
Good question. First of all, as we are talking about betting, I recommend you read the next post, where I explore it in more detail, especially if you are not fluent in expected utility calculations.
Secondly, we can’t ignore the breach of the protocol. You see, if anything breaks the symmetry between awakenings, the experiment changes in a substantial manner. See Rare Event Sleeping Beauty, where the probability that the coin is Heads can actually be 1⁄3.
But we can make a similar situation without breaking the symmetry. Suppose that on every awakening a researcher comes to the room and offers the Beauty a bet on which day it currently is. At what odds should the Beauty accept the bet?
This is essentially the same betting scheme as the ice-cream stand, which I deal with at the end of the previous comment.
I tried to formalize the three cases you list in the previous comment. The first one was indeed easy. The second one looks “obvious” from symmetry considerations, but actually formalizing it seems harder than expected. I don’t know how to do it. I don’t yet see why the second should be possible while the third is impossible.
Exactly! I’m glad that you actually engaged with the problem.
The first step is to realize that here “today” can’t mean “Monday xor Tuesday”, because such an event never happens. On every iteration of the experiment both Monday and Tuesday are realized. So we can’t say that the participant knows that they are awakened on Monday xor Tuesday.
Can we say that the participant knows that they are awakened on Monday or Tuesday? Sure. As a matter of fact:
P(Monday or Tuesday) = 1
P(Heads|Monday or Tuesday) = P(Heads) = 1⁄2
This works: here the probability that the coin is Heads in this iteration of the experiment happens to be the same as what our intuition tells us P(Heads|Today) is supposed to be. However, we still can’t define “Today is Monday”:
P(Monday|Monday or Tuesday) = P(Monday) = 1
Which doesn’t fit our intuition.
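The three statements above can be checked by counting per iteration of the experiment. A minimal sketch (variable names are mine):

```python
import random

rng = random.Random(0)
n = 100_000
heads_count = 0
awakened_on_monday = 0  # counted per *iteration*, not per awakening
for _ in range(n):
    heads = rng.random() < 0.5
    heads_count += heads
    # Heads: awakened on Monday only; Tails: on Monday and Tuesday.
    days = {"Mon"} if heads else {"Mon", "Tue"}
    if "Mon" in days:  # true in every single iteration
        awakened_on_monday += 1

print(heads_count / n)         # ≈ 0.5 : P(Heads) = P(Heads|Monday or Tuesday)
print(awakened_on_monday / n)  # 1.0  : P(Monday) = 1
```

The event “awakened on Monday” occurs in every iteration, which is why conditioning on “Monday or Tuesday” changes nothing.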
How can this be? How can we have a seemingly well-defined probability for “Today the coin is Heads” but not for “Today is Monday”? Either “Today” is well-defined or it’s not, right? Take some time to think about it.
What do we actually mean when we say that on an awakening the participant is supposed to believe that the coin is Heads with 50% probability? Is it really about this day in particular? Or is it about something else?
The answer is: we actually mean that on any day of the experiment, be it Monday or Tuesday, the participant is supposed to believe that the coin is Heads with 50% probability. We cannot formally specify “Today” in this problem, but there is a clever, almost cheating way to specify “Anyday” without breaking anything.
This is not easy. It requires a way to define P(A|B) when P(B) itself is undefined, which is unconventional. Moreover, it requires symmetry: P(Heads|Monday) has to be equal to P(Heads|Tuesday); only then do we have a coherent P(Heads|Anyday).
This makes me uncomfortable. From the perspective of sleeping beauty, who just woke up, the statement “today is Monday” is either true or false (she just doesn’t know which one). Yet you claim she can’t meaningfully assign it a probability. This feels wrong, and yet, if I try to claim that the probability is, say, 2⁄3, then you will ask me “in what sample space?” and I don’t know the answer.
What seems clear is that the sample space is not the usual sleeping beauty sample space; it has to run metaphorically “skew” to it somehow.
If the question were “did the coin land on heads”, then it’s clear that this question is of the form “what world am I in?”. Namely, “am I in a world where the coin landed on heads, or not?”
Likewise if we ask “does a Tuesday awakening happen?”… that maps easily to a question about the coin, so it’s safe.
But there should be a way to ask about today as well, I think. Let’s try something naive first and see where it breaks. P(today is Monday | heads) = 100% is fine. (Or is that tails? I keep forgetting.) P(today is Monday | tails) = 50% is fine too. (Or maybe it’s not? Maybe this is where I’m going wrong? Needs a bit of work, but I suspect I could formalize that one if I had to.) But if those are both fine, we should be able to combine them: heads and tails are mutually exclusive and one of them must happen, so P(today is Monday) = P(heads) · P(today is Monday | heads) + P(tails) · P(today is Monday | tails) = 0.5 + 0.25 = 0.75. Okay, I was expecting to get 2⁄3 here. Odd. More to the point, this felt like cheating and I can’t put my finger on why. Maybe I need to think more later.
Where does the feeling of wrongness come from? Were you under the impression that probability theory promised to always assign some measure to any statement in natural language? It just so happens that most of the time we can construct an appropriate probability space. But the actual rule is about whether or not we can construct a probability space, not whether or not something is a statement in natural language.
Is it really so surprising that a person who is experiencing amnesia and the repetition of the same experience, while being fully aware of the procedure, can’t meaningfully assign credence to “this is the first time I have this experience”? Don’t you think there has to be some kind of problem with the Beauty’s knowledge state? The situation where, due to memory erasure, the Beauty loses the ability to coherently reason about some statements makes much more sense than the alternative proposed by thirdism, according to which the Beauty becomes more confident in the state of the coin than she would’ve been if she didn’t have her memory erased.
Another intuition pump is that “today is Monday” is not actually True xor False from the perspective of the Beauty. From her perspective it’s True xor (True and False). This is because on Tails, the Beauty isn’t reasoning just for some one awakening—she is reasoning for both of them at the same time. When she awakens the first time the statement “today is Monday” is True, and when she awakens the second time the same statement is False. So the statement “today is Monday” doesn’t have a stable truth value throughout the whole iteration of the probability experiment. Suppose the Beauty really does not want to make false statements. Deciding to say out loud “Today is Monday” leads to making a false statement in 100% of the iterations of the experiment where the coin is Tails.
Here you are describing Lewis’s model, which is appropriate for the Single Awakening problem. There the Beauty is awakened on Monday if the coin is Heads, and if the coin is Tails, she is awakened either on Monday or on Tuesday (not both). It’s easy to see that 75% of awakenings in such an experiment indeed happen on Monday.
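A quick simulation of the Single Awakening setup supports this count (the `single_awakening` helper is illustrative, not from the posts):

```python
import random

def single_awakening(n=200_000, seed=0):
    """Single Awakening problem: Heads -> awaken on Monday only;
    Tails -> awaken on exactly one of Monday/Tuesday, chosen at random.
    Returns the fraction of awakenings that fall on Monday."""
    rng = random.Random(seed)
    monday = 0
    for _ in range(n):
        if rng.random() < 0.5:          # Heads: the awakening is on Monday
            day = "Mon"
        else:                            # Tails: one random day
            day = rng.choice(["Mon", "Tue"])
        monday += day == "Mon"
    return monday / n

print(single_awakening())  # ≈ 0.75
```

Half of all awakenings come from Heads (always Monday) and half from Tails (Monday half the time), giving 0.5 + 0.25 = 0.75, the same number as the naive calculation above.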
It’s very good that you notice this feeling of cheating. This is a very important virtue. This is what helped me construct the correct model and solve the problem in the first place—I couldn’t accept any other—they all were somewhat off.
I think you feel this way because you’ve started solving the problem from the wrong end, arguing with the math instead of accepting it. You noticed that you can’t define “Today is Monday” normally, so you just assumed as an axiom that you can.
But as soon as you assume that “Today is Monday” is a coherent event with a stable truth value throughout the experiment, you inevitably start talking about a different problem, where it’s indeed the case. Where there is only one awakening in any iteration of probability experiment and so you can formally construct a sample space where “Today is Monday” is an elementary mutually exclusive outcome. There is no way around it. Either you model the problem as it is, and then “Today is Monday” is not a coherent event, or you assume that it is coherent and then you are modelling some other problem.
Ah, so I’ve reinvented the Lewis model. And I suppose that means I’ve inherited its problem where being told that today is Monday makes me think the coin is most likely heads. Oops. And I was just about to claim that there are no contradictions. Sigh.
Okay, I’m starting to understand your claim. To assign a number to P(today is Monday) we basically have two choices. We could just Make Stuff Up and say that it’s 53% or whatever. Or we could at least attempt to do Actual Math. And if our attempt at actual math is coherent enough, then there’s an implicit probability model lurking there, which we can then try to reverse engineer, similar to how you found the Lewis model lurking just beneath the surface of my attempt at math. And once the model is in hand, we can start deriving consequences from it, and lo and behold, before long we have a contradiction, like the Lewis model claiming we can predict the result of a coin flip that hasn’t even happened yet just because we know today is Monday.
And I see now why I personally find the Lewis model so tempting… I was trying to find “small” perturbations of the experiment where “today is Monday” clearly has a well-defined probability. But I kept trying to use Rare Events to do it, and these change the problem even if the Rare Event is not Observed. (Like, “supposing that my house gets hit by a tornado tomorrow, what is the probability that today is Monday” is fine. Come to think of it, that doesn’t follow the Lewis model. Whatever, it’s still fine.)
As for why I find this uncomfortable: I knew that not every string of English words gets a probability, but I was naïve enough to think that all statements that are either true or false get one. And in particular I was hoping that this sequence of posts, which kept saying “don’t worry about anthropics, just be careful with the basics and you’ll get the right answer”, would show how to answer all possible variations of these “sleep study” questions… instead it turns out that it answers half the questions (the half that ask about the coin) while the other half is shown to be hopeless… and the reason why it’s hopeless really does seem to have an anthropic flavor to it.
Well, I think this one is actually correct. But, as I said in the previous comment, the statement “Today is Monday” doesn’t actually have a coherent truth value throughout the probability experiment. It’s not either True or False. It’s either True or True and False at the same time!
We can answer every coherently formulated question. Everything that is formally defined has an answer. Being careful with the basics allows us to understand which questions are coherent and which are not. This is the same principle as with every probability theory problem.
Consider Sleeping-Beauty experiment without memory loss. There, the event Monday xor Tuesday also can’t be said to always happen. And likewise “Today is Monday” also doesn’t have a stable truth value throughout the whole experiment.
Once again, we can’t express the Beauty’s uncertainty between the two days using probability theory. We are just not paying attention to it because, by the conditions of the experiment, the Beauty is never in such a state of uncertainty. If she remembers a previous awakening, then it’s Tuesday; if she doesn’t, then it’s Monday.
All the pieces of the issue are already present. The addition of memory loss just makes it obvious that there is a problem with our intuition.
Re: no coherent “stable” truth value: indeed. But still… if she wonders out loud “what day is it?”, at the very moment she says that, it has an answer. An experimenter who overhears her knows the answer. It seems to me that the way you “resolve” this tension is to say that the two of them are technically asking different questions, even though they are using the same words. But still… how surprised should she be if she were to learn that today is Monday? It seems that, taking your stance to its conclusion, the answer would be “zero surprise: she knew for sure she would wake up on Monday, so no need to be surprised it happened”.
And even if she were to learn that the coin landed tails, so she knows that this is just one of a total of two awakenings, she should have zero surprise upon learning the day of the week, since she now knows both awakenings must happen. Which seems to violate conservation of expected evidence, except you already said that there are no coherent probabilities here for that particular question, so that’s fine too.
This makes sense, but I’m not used to it. For instance, I’m used to these questions having the same answer:
P(today is Monday)?
P(today is Monday | the sleep lab gets hit by a tornado)
Yet here, the second question is fine (assuming tornadoes are rare enough that we can ignore the chance of two on consecutive days), while the first makes no sense because we can’t even define “today”.
It makes sense, but it’s very disorienting, like incompleteness-theorem levels of disorientation, or even more.
There is no “but”. As long as the Beauty is unable to distinguish between Monday and Tuesday awakenings, as long as the decision process which leads her to say the phrase “what day is it” works the same way, from her perspective there is no single “very moment she says that”. On Tails, there are two different moments when she says this, and the answer is different for them. So there is no answer for her.
Yes, you are correct. From the position of the experimenter, who knows which day it is, or who is hired to work only on one random day this is a coherent question with an actual answer. The words we use are the same but mathematical formalism is different.
For an experimenter who knows that it’s Monday the probability that today is Monday is simply:
P(Monday|Monday) = 1
For an experimenter who is hired to work only on one random day it is:
P(Monday|Monday xor Tuesday) = 1⁄2
Completely correct. The Beauty knew that she would be awakened on Monday either way, and so she is not surprised. This is a standard thing with non-mutually-exclusive events. Consider this:
A coin is tossed and you are put to sleep. On Heads there will be a red ball in your room. On Tails there will be a red and a blue ball in your room. How surprised should you be to find a red ball in your room?
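A tiny simulation of this ball example (a sketch with my own variable names) makes the point concrete: a red ball is present in every iteration, so finding one carries zero surprise.

```python
import random

rng = random.Random(0)
n = 100_000
red_present = 0
for _ in range(n):
    heads = rng.random() < 0.5
    # Heads: one red ball; Tails: a red and a blue ball.
    balls = {"red"} if heads else {"red", "blue"}
    red_present += "red" in balls

# A red ball is present either way, so seeing one is certain.
print(red_present / n)  # 1.0
```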
The appearance of a violation of conservation of expected evidence comes from the belief that awakening on Monday and on Tuesday are mutually exclusive, while they are, in fact, sequential.
I completely understand. It is counterintuitive because evolution didn’t prepare us to deal with situations where the same experience is repeated while we undergo memory loss. As I write in the post:
The whole paradox arises from this issue with our intuition, and just like with the incompleteness theorem (thanks for the flattering comparison, btw), what we need to do now is to recalibrate our intuitions, make them more accustomed to the truth preserved by the math, instead of trying to fight it.
Thanks :) the recalibration may take a while… my intuition is still fighting ;)
Consider that in the real world Tuesday always happens after Monday. Do you agree or disagree: is it incorrect to model a real-world agent’s knowledge about today being Monday with probability?
Again, that depends.
I think I talk about something like what you point to here: