There is no change of probabilities, because there are no probabilities without outcome space.
Are you saying that probability of Heads is in principle not defined before the Beauty awakens in the experiment? Or just that it can’t be defined if we assume that Elga’s model is true? Because if it’s the latter—it’s not a point in favor of Elga’s model.
And why “the model doesn’t represent ‘today is Monday’” is not weird
Because such an event doesn’t have a well-defined probability in the setting of Sleeping Beauty. I’ve shown this in the Math vs Intuition section, but probably it wasn’t clear enough. Let’s walk through it once more.
Try rigorously specifying the event “today is Monday” in the Sleeping Beauty problem. What does “today” mean?
For example, in the No-Coin-Toss problem it means Monday xor Tuesday; in other words, it’s a variable from the set: today ∊ {Monday, Tuesday}. But in Sleeping Beauty we can’t define “today” this way, because on Tails both Monday and Tuesday happen, so the variable “today” would have to take two different values during the same experiment.
Or you may define “today” as “Monday or Tuesday”. But then the event “today is Monday” always happens, and P(Monday)=1.
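To make the contrast concrete, here is a minimal sketch; the 50/50 split between the days in No-Coin-Toss and the function names are just illustrative assumptions of mine:

```python
import random

def no_coin_toss_trial():
    # "today" is a genuine variable here: each repetition selects exactly one day,
    # Monday xor Tuesday (equal odds assumed purely for illustration).
    return random.choice(["Monday", "Tuesday"])

def sleeping_beauty_trial():
    # There is no single value "today" could take for the whole repetition:
    # on Tails, both Monday and Tuesday happen, one after the other.
    if random.random() < 0.5:
        return ["Monday"]               # Heads: one awakening
    return ["Monday", "Tuesday"]        # Tails: two awakenings, in order
```

In the first function “today is Monday” is a well-defined event of the repetition; in the second it would have to be both true and false within the same Tails repetition.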
when that was what you wanted to know in the first place?
The main question of the Sleeping Beauty problem is what her credence for Heads should be when she is awakened while participating in the experiment. This is the question my model is answering. People just mistakenly assume that it means “What is your credence specifically today?”, because they think that “today” is a coherent variable in Sleeping Beauty, while it’s not.
Wouldn’t it fail the statistical test if we simulated only subjective experience of Beauty?
We are simulating only the subjective experience of the Beauty. We are not adding to the list Heads&Tuesday, during which the Beauty is asleep; we record only the states in which she is awake and thus able to subjectively experience whatever is going on. And these subjective experiences still exist in the setting of the experiment, where Tuesday follows Monday.
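For concreteness, a minimal sketch of the kind of simulation I mean (my reconstruction here, not the exact code from the post):

```python
import random

def simulate_awakenings(n_repetitions):
    """Record only the states Beauty is awake for, in the order they occur:
    on Tails, the Monday awakening is followed by the Tuesday awakening."""
    log = []
    for _ in range(n_repetitions):
        if random.random() < 0.5:          # Heads: a single awakening
            log.append("Heads&Monday")
        else:                              # Tails: two awakenings, in order
            log.append("Tails&Monday")
            log.append("Tails&Tuesday")
    return log

random.seed(0)
log = simulate_awakenings(100_000)
for state in ("Heads&Monday", "Tails&Monday", "Tails&Tuesday"):
    print(state, round(log.count(state) / len(log), 3))
# Each state makes up about 1/3 of the list, but Heads and Tails each occur in
# about half of the repetitions, and every Tails&Tuesday entry immediately
# follows a Tails&Monday entry.
```

Note that the Tails entries appear in ordered pairs; this is the statistical signature discussed below.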
I suppose you mean something else by “subjective experience of Beauty”? What is it? Are you under the impression that Beauty subjectively experiences her awakenings in random order due to the amnesia? I deal with this argument in the Effects of the Amnesia section.
It’s not a theorem of probability theory that Sleeping Beauty is a problem, where Tails&Monday and Tails&Tuesday happen sequentially and, therefore, are not mutually exclusive.
It’s just the definition of the problem that when the coin is Tails, the Monday awakening happens first and then the Tuesday awakening. We do not have a disagreement here, do we? The crux then has to be that you believe that probability theory allows for events which are simultaneously sequential and mutually exclusive.
So let’s prove a simple lemma: if two events are sequential, they are not mutually exclusive.
Let A1 and A2 be two sequential events so that A2 happens always and only after A1.
Now, suppose they are also mutually exclusive. Then we can construct a sample space Ω={A1, A2} and a probability space (Ω,F,P), satisfying the axioms of probability theory.
Since Ω = A1 ∪ A2 and the events are assumed disjoint, Kolmogorov’s second and third axioms give:
P(A1) + P(A2) = 1
There are two possibilities: either P(A1) = 0 or P(A1) > 0.
If P(A1) = 0, then P(A2) = 1, so A2 happens even though A1 doesn’t. This contradicts the initial premise that A2 happens only after A1.
So P(A1) > 0 and, therefore, P(A2) < 1.
But then with probability 1 − P(A2) > 0, A2 does not happen even though A1 has happened. This contradicts the initial premise that A2 always happens after A1.
Q.E.D.
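The same argument in compact form, purely a restatement of the steps above:

```latex
\textbf{Lemma.} If $A_1$ and $A_2$ are sequential ($A_2$ happens always and only after $A_1$),
they are not mutually exclusive.

\textbf{Proof sketch.} Suppose they were, with $\Omega = \{A_1, A_2\}$. Then
\[
  P(A_1) + P(A_2) = 1 .
\]
If $P(A_1) = 0$, then $P(A_2) = 1$, so $A_2$ happens without $A_1$, contradicting ``only after $A_1$''.
If $P(A_1) > 0$, then $P(A_2) = 1 - P(A_1) < 1$, so with probability $1 - P(A_2) > 0$
the event $A_2$ fails to follow $A_1$, contradicting ``always after $A_1$''. $\square$
```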
But it’s not math, it’s your objectivity biased “no-weirdness” principle.
No, the abstract talk about weirdness was in the previous post where I was just hinting at things and prompting the readers to notice their own confusion and thus derive the right answer on their own.
Here the situation is quite different. We can now directly observe that the models from the previous post produce series of outcomes statistically different from the Sleeping Beauty conditions, and that they treat sequential events as mutually exclusive due to a completely unfounded assumption of some philosopher.
Without it you can use Elga’s model to get more knowledge for yourself in some sense.
If two models claim that you get a different amount of knowledge while observing the same data, one is definitely wrong. I’ve already explained what Elga’s model does wrong: it’s talking about a random awakening in Sleeping Beauty. So if we think that it is correct for the current awakening, we have to smuggle in the notion that the current awakening is random, which isn’t specified in the conditions of the experiment. This may give you the impression that you learn more, but that’s because you’ve unlawfully assumed a thing.
You’ve shown that there is a persuasive argument for treating Monday and Tuesday as both happening simultaneously, that it is possible to treat them like this. But you haven’t shown that they definitely can’t be treated differently.
I’ve shown that this is the default way to deal with this problem according to probability theory as it is, without making any extra assumptions out of nowhere, and that these assumptions, which people for some bizarre reason really like to make, are generally not true and are not based on anything of substance. This is a much stronger claim.
Are you saying that probability of Heads is in principle not defined before the Beauty awakens in the experiment? Or just that it can’t be defined if we assume that Elga’s model is true? Because if it’s the latter—it’s not a point in favor of Elga’s model.
It can’t be usefully defined if we assume that Elga’s model is true. I agree that it is not a point in favor. Doesn’t mean we can’t use it instead of assuming it is true.
Try rigorously specifying the event “today is Monday” in the Sleeping Beauty problem.
What do you mean by “rigorously”? “Rigorously” as in “using probability theory”: it is specified as Monday in Elga’s model. “Rigorously” as in “connected to reality”: today is specified as Monday on physical Monday, and Tuesday on physical Tuesday.
It’s just the definition of the problem that when the coin is Tails, the Monday awakening happens first and then the Tuesday awakening. We do not have a disagreement here, do we?
We do! You are using the wrong “happens” and “then” in the definition: the actual definition uses words connected to reality, not parts of probability theory. It’s not a theorem of probability theory that if an event physically happens, it has P > 0. And “awakening happens” is not even directly represented in Elga’s model.
Yes, it’s all unreasonable pedantry, but you are just all like “Math! Math!”.
The main question of the Sleeping Beauty problem is what her credence for Heads should be when she is awakened while participating in the experiment. This is the question my model is answering. People just mistakenly assume that it means “What is your credence specifically today?”, because they think that “today” is a coherent variable in Sleeping Beauty, while it’s not.
On wiki it’s “When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?”—notice the “ought”^^. And the point is mostly that humans have selfish preferences.
I suppose you mean something else by “subjective experience of Beauty”?
Nah, I was just wrong. But… Ugh, I’m not sure about this part. First of all, Elga’s model doesn’t have “Beauty awakened on Monday” or whatever you simulate—how do you compare statistics with different outcomes? And what would happen, if Beauty performed simulation instead of you? I think then Elga’s model would be statistically closest, right? Also what if we tell Beauty what day it is after she tells her credence—would you then change your simulation to have 1⁄3 Heads?
I’ve already explained what Elga’s model does wrong: it’s talking about a random awakening in Sleeping Beauty. So if we think that it is correct for the *current awakening*, we have to smuggle in the notion that the current awakening is random, which isn’t specified in the conditions of the experiment. This may give you the impression that you learn more, but that’s because you’ve unlawfully assumed a thing.
No, that’s the point—it means they are using different definitions of knowledge. You can use Elga’s model without assuming randomness of an awakening, whatever that means. You’ll need a preferred definition of knowledge instead, but everyone already has preferences.
I’ve shown that this is the default way to deal with this problem according to probability theory as it is, without making any extra assumptions out of nowhere.
“Default” doesn’t mean “better”—if extra assumptions give you what you want, then it’s better to make more assumptions.
It can’t be usefully defined if we assume that Elga’s model is true. I agree that it is not a point in favor. Doesn’t mean we can’t use it instead of assuming it is true.
No disagreement here, then. Indeed, we can use wrong models as some form of approximation; we just have to be aware of the fact that they are wrong and not insist on their results when they contradict the results of correct models.
What do you mean by “rigorously”?
As in: what do you mean by “today” in logical terms? I gave you a very good example of how it’s done with the No-Coin-Toss and Single-Awakening problems.
Yes, it’s all unreasonable pedantry, but you are just all like “Math! Math!”.
It’s not unreasonable pedantry. It’s an isolated demand for rigor on your part.
I do not demand from Elga’s model anything my model doesn’t do. I’m not describing my model in vaguer language than the one I used while describing Elga’s.
You, on the other hand, in an attempt to defend it, suddenly pretend that you don’t know what “event happens” means and demand a formal proof that events that happen have probability more than zero. We can theoretically go this route. Wikipedia’s article on probability space covers the basics. But do you really want to lose more time on obvious things that we do not actually disagree about?
On wiki it’s “When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?”—notice the “ought”^^. And the point is mostly that humans have selfish preferences.
First awakened? Then even Elga’s model agrees that P(Heads|Monday)=1/2, since P(Heads&Monday)/(P(Heads&Monday)+P(Tails&Monday)) = (1⁄3)/(2⁄3) = 1/2.
No, the question is about how she is supposed to reason anytime she is awakened, not just the first one.
Nah, I was just wrong. But… Ugh, I’m not sure about this part.
Thank you for noticing it. I’d recommend taking some time to reflect on the new evidence that you didn’t expect.
First of all, Elga’s model doesn’t have “Beauty awakened on Monday” or whatever you simulate
What else does the event “Monday” that has 2⁄3 probability mean, then? According to Elga’s model there are three mutually exclusive outcomes: Heads&Monday, Tails&Monday, Tails&Tuesday, corresponding to the three possible awakening states of the Beauty. What do you disagree with here?
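For comparison, here is a minimal sketch of what sampling awakenings under Elga’s model would look like, assuming the usual 1⁄3 probability for each of the three outcomes (an illustration of mine, not code from the post):

```python
import random

ELGA_OUTCOMES = ("Heads&Monday", "Tails&Monday", "Tails&Tuesday")

def elga_awakening():
    # One mutually exclusive outcome per awakening, each with probability 1/3.
    return random.choice(ELGA_OUTCOMES)

random.seed(0)
draws = [elga_awakening() for _ in range(100_000)]
p_monday = sum(d.endswith("&Monday") for d in draws) / len(draws)
print(round(p_monday, 3))   # about 2/3, the P(Monday) mentioned above
```

The draws here are independent, so Tails days do not come in ordered Monday-then-Tuesday pairs the way they do in the sequential simulation; that is exactly the statistical difference I keep pointing to.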
And what would happen, if Beauty performed simulation instead of you? I think then Elga’s model would be statistically closest, right?
I do not understand what you mean here. Beauty is part of the simulation. Nothing prevents any person from running the same code and getting the same results.
Also what if we tell Beauty what day it is after she tells her credence—would you then change your simulation to have 1⁄3 Heads?
Why would it? The simulation shows which awakenings the Beauty goes through on each repetition of the experiment as it is described, so that we can investigate the statistical properties of these awakenings.
No, that’s the point—it means they are using different definitions of knowledge.
How is the definition of knowledge relevant to probability theory? I suppose, if someone redefines “knowledge” as “being wrong”, then yes, under such a definition the Beauty should not accept the correct model, but why would we do that?
“Default” doesn’t mean “better”—if extra assumptions give you what you want, then it’s better to make more assumptions.
It means it doesn’t require any further justification. You are free to make any other assumptions if you manage to justify them; the burden of proof is on you. As I point out in the post, no one has managed to justify all this “centered worlds” kind of reasoning, thus we ought to discard it until it is formally proved to be applicable to probability theory.
What else does the event “Monday” that has 2⁄3 probability mean, then?
It means “today is Monday”.
I do not understand what you mean here. Beauty is part of the simulation. Nothing prevents any person from running the same code and getting the same results.
I mean, what will happen if Beauty runs the same code? Like you said, “any person”—what if this person is Beauty during the experiment? If we then compare the combined statistics, which model will be closer to reality?
Why would it?
My thinking is that then Beauty would experience more Tails and the simulation would have to reproduce that.
How is the definition of knowledge relevant to probability theory? I suppose, if someone redefines “knowledge” as “being wrong”, then yes, under such a definition the Beauty should not accept the correct model, but why would we do that?
The point of using probability theory is to be right. That’s why your simulations have persuasive power. But a different definition of knowledge may value the average knowledge of Beauty’s awake moments instead of the knowledge of an outside observer.
And Beauty is awakened, because all the outcomes represent Beauty’s awakened states. Which is “Beauty is awakened today which is Monday” or simply “Beauty is awakened on Monday” just as I was saying.
I mean, what will happen if Beauty runs the same code? Like you said, “any person”—what if this person is Beauty during the experiment? If we then compare the combined statistics, which model will be closer to reality?
Nothing out of the ordinary. The Beauty will generate the list with the same statistical properties. Two lists if the coin is Tails.
My thinking is that then Beauty would experience more Tails and the simulation would have to reproduce that.
The simulation already reproduces that. Only 1⁄3 of the elements of the list are Heads&Monday. You should probably try running the code yourself to see how it works, because I have a feeling that you are missing something.
Oh, right, I missed that your simulation has 1⁄3 Heads. Thank you for your patient cooperation in finding mistakes in your arguments, by the way. So, why is it ok for a simulation of an outcome with 1⁄2 probability to have 1⁄3 frequency? That sounds like a more serious failure of the statistical test.
Nothing out of the ordinary. The Beauty will generate the list with the same statistical properties. Two lists if the coin is Tails.
I imagined that the Beauty would sample just once. And then if we combine all samples into a list, we will see that if the Beauty uses your model, the list will fail the “have the correct number of days” test.
Which is “Beauty is awakened today which is Monday” or simply “Beauty is awakened on Monday” just as I was saying.
They are not the same thing? The first one is false on Tuesday.
(I’m also interested in your thoughts about copies in another thread).
So, why is it ok for a simulation of an outcome with 1⁄2 probability to have 1⁄3 frequency?
There are only two outcomes, and both of them have 1⁄2 probability and 1⁄2 frequency. The code saves awakenings to the list, not outcomes.
People mistakenly assume that three awakenings mean three elementary outcomes. But as the simulation shows, there is an order between the awakenings, and so they can’t be treated as individual outcomes. The Tails&Monday and Tails&Tuesday awakenings are parts of the same outcome.
If this still doesn’t feel obvious, consider this. You have a list of Heads and Tails, and you need to distinguish between two hypotheses. Either the coin is unfair and P(Tails)=2/3, or the coin is fair but whenever it came up Tails, the outcome was written twice in the list, while for Heads it was written only once. You check whether the outcomes are randomly spread or whether Tails follow one another in pairs. In the second case, even though the frequency of Tails in the list is twice as high as that of Heads, P(Tails)=P(Heads)=1/2.
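Here is a minimal sketch of that check; the helper names are mine, and looking for “isolated Tails” is just one concrete way to test whether Tails come in adjacent pairs:

```python
import random

def unfair_coin_list(n):
    """Hypothesis 1: biased coin, P(Tails) = 2/3, one entry per toss."""
    return ["T" if random.random() < 2 / 3 else "H" for _ in range(n)]

def fair_coin_double_write_list(n):
    """Hypothesis 2: fair coin, but every Tails is written twice in a row."""
    out = []
    for _ in range(n):
        out.extend(["T", "T"] if random.random() < 0.5 else ["H"])
    return out

def has_isolated_tails(entries):
    """A Tails entry with no adjacent Tails is impossible under hypothesis 2,
    where Tails always come in pairs, but common under hypothesis 1."""
    for i, x in enumerate(entries):
        if x == "T":
            left = entries[i - 1] if i > 0 else "H"
            right = entries[i + 1] if i + 1 < len(entries) else "H"
            if left != "T" and right != "T":
                return True
    return False

random.seed(0)
print(has_isolated_tails(unfair_coin_list(10_000)))            # True: randomly spread
print(has_isolated_tails(fair_coin_double_write_list(10_000))) # False: Tails paired
```

In both lists roughly two thirds of the entries are Tails, yet only the second one is compatible with a fair coin, which is the point above.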