It can’t be usefully defined if we assume that Elga’s model is true. I agree that this is not a point in its favor. That doesn’t mean we can’t use the model without assuming it is true.
No disagreement here, then. Indeed, we can use wrong models as a form of approximation; we just have to be aware that they are wrong and not insist on their results when they contradict the results of correct models.
What do you mean by “rigorously”?
As in, what do you mean by “today” in logical terms? I gave you a very good example of how it’s done with the No-Coin-Toss and Single-Awakening problems.
It’s not unreasonable pedantry. It’s an isolated demand for rigor on your part.
I do not demand anything from Elga’s model that my model doesn’t do. Nor am I using vaguer language to describe my model than the language I used to describe Elga’s.
You, on the other hand, in an attempt to defend it, suddenly pretend that you don’t know what “event happens” means and demand a formal proof that events that happen have probability more than zero. We can theoretically go this route. Wikipedia’s article on probability space covers the basics. But do you really want to lose more time on obvious things that we do not actually disagree about?
On the wiki it’s “When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?”—notice the “ought”^^. And the point is mostly that humans have selfish preferences.
First awakened? Then even Elga’s model agrees that P(Heads|Monday)=1/2
No, the question is about how she is supposed to reason anytime she is awakened, not just the first one.
Nah, I was just wrong. But… Ugh, I’m not sure about this part.
Thank you for noticing it. I’d recommend taking some time to reflect on the new evidence that you didn’t expect.
First of all, Elga’s model doesn’t have “Beauty awakened on Monday” or whatever it is you simulate.
What else does the event “Monday” that has probability 2/3 mean, then? According to Elga’s model there are three mutually exclusive outcomes: Heads&Monday, Tails&Monday, Tails&Tuesday, corresponding to the three possible awakening states of the Beauty. What do you disagree with here?
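(To spell out the arithmetic behind these numbers: in Elga’s model P(Heads&Monday) = P(Tails&Monday) = P(Tails&Tuesday) = 1/3, so P(Monday) = P(Heads&Monday) + P(Tails&Monday) = 1/3 + 1/3 = 2/3, and P(Heads|Monday) = P(Heads&Monday)/P(Monday) = (1/3)/(2/3) = 1/2, which is the figure mentioned earlier.)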
And what would happen if Beauty performed the simulation instead of you? I think Elga’s model would then be statistically closest, right?
I don’t understand what you mean here. Beauty is part of the simulation. Nothing prevents any person from running the same code and getting the same results.
Also what if we tell Beauty what day it is after she tells her credence—would you then change your simulation to have 1⁄3 Heads?
Why would it? The simulation shows which awakening the Beauty is going through on a repetition of the experiment as it is described, so that we can investigate the statistical properties of these awakenings.
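For concreteness, here is a minimal sketch of what such a simulation can look like (illustrative toy code with names of my own choosing, not necessarily the exact code from the post). It repeats the experiment many times, appends every awakening to a single list, and then reports the per-experiment frequency of Heads alongside the per-awakening frequency of Heads&Monday:

```python
import random

def run_experiments(n=100_000):
    """Repeat the Sleeping Beauty experiment n times,
    recording every awakening in one list."""
    coins = []       # one entry per experiment: the coin toss result
    awakenings = []  # one entry per awakening of the Beauty
    for _ in range(n):
        coin = random.choice(["Heads", "Tails"])
        coins.append(coin)
        awakenings.append((coin, "Monday"))       # Beauty is always awakened on Monday
        if coin == "Tails":
            awakenings.append((coin, "Tuesday"))  # a second awakening only on Tails
    return coins, awakenings

coins, awakenings = run_experiments()
print(coins.count("Heads") / len(coins))                        # ~1/2 of experiments are Heads
print(awakenings.count(("Heads", "Monday")) / len(awakenings))  # ~1/3 of awakenings are Heads&Monday
```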
No, that’s the point—it means they are using different definitions of knowledge.
How is the definition of knowledge relevant to probability theory? I suppose if someone redefines “knowledge” as “being wrong”, then yes, by that definition the Beauty should not accept the correct model, but why would we do that?
“Default” doesn’t mean “better”—if extra assumptions give you what you want, then it’s better to make more assumptions.
It means it doesn’t require any further justification. You are free to make any other assumptions if you manage to justify them—the burden of proof is on you. As I point out in the post, no one has managed to justify all this “centered worlds” kind of reasoning, thus we ought to discard it until it is formally proven to be applicable to probability theory.
What else does the event “Monday” that has probability 2/3 mean, then?
It means “today is Monday”.
I don’t understand what you mean here. Beauty is part of the simulation. Nothing prevents any person from running the same code and getting the same results.
I mean, what will happen if Beauty runs the same code? Like you said, “any person”—what if this person is Beauty during the experiment? If we then compare the combined statistics, which model will be closer to reality?
Why would it?
My thinking is that the Beauty would then experience more Tails, and the simulation would have to reproduce that.
How is the definition of knowledge relevant to probability theory? I suppose if someone redefines “knowledge” as “being wrong”, then yes, by that definition the Beauty should not accept the correct model, but why would we do that?
The point of using probability theory is to be right. That’s why your simulations have persuasive power. But a different definition of knowledge may value the average knowledge across the Beauty’s awake moments instead of the knowledge of an outside observer.
And Beauty is awakened, because all the outcomes represent Beauty’s awakened states. Which is “Beauty is awakened today, which is Monday”, or simply “Beauty is awakened on Monday”, just as I was saying.
I mean, what will happen if Beauty runs the same code? Like you said, “any person”—what if this person is Beauty during the experiment? If we then compare the combined statistics, which model will be closer to reality?
Nothing out of the ordinary. The Beauty will generate the list with the same statistical properties. Two lists if the coin is Tails.
My thinking is that the Beauty would then experience more Tails, and the simulation would have to reproduce that.
The simulation already reproduces that. Only 1⁄3 of the elements of the list are Heads&Monday. You should probably try running the code yourself to see how it works, because I have a feeling that you are missing something.
Oh, right, I missed that your simulation has 1⁄3 Heads. Thank you for your patient cooperation in finding mistakes in your arguments, by the way. So, why is it ok for a simulation of an outcome with 1⁄2 probability to have 1⁄3 frequency? That sounds like a more serious failure of the statistical test.
Nothing out of the ordinary. The Beauty will generate the list with the same statistical properties. Two lists if the coin is Tails.
I imagined that the Beauty would sample just once. And then, if we combine all the samples into a list, we will see that if the Beauty uses your model, the list will fail the “have the correct number of days” test.
Which is “Beauty is awakened today, which is Monday”, or simply “Beauty is awakened on Monday”, just as I was saying.
They are not the same thing? The first one is false on Tuesday.
(I’m also interested in your thoughts about copies in another thread).
So, why is it ok for a simulation of an outcome with 1⁄2 probability to have 1⁄3 frequency?
There are only two outcomes, and both of them have 1/2 probability and 1/2 frequency. The code saves awakenings in the list, not outcomes.
People mistakenly assume that three awakenings mean three elementary outcomes. But as the simulation shows, there is an order between the awakenings, so they can’t be treated as individual outcomes. The Tails&Monday and Tails&Tuesday awakenings are parts of the same outcome.
If this still doesn’t feel obvious, consider this. You have a list of Heads and Tails, and you need to distinguish between two hypotheses: either the coin is unfair and P(Tails)=2/3, or the coin is fair but whenever it came up Tails the outcome was written into the list twice, while for Heads only once. You check whether the outcomes are spread randomly or whether Tails tend to follow in pairs. In the second case, even though the frequency of Tails in the list is twice as high as that of Heads, P(Tails)=P(Heads)=1/2.
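A toy version of that test (again illustrative code of my own, with hypothetical names like biased_list and doubled_list): generate a list under each hypothesis and measure how often a Tails entry is immediately followed by another Tails.

```python
import random

def biased_list(n):
    """Hypothesis 1: independent tosses of an unfair coin with P(Tails) = 2/3."""
    return random.choices(["Heads", "Tails"], weights=[1, 2], k=n)

def doubled_list(n):
    """Hypothesis 2: a fair coin, but every Tails is written into the list twice in a row."""
    out = []
    while len(out) < n:
        if random.random() < 0.5:
            out.append("Heads")
        else:
            out.extend(["Tails", "Tails"])
    return out[:n]

def tails_after_tails(lst):
    """Fraction of Tails entries that are immediately followed by another Tails."""
    followers = [lst[i + 1] == "Tails"
                 for i in range(len(lst) - 1) if lst[i] == "Tails"]
    return sum(followers) / len(followers)

print(tails_after_tails(biased_list(100_000)))   # ~0.67: Tails are spread randomly
print(tails_after_tails(doubled_list(100_000)))  # ~0.75: Tails cluster in pairs
```

Under the first hypothesis the entries are independent, so the statistic stays near 2/3; under the second, the first Tails of every pair is followed by another Tails with certainty, pushing it up to about 3/4.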