The issue is that people read such explanations, nod in agreement, and then do not actually change their initial position on the question.
For what it’s worth, I changed my position from “it’s confusing, both answers seem to be inferred by reasonable steps” to “it depends on what you are trying to do” and stayed there without reverting to any single answer.
But here it works as a curiosity stopper to hide a valid mathematical problem.
If you try to minimize all curiosity stoppers, you become a philosopher. I don’t mind inventing some additional math and discussing it—it may even be useful in some broad range of cases. But if the original problem is undefined, then stating it is progress that shouldn’t be undone.
We are talking about “probability”—a mathematical concept with a quite precise definition.
Yeah, that’s the problem—specific probabilities are not defined; they depend on an arbitrary division of outcomes.
And, of course, everything should be justified in terms of probability theory, not vague philosophical concepts.
Probability theory does not specify an algorithm for translating English into an outcome space.
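To make this concrete, here is a minimal sketch (my own illustration, not part of the original exchange) showing how the choice of outcome space drives the answer in the standard Sleeping Beauty setup: treating each coin toss as one outcome gives a Heads frequency near 1/2, while treating each awakening as one outcome gives a frequency near 1/3. Both counts come from the same simulated experiment; only the division of outcomes differs.

```python
import random

def simulate(trials=100_000, seed=0):
    """Simulate Sleeping Beauty: on Heads she is woken once, on Tails twice.

    Returns the relative frequency of Heads under two outcome spaces:
      - per coin toss:  each experiment is one outcome  (tends to 1/2)
      - per awakening:  each awakening is one outcome   (tends to 1/3)
    """
    rng = random.Random(seed)
    heads_tosses = 0       # experiments that came up Heads
    heads_awakenings = 0   # awakenings that occur in a Heads experiment
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        if heads:
            heads_tosses += 1
            heads_awakenings += 1   # one awakening (Monday)
            total_awakenings += 1
        else:
            total_awakenings += 2   # two awakenings (Monday and Tuesday)
    return heads_tosses / trials, heads_awakenings / total_awakenings

per_toss, per_awakening = simulate()
print(per_toss, per_awakening)  # approximately 0.5 and 0.33
```

Nothing in the English statement of the problem dictates which of these two counts “probability of Heads” should refer to; that is exactly the translation step probability theory leaves unspecified.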
> I don’t mind inventing some additional math and discussing it—it may even be useful in some broad range of cases. But if the original problem is undefined, then stating it is progress that shouldn’t be undone.
Completely agree. The thing is, there is enough to discuss even without inventing any additional math. Apparently people were so eager to invent something new and exotic that they didn’t make sure it actually complies with the basics.
> Yeah, that’s the problem—specific probabilities are not defined; they depend on an arbitrary division of outcomes.
So I thought. And then I tried to actually check, and now I have enough material for several posts specifically about Sleeping Beauty.
> Probability theory does not specify an algorithm for translating English into an outcome space.
Yep, this is indeed a problem.
However, as math has the property of preserving truth, we can notice when we’ve made a mistake. When our model produces paradoxical results, it’s a very good hint that we made some wrong assumption while modeling the problem. My next post is going to be about it.