In the Sleeping Beauty problem, whether 2⁄3 or 1⁄2 is “right” is just a debate about words. The real issue is what kind of many-instance decision algorithm you are running.
Not quite. The question of what we mean by probability in this case is valid, but probability shouldn’t be just about bets. Probability is bound to a specific model of the situation, with a sample space, a probability measure, and events. The concept of “probability” doesn’t just mean “the password you use to win bets to your satisfaction”. Of course this depends on your ontological assumptions, but usually we are safe with a “possible worlds” model.
I’d like to hear what you and Wei Dai discuss that one further; I was taken with Wei’s insight that probability is for making decisions.…
It is for making decisions: specifically, for expressing preference under the expected-utility axioms, where a uniform distribution is suggested by indifference to the moral value of a set of outcomes and by the absence of prior knowledge about them. Preference is usually expressed about sets of possible worlds, and I don’t see how you can construct a natural sample space out of possible worlds that yields the answer of 2⁄3.
The sample space would be the three-element set {monday-tails, monday-heads, tuesday-tails} of possible sleeping beauty experience-moments.
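For concreteness, here is a quick simulation sketch (my own construction, not from the thread) of where the 2⁄3 would come from under that sample space: if you count experience-moments rather than possible worlds, tails accounts for roughly two of every three awakenings.

```python
import random

random.seed(0)

flips = 100_000
awakenings = []  # one entry per experience-moment, labeled as in the set above

for _ in range(flips):
    if random.random() < 0.5:      # heads: Beauty is woken only on Monday
        awakenings.append("monday-heads")
    else:                          # tails: woken on both Monday and Tuesday
        awakenings.append("monday-tails")
        awakenings.append("tuesday-tails")

# Per possible world, P(heads) is 1/2; per experience-moment it comes out
# near 1/3, which is where the thirder's 2/3 for tails comes from.
p_heads_per_awakening = awakenings.count("monday-heads") / len(awakenings)
print(p_heads_per_awakening)  # approximately 1/3
```

The disagreement in the exchange below is precisely over whether “awakenings” is a legitimate thing to put in the sample space, not over this arithmetic.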
Of course that’s the obvious answer, but it also has some problems that don’t seem easily redeemable. The sample space has to reflect the outcomes of one’s actions in the world on which preference is defined, which usually means the set of possible worlds. “Experience-moments” are not carved the right way (not mutually exclusive, can’t update on observations, etc.).
Experience moments are “mutually exclusive” in the sense that every experience moment can, in principle, be uniquely identified, and any given agent at any given time is having only one specific observer moment. However, there is the possibility of subjectively indistinguishable experiences. I don’t understand what you mean by “can’t update”.
By “can’t update” I refer to the problem with marking Thursday “impossible”, since you’ll encounter Thursday later.
> However there is the possibility of subjectively indistinguishable experiences.

It’s not a problem with the model of ontology and preference; it’s merely a matter of what kinds of observation events are expected.
> Experience moments are “mutually exclusive”, in the sense that every experience moment can be uniquely identified in theory, and any given agent at any given time is only having one specific observer moment.

If the goal is to identify an event corresponding to observations, in the form of a set of possible worlds, and there are different-looking observations that could correspond to the same event (e.g. observations made at different times in the same possible world), their difference is pure logical uncertainty. They differ, but only in the same sense that 2+2 and (7-5)*(9-7) differ, where you need only compute the denotation: the agent running on the described model doesn’t care about the difference, and indeed wants to factor it out.
Sorry, I don’t know of this problem. I thought that the days in this example were Monday and Tuesday—what’s going on with Thursday?
I humbly apologize for my inability to read (may the Values of Less Wrong be merciful).
Ah, OK. But I still don’t understand this:
What happens when you’ve observed that “it’s not Tuesday”, and the next day it’s Tuesday? Have you encountered an event of zero probability?
Hmm, my argument is summarized in this phrase:
If you update on your knowledge that “it’s not Tuesday”, it means that you’ve thrown away the parts of your sample space that contain the territory corresponding to Tuesday: marked them impossible, no longer part of what you can think about, of what you can expect to observe again (interpret as implied by observations). Assuming the model is honest, that you really do conceptualize the world through it, your mind is now blind to the possibility of Tuesday. Come Tuesday, you’ll be able to understand your observations in any way except as implying that it’s Tuesday, or that the events you observe are ones that could possibly occur on Tuesday.
This is no way to treat your mind. (But then again, I’m probably being too direct in applying the consequences of really believing what is being suggested, as in the case of Pascal’s Wager, for it to reflect the problem statement you consider.)
I don’t see how this is related to the problem of observer-moments—the argument above holds for any event X: “What if you’ve observed ~X, and then you find that X”. What’s the connection?
In a probability space where you have distinct (non-intersecting) “Monday” and “Tuesday” outcomes, it is expected (in the informal sense, outside the broken model) that you’ll observe Tuesday after observing Monday; yet upon observing Monday you rule out Tuesday, and upon observing Tuesday you won’t be able to recognize it as such, because it has already been ruled out. “Observer-moments” can be located on the same history, and a probability space that distinguishes them will tear down your understanding of the other observer-moments once you’ve observed one of them and excluded the rest. This model promises you a map disconnected from reality.
It is not the case, with a probability space based on possible worlds, that after concluding ~X you expect (in the informal sense) to observe X later. The possible-worlds model is in accordance with this (informal) axiom; a sample space based on “observer-moments” is not.
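The complaint in the last two paragraphs can be made concrete with a small sketch (my own construction, with hypothetical names, not anything from the thread): build a distribution over observer-moments, condition on “not Tuesday”, and then see what happens when a Tuesday observation has to be absorbed the next day.

```python
def condition(dist, event):
    """Bayesian update: zero out outcomes not in `event`, then renormalize."""
    total = sum(p for outcome, p in dist.items() if outcome in event)
    if total == 0:
        raise ZeroDivisionError("conditioning on a probability-zero event")
    return {o: (p / total if o in event else 0.0) for o, p in dist.items()}

# A uniform distribution over the three observer-moments.
moments = {"monday-heads": 1 / 3, "monday-tails": 1 / 3, "tuesday-tails": 1 / 3}

# Monday: the agent learns "it's not Tuesday" and updates.
monday_view = condition(moments, {"monday-heads", "monday-tails"})
print(monday_view["tuesday-tails"])  # 0.0 -- Tuesday is now marked "impossible"

# Tuesday arrives: the posterior has no probability mass left with which
# to interpret the observation.
try:
    condition(monday_view, {"tuesday-tails"})
except ZeroDivisionError as e:
    print(e)  # prints: conditioning on a probability-zero event
```

In a possible-worlds sample space this failure never arises, because “Monday” and “Tuesday” are stages of the same history within one outcome, not competing outcomes to be eliminated.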