Sleeping Beauty Resolved (?) Pt. 2: Identity and Betting
Introduction
This is a followup to my previous article, Sleeping Beauty Resolved? Some objected to the solution I presented (building on Radford Neal’s analysis) on the grounds that it runs afoul of a Thirder betting argument:
If Beauty uses anything other than a probability of exactly 1/3 for Heads, she will accept certain bets she should not, and reject others she should accept.
Alas, there is an alternative Halfer betting argument that makes the same claim, but replacing 1/3 with 1/2.
I’ll show that both arguments are wrong, as they get the effective payoffs wrong; but if Beauty uses the correct effective payoffs, together with a probability of $\frac{1}{3 - q(y)}$ for Heads (the quantity derived in Part 1, and rederived below), she makes the right betting decisions.
To get there requires addressing some questions of unique identity, so that’s where I’ll start.
Indexicals and Identity
Thirder arguments often make use of statements such as
today is Monday
and
today is Tuesday,
treating them as mutually exclusive propositions. Probability theory is based on classical propositional logic, and deals exclusively with classical propositions; are the above legitimate classical propositions?
I argue that they are not. The word “today” is problematic; it is an indexical, which the article “Demonstratives and Indexicals” in the Internet Encyclopedia of Philosophy defines as
...any expression whose content varies from one context of use to another. The standard list of indexicals includes adverbs such as “now”, “then”, “today”, “yesterday”, “here”, and “actually”.
The article furthermore remarks,
Indexicals and demonstratives raise interesting technical challenges for logicians seeking to provide formal models of correct reasoning in natural language...
and goes on to discuss various efforts to construct logics appropriate for reasoning with indexicals.
Clearly indexicals pose a problem for classical logic, else there would be no interest in constructing alternative logics to deal with them. As Richard Epstein writes in Classical Mathematical Logic: The Semantic Foundations of Logic,
...we cannot take sentence types as propositions if we allow the use of indexicals in our reasoning
(p. 4). This is because the meaning of a classical proposition must be definite and stable:
When we reason together, we assume that words will continue to be used in the same way… We will assume that throughout any particular discussion equiform words will have the same properties of interest to logic. We therefore identify them and treat them as the same word...
...if we accept this agreement, we must avoid words such as ‘I’, ‘my’, ‘now’, or ‘this’, whose meaning or reference depends on the circumstances of their use. Such words, called indexicals, play an important role in reasoning, yet our demand that words be types requires that they be replaced by words that we can treat as uniform in meaning or reference throughout a discussion.
(p. 3). In short:
Every usage of the same proposition in an argument / analysis must mean the same thing and have the same true/false value in all contexts within the scope of the analysis.
The important point is to ensure that any temporal (or spatial, etc.) reference is uniquely defined. If we are having a face-to-face conversation and I use the word “now,” it’s clear that means the specific, well-defined point in time at which I utter that word. If I refer to “the day on which Julius Caesar took his first sip of wine,” then even though nobody knows what day that was, I have uniquely identified a particular day—there cannot be two such days.
But when Beauty asks, “Is today Monday?”, how does she identify which “today” she means? Can she find some uniquely identifying descriptor that unambiguously distinguishes “today” from “the other day”?
Maybe. If the experimenters randomly choose to put a black marble on her night stand one day, and a white marble the other day, and Beauty knows this, then as soon as she glances at the night stand and sees (say) a black marble, she can then say that “today” means “the day on which there is a black marble on the nightstand.” In this case the usual Thirder arguments hold, and she gets a probability for Heads of 1/3.
But if Beauty is an AI and her entire state of mind and stream of experiences from Monday are exactly reproduced on Tuesday, then the term “today” is inescapably ambiguous—Beauty has no way of uniquely identifying “today”. Thirder arguments based on using “today is Monday” and “today is Tuesday” as mutually exclusive propositions are invalid in this case. As shown previously, the Halfer argument of “no new relevant information” applies in this case, and Beauty gets a probability of 1/2 for Heads.
In the intermediate case, where $y$ is the stream of perceptions Beauty has experienced since awakening and $q(y)$, $0 \le q(y) \le 1$, is the probability of experiencing the identical stream of perceptions at some time on the other day, “today” is partially identified as “the day on which Beauty experiences the stream of perceptions $y$.” (The closer $q(y)$ is to zero, the more probable it is that $y$ uniquely identifies “today.”) We then get a probability for Heads that is intermediate between the Halfer and Thirder positions:
$$\Pr(H \mid y) = \frac{1}{3 - q(y)}.$$
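This formula is easy to spot-check numerically. Below is a minimal Monte Carlo sketch (not from the original analysis); the correlation model for the two days is an assumption of mine, chosen only so that the per-day probability p of experiencing y and the matching probability q can be set independently:

```python
import random

def trial(p, q):
    """One run of the experiment: returns (heads, y_experienced_on_some_awakening)."""
    heads = random.random() < 0.5
    y_mon = random.random() < p          # Beauty's stream on Monday happens to be y
    if heads:
        return heads, y_mon              # under Heads she wakes only on Monday
    # Under Tails, Tuesday's stream matches Monday's y with probability q;
    # the second branch keeps Tuesday's marginal probability of y equal to p.
    y_tue = random.random() < (q if y_mon else p * (1 - q) / (1 - p))
    return heads, (y_mon or y_tue)

def p_heads_given_y(p, q, n=1_000_000):
    heads_count = total = 0
    for _ in range(n):
        heads, saw_y = trial(p, q)
        if saw_y:                        # condition on "Beauty experiences y"
            heads_count += heads
            total += 1
    return heads_count / total

for q in (0.0, 0.5, 1.0):
    print(q, round(p_heads_given_y(0.01, q), 3), 1 / (3 - q))
# prints roughly 0.333, 0.4, and 0.5, matching 1/(3 - q)
```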
If You Already Know What Your Conclusion Will Be...
This issue of unique identity answers a question @travisrm89 asked:
How can receiving a random bit cause Beauty to update her probability, as in the case where Beauty is an AI? If Beauty already knows that she will update her probability no matter what bit she receives, then shouldn’t she already update her probability before receiving the bit?
This question references the special case where Beauty is an AI whose only sensory input after awakening on Monday/Tuesday is a sequence of random bits. I showed that her probability of Heads before receiving any bits is 1/2, and after receiving the first bit this falls to 2/5, no matter which bit is received. The above argument is also one a Halfer could use against the Thirder position: if Beauty already knows on Sunday that her probability for Heads is going to be 1/3 on Monday, why isn’t that already her probability for Heads?
To answer this, let’s consider where the principle travisrm89 invokes comes from. Let $O_0$ and $O_1$ be two mutually exclusive and exhaustive propositions, with $O_i$ meaning “the next observation is $i$.” If our probability for $H$ updates to the same value $p$ regardless of what we observe—that is, if $\Pr(H \mid O_i) = p$ for both—then
$$\Pr(H) = \Pr(O_0)\Pr(H \mid O_0) + \Pr(O_1)\Pr(H \mid O_1) = p\,\bigl(\Pr(O_0) + \Pr(O_1)\bigr) = p,$$
which says that our probability for $H$ should already be $p$.
But the new information Beauty has after receiving one random bit doesn’t fit the above pattern. In this special case of the problem her new information is
$$X_2(i) \equiv B_{\mathrm{Mon}}(i) \lor B_{\mathrm{Tue}}(i)$$
for some $i \in \{0, 1\}$, where $B_d(i)$ means “the first bit Beauty receives on day $d$ is $i$” (and $M$ below denotes her background knowledge of the setup). Significantly, the two propositions $X_2(0)$ and $X_2(1)$ are not mutually exclusive—both are true if the coin lands Tails and Beauty receives different first bits on Monday and Tuesday. Thus we have an exhaustive but not mutually exclusive set of possibilities, and the sum of their probabilities exceeds 1:
$$\Pr(X_2(0) \mid M) + \Pr(X_2(1) \mid M) = \frac{5}{8} + \frac{5}{8} = \frac{5}{4}.$$
Therefore,
$$\Pr(H \mid M) = \sum_i \Pr(X_2(i) \mid M) \cdot p = \frac{5}{4} \cdot \frac{2}{5} = \frac{1}{2}, \qquad p = \Pr(H \mid X_2(i), M) = \frac{2}{5},$$
which is why we can have $\Pr(H \mid X_2(i), M) = \frac{2}{5}$ for both values of $i$ even though $\Pr(H \mid M) = \frac{1}{2}$.
It is only in the case of $q(y) = 1$—Beauty’s experiences on the two days are identical, so the days are entirely indistinguishable—that the sum of observation probabilities is 1, and the prior and posterior probabilities are the same.
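For concreteness, the one-bit numbers above can be verified by exact enumeration (a small sketch using Python’s fractions module; by symmetry $\Pr(X_2(1) \mid M) = \Pr(X_2(0) \mid M)$):

```python
from fractions import Fraction
from itertools import product

p_x20 = Fraction(0)        # P(X2(0)): first bit is 0 on at least one awake day
p_h_and_x20 = Fraction(0)  # P(H and X2(0))

# Enumerate the coin and the first bit that would be sent on each day.
for heads, b_mon, b_tue in product((True, False), (0, 1), (0, 1)):
    w = Fraction(1, 8)     # each joint outcome is equally likely
    # Under Heads only Monday's bit is received; under Tails, both days' bits are.
    if b_mon == 0 or (not heads and b_tue == 0):
        p_x20 += w
        if heads:
            p_h_and_x20 += w

print(p_x20)                # 5/8
print(p_h_and_x20 / p_x20)  # P(H | X2(0)) = 2/5
print(2 * p_x20)            # P(X2(0)) + P(X2(1)) = 5/4 > 1
```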
Betting Arguments
I’ll start with the part everybody agrees on:
Suppose that Beauty is offered a bet with a payoff of $a$ if the coin lands Heads, and a payoff of $b$ if the coin lands Tails. Either of these payoffs can be negative, in which case it is a loss. (The interesting cases are where one is positive and the other negative.) If the coin lands Tails she is offered the bet on both Monday and Tuesday. Since Beauty assesses the same probability of Heads on both Monday and Tuesday, she will either accept the bet both times or reject it both times. Thus the “objective” expected payout if she accepts the bet is
$$\frac{1}{2}a + \frac{1}{2}(2b) = \frac{1}{2}(a + 2b).$$
Beauty should accept the bet if, and only if, this quantity is positive, i.e., if and only if $a + 2b > 0$. This is the case, for example, with $a = -3$ and $b = 2$.
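As code, the acceptance rule is one line (a minimal sketch; the payoff values below are just the illustrative ones above, not part of the problem statement):

```python
def objective_expected_payout(a, b):
    """Heads pays a once; Tails pays b on each of two awakenings."""
    return 0.5 * a + 0.5 * (2 * b)

print(objective_expected_payout(-3, 2) > 0)  # True:  -3 + 2*2 = 1 > 0, accept
print(objective_expected_payout(-5, 2) > 0)  # False: -5 + 2*2 = -1 < 0, reject
```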
The standard Thirder betting argument goes like this:
If Beauty assesses a probability $\pi$ for Heads after awakening on Monday/Tuesday, then she computes an expected payout of
$$\pi a + (1 - \pi)\,b,$$
and will accept the bet if and only if this is positive. If $\pi = \frac{1}{3}$ this equals $\frac{1}{3}(a + 2b)$, which is identical to the “objective” expected payout (up to a constant, positive factor of $\frac{2}{3}$), and so she will make the correct decision to accept or reject the bet, whatever the payoffs used.
The Halfer counterargument [reference?] goes like this:
Beauty knows full well that, if the coin lands Tails, she is going to compute the same probability of Heads on both Monday and Tuesday, and that she will therefore make identical decisions on those two days, and obtain identical outcomes. So in that case she is making a decision for two days, not just one. Therefore she should compute her expected payout as
$$\pi a + (1 - \pi)(2b),$$
and accept the bet only if this quantity is positive. If $\pi = \frac{1}{2}$ this equals $\frac{1}{2}(a + 2b)$, exactly the objective expected payout, and so she will make the correct decision to accept or reject the bet, whatever the payoffs used.
There are elements of truth to both positions. Any decision rule, including the rule that one should maximize expected gain, is a function that maps the available information to a recommended action. In the SB problem the available information is Beauty’s background knowledge about the experimental setup, plus the stream of perceptions $y$ she has experienced since awakening. If the same stream of perceptions occurs on both days, Beauty’s decision rule must give the same action both days, and so in this circumstance her payout for Tails is $2b$. However, if Beauty does not experience the stream of perceptions $y$ on the other day, then in principle her decision rule could give different actions for the two days, and her payout for Tails is just $b$.
Given that Beauty experiences $y$, we then have three cases with three different possible payouts:
A: The coin lands Heads, and Beauty experiences $y$ on Monday. Payoff is $a$. Using the notation of Part 1 (where $p(y)$ is the probability of experiencing the stream $y$ on any single awakening), the prior probability is $\frac{1}{2}p(y)$.
B: The coin lands Tails, and Beauty experiences $y$ on either Monday or Tuesday, but not both. Payoff is $b$. The prior probability is $p(y)\,(1 - q(y))$.
C: The coin lands Tails, and Beauty experiences $y$ on both Monday and Tuesday. Payoff is $2b$. Prior probability is $\frac{1}{2}p(y)\,q(y)$.
The sum of these three prior probabilities is
$$\tfrac{1}{2}p(y) + p(y)(1 - q(y)) + \tfrac{1}{2}p(y)q(y) = \tfrac{1}{2}p(y)\,(3 - q(y)),$$
and so their posterior probabilities, given that Beauty experiences $y$, are
A: $\dfrac{1}{3 - q(y)}$
B: $\dfrac{2\,(1 - q(y))}{3 - q(y)}$
C: $\dfrac{q(y)}{3 - q(y)}$
Combining the posterior probabilities with the payoffs, we get an expected gain of
$$\frac{a + 2(1 - q(y))\,b + 2q(y)\,b}{3 - q(y)} = \frac{2}{3 - q(y)}\left(\frac{1}{2}a + \frac{1}{2}(2b)\right),$$
which is identical to the objective expected payout, up to the positive, constant factor $\frac{2}{3 - q(y)}$. So Beauty will make the correct decision to accept or reject the bet, whatever the payoffs used.
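The proportionality claim can be checked with exact rational arithmetic (a minimal sketch; the payoff pairs are arbitrary test values of mine):

```python
from fractions import Fraction

def objective(a, b):
    # One payout of a on Heads, or two identical payouts of b on Tails.
    return Fraction(1, 2) * a + Fraction(1, 2) * (2 * b)

def subjective(a, b, q):
    # Posteriors of cases A, B, C given that Beauty experiences y,
    # combined with the effective payoffs a, b, and 2b.
    denom = 3 - q
    p_a, p_b, p_c = Fraction(1) / denom, 2 * (1 - q) / denom, q / denom
    return p_a * a + p_b * b + p_c * (2 * b)

for q in (Fraction(0), Fraction(1, 2), Fraction(1)):
    for a, b in ((-3, 2), (3, -2), (-2, 1), (2, -1)):
        assert subjective(a, b, q) == Fraction(2) / (3 - q) * objective(a, b)
print("subjective gain = 2/(3 - q(y)) * objective gain in every tested case")
```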
I think your criticism that the usual thirder arguments fail to properly apply probability theory misses the mark. The usual thirder arguments correctly avoid using the standard formalization of probability, which was not designed with anthropic reasoning in mind. It is usually taken for granted that the number of copies of you that will be around in the future to observe the results of experiments is fixed at exactly 1, and that there is thus no need to explicitly include observation selection effects in the formalism. Situations in which the number of future copies of you around could be 0, and in which this is correlated with hypotheses you might want to test, do occur in real life, but they are rare enough that they did not influence the development of probability theory. The fact that you can fit anthropic reasoning into a formalism that was not designed to accommodate it by identifying your observation with the existence of at least one observer making that observation, but that there is some difficulty fitting anthropic reasoning into the formalism in a way that weights observations by the number of observers making that observation, is not conclusive evidence that the former is an appropriate thing to do.
Imagine a world where people getting split into multiple copies and copies getting deleted is a daily occurrence, in which most events will correlate with the number of copies of you around in the near future, so that these people would not think it makes sense to assume for simplicity that the hypotheses they are interested in are independent of the number of copies of them that will be around. How would people in this world think about probability? I suspect they would be thirders. Or perhaps they would be halfers, or this would still be a controversial issue for them, or they’d come up with something else entirely. But one thing I am confident they would not do is think about what sources of randomness are available to them that might make different copies have slightly different experiences. This behavior is not useful. Do you disagree that people in this world would be very unlikely to handle anthropic reasoning in the way you suggest? Do you disagree that people in this world are better positioned to think about anthropic reasoning than we are?
I don’t find your rebuttal to travisrm89 convincing. Your response amounts to reiterating that you are identifying observations with the existence of at least one observer making that observation. But each particular observer only sees one bit or the other, and which bit it sees is not correlated with anything interesting, so all the observers finding a random bit shouldn’t do any of them any good.
And I have a related objection. The way of handling anthropic reasoning you suggest is discontinuous, in that it treats two identical copies of an observer much differently from two almost identical copies of an observer, no matter how close the latter two copies get to being identical. In an analog world, this makes no sense. If Sleeping Beauty knows that on Monday, she will see a dot in the exact center of her field of vision, but on Tuesday, if she wakes at all, she will see a dot just slightly to the right of the center of her field of vision, this shouldn’t make any difference if the dot she sees on Tuesday is far too close to the center for her to have any chance of noticing that it is off-center. But there also shouldn’t be some precise nonzero error tolerance that is the cutoff between “the same experience” and “not the same experience” (if there were, “the same” wouldn’t even be transitive). A sharp distinction between identical experiences and similar experiences should not play any role in anthropic reasoning.
1. Logic, including probability theory, is not observer-dependent. Just as the conclusions one can obtain with classical propositional logic depend only on the information (propositional axioms) available, and not on any characteristic or circumstance of the reasoner, epistemic probabilities also depend only on the information available. Logic—including probability theory—was designed to be fully general. If you want to argue that probability theory is not, in its standard formulation, suitable for anthropic reasoning, you need to point out the specific points in its rationale that are incompatible with anthropic effects. As I have shown (preprint), all you have to assume to get probability theory from classical propositional logic is that certain properties of propositional logic are retained in the extended logic.
2. No, neither classical logic nor probability theory as the extension of classical propositional logic assumes anything about observers, or their numbers, or experiments, or what may happen in the future.
3. Selection effects are routinely handled within the framework of standard probability theory. You don’t need to go beyond standard probability theory for this.
Right, probability theory itself makes no mention of observers at all. But the development of probability theory and the way that it is applied in practice were guided by implicit assumptions about observers.
You seemed to argue in your first post that selection effects were not routinely handled within standard probability theory. Unless perhaps you see a significant difference between the selection effect that suggests that the coin has a 1⁄3 chance of having landed heads in the Sleeping Beauty problem and other selection effects? I was attempting to concede for the sake of argument that accounting for selection effects as typically practiced departs from standard probability theory, not to advance it as an argument of my own.
Certainly agreed as to logic (which does not include probability theory). As for probability theory, it should not be a priori surprising if a formalism that we had strong intuitive reasons to regard as very general, in which we made certain implicit assumptions about observers (which do not appear explicitly in the formalism) in these intuitive justifications, turned out not to be so generally applicable in situations in which those implicit assumptions were violated. As for whether probability theory does actually lack generality in this way, I’m going to wait to address that until you clarify what you mean by applying standard probability theory, since you offered a fairly narrow view of what this means in your original post, and seemed to contradict it in your point 3 in the comment. My position is that “the information available” should not be interpreted as simply the existence of at least one agent making the same observations you are, while declining to make any inferences at all about the number of such agents (beyond that it is at least 1). I take no position on whether this position violates “standard probability theory”.
I don’t think that’s true, but even if it is an accurate description of the history, that’s irrelevant—we have justifications for probability theory that make no assumptions whatsoever about observers.
No, I argued that this isn’t a case of selection effects.
Why are you ignoring what I wrote about proofs that probability theory is either a or the uniquely determined extension of classical propositional logic to handle degrees of certainty? That places probability theory squarely in the logical camp. It is a logic.
No, we made no such implicit assumptions. There are no assumptions, implicit or otherwise, about observers at all. If you think otherwise, show me where they occur in Cox’s Theorem or in my theorem.
I have no idea what you’re talking about here.
Um, there’s only one agent here, but if by “agent” you mean the pair (person, day), then the above is just wrong—it’s very clearly part of the model that if the coin comes up Heads, there is exactly one day on which the remembered observations could be made, and if the coin comes up Tails, there are exactly two days on which the remembered observations could be made. I even worked out the probabilities that the observations occurred on just Monday, just Tuesday, or both Monday and Tuesday.
Listen, if you want to argue against my analysis, you need to
1. Propose a different model of what Beauty knows on Sunday, and/or
2. Propose a different proposition that expresses the additional information Beauty has on Monday/Tuesday and that accounts for her altered probabilities. This proposition should be possible to sensibly state and talk about on Sunday, Monday, Tuesday, or Wednesday, by either Beauty or one of the experimenters, and mean the same thing in all these cases.
I’m not sure how people would reason if people were often duplicated. Lots of issues would need to be addressed once that is common. Are two duplicates that have had identical experiences actually different people (and count twice in moral calculations, for instance)? It seems just as reasonable to count duplicates separately only if they have had different experiences. And in most scenarios with ems or AIs, duplication capability would go with the ability to completely control the duplicate’s experiences, making it hard to see how the duplicate can have any justified knowledge of the external world.
But Sleeping Beauty is not a problem with such characteristics. As described, it is only mildly-fantastical, with perfect memory erasure (which needn’t actually be perfect, just very good) the only unusual feature. So it should be possible to solve it using the tools used for reasoning about ordinary situations, and one would expect the answer obtained that way to still be correct according to some hypothetical more general theory of inference that might be devised in the future, just as the answers to problems solved 200 years ago using Newtonian mechanics are still regarded as correct today, despite the subsequent developments of relativity and quantum mechanics.
The questions of whether two duplicates are actually different people and of whether they count twice in moral calculations are different questions, and would likely be answered differently. People often answer these questions differently in the real world: people are usually said to remain the same person over time, but I think if you ask whether it is better to improve the daily quality of life of someone who’s about to die tomorrow, or improve the daily quality of life of someone who will go on to live a long life by the same amount, I think most people would agree that the second one is better because the beneficiary will get more use out of it, despite each time-slice of the beneficiary benefiting just as much in either case. Anyway, I was specifically talking about the extent to which experiences being differentiated would influence the subjective probability beliefs of such people. If they find it useful to assign probabilities in ways that depend on differentiating copies of themselves, this is probably because the extent to which they care about future copies of themselves depends on how those copies are differentiated from each other, and I can’t see why they might decide that the existence of a future copy of them decreases the marginal value of an additional identical copy down to 0 while having no effect on the marginal value of an additional almost identical copy.
Sleeping Beauty may be less fantastical, but it is still fantastical enough that such problems did not influence the development of probability theory. As I said, even testing hypotheses that correlate with how likely you are to survive to see the result of the test are too fantastical to influence the development of probability theory, despite such things actually occurring in real life. My point was that people who see Sleeping Beauty-like problems as a normal part of everyday life would likely have a better perspective on the problem than we do, so it might be worth trying to think from their perspective. The fact that Sleeping Beauty-type problems being normal is more fantastical than a Sleeping Beauty-type problem happening once doesn’t change this.
“My point was that people who see Sleeping Beauty-like problems as a normal part of everyday life would likely have a better perspective on the problem than we do”
Yes, I agree.
“so it might be worth trying to think from their perspective.”
Yes, it might. But I think we shouldn’t expect to be very successful in this attempt. So if trying to do that gives a result that contradicts ordinary reasoning, which really ought to suffice for Sleeping Beauty, then we’re probably not thinking from their perspective very well.
I agree that it is difficult to see things from the perspective of people in such a world, but we should at least be able to think about whether certain hypotheses about how they’d think are plausible. That may still be difficult, but ordinary reasoning is not easy to do reliably in these cases either; if it was, then presumably there would be a consensus on how to address the Sleeping Beauty problem.
Your analysis looks correct to me. But I can’t say I’m sure that it’s correct when either p(y) or q(y) is not close to zero.
That’s because such situations are fantastical, and perhaps even impossible in principle. One general point I make in my anthropic reasoning paper (http://www.cs.utoronto.ca/~radford/anth.abstract.html) is that reasoning about fantastical thought experiments is dangerous—it’s very easy when doing such reasoning to assume common-sense things that are not true given the fantastical premise.
In this regard, the usual Sleeping Beauty problem is only mildly fantastical—it assumes perfect memory erasure. We have ordinary experience of forgetting things, and occasionally we forget a whole episode, when drunk or after a head injury. So it’s not too outside our common experience. And it’s not really necessary to assume PERFECT memory erasure. If Beauty has some small probability of remembering something vague, maybe from Monday, that she takes as slight evidence that it is currently Tuesday, that will have only a slight effect on her probabilities, or on her success at betting.
As I remark in my paper, though, discussion of Sleeping Beauty often slides with little or no comment into a highly fantastical version of the problem in which Beauty’s thoughts and experiences on Tuesday (if she is woken then) are ABSOLUTELY IDENTICAL to her thoughts and experiences on Monday, resulting in q(y)=1, and also in a certainty that she makes the same decisions on Tuesday as on Monday. It is not clear that this scenario is even possible. The only apparent way to do it would be to clone the state of Beauty’s mind before Monday, and then restore it for Tuesday. But quantum states cannot be cloned, so this is not possible in an absolute sense. It might be possible in some practical sense, but then there will be no absolute guarantee that Beauty will make the same decisions on Monday and Tuesday.
This fantastical scenario also raises other issues, starting with its assumption that the functionalist theory of mind is correct. Although one might not realize it from discussions at places like lesswrong, it has not actually been proven that a Turing-equivalent computer can simulate a mind. Assuming it can, however, we get into the issue that if Beauty is an AI whose inputs can be completely controlled by an outside entity so as to produce identical experiences on Monday and Tuesday, what basis does Beauty have for believing ANYTHING about the external world? I don’t mean to get into a discussion of such issues here, but only to point out that the apparently small change of “let’s just assume for simplicity that Beauty’s experiences are the same on Tuesday as on Monday” turns a mildly-fantastical problem that can be analysed with common-sense intuitions into one where deep and controversial philosophical issues cannot be avoided. The highly-fantastical version may well be interesting, but discussion of it should start by admitting that it is completely different from the Sleeping Beauty problem as normally stated, in which both p(y) and q(y) are very close to zero, for which I think the Thirder answer is quite definitely correct.
You said “Discussion of Sleeping Beauty often slides [into a] problem in which Beauty’s thoughts and experiences on Tuesday (if she is woken then) are ABSOLUTELY IDENTICAL to her thoughts and experiences on Monday.” No, they don’t. They assume that nothing in the set of experiences can change the assessment of what day it currently is. But that, of course, requires one to recognize that “today” is a valid random variable.
Well, actually, they often do. To quote from my paper:
For example, Lewis (2001, p.171) in describing the problem says, “… the memory erasure on Monday will make sure that her total evidence at the Tuesday awakening is exactly the same as at the Monday awakening”, and Elga (2000, p.145) says, “We may even suppose that you knew at the start of the experiment exactly what sensory experiences you would have upon being awakened on Monday”, which in the context of the problem would require also assuming that these experiences are identical to those on Tuesday (if one is woken then).
Elga, A. (2000) “Self-locating belief and the Sleeping Beauty problem”, Analysis, vol. 60, pp. 143–147.
Lewis, D. (2001) “Sleeping Beauty: reply to Elga”, Analysis, vol. 61, pp. 171–176.
Lewis says that the evidence that it is Monday or Tuesday is identical, not the totality of her thoughts and experiences is identical. A window and rain on only one of the days constitutes different experiences, but requires knowledge of the weather forecast to extrapolate that difference into evidence.
The context you omitted from the Elga quote was comparing Sunday’s knowledge to Monday’s, with no mention of Tuesday. He even added a footnote: “To say that an agent receives new information (as I shall use that expression) is to say that the agent receives evidence that rules out possible worlds not already ruled out by her previous evidence.” His point was that she does not receive new information when she is woken on Monday.
But I had two points. First, the problem statement specifically says that Beauty has no evidence. Lewis’ statement is describing this result; how it might be achieved is irrelevant to the thought experiment. Second, any analysis that uses such evidence is treating the propositions “Today is Monday” and “Today is Tuesday” as logically valid statements, even before evidence from her different thoughts and experiences accumulates. So you can’t also claim that they aren’t logically valid.
Distinguishing “total evidence” from “totality of her experiences” is possible only if the way of computing probabilities from experiences is uncontroversial, and has the property that some experiences are irrelevant, but of course this isn’t uncontroversial—that’s the whole point of the problem. Elga may not have mentioned Tuesday, but his assertion that Monday and Tuesday can’t be distinguished together with his assertion that Monday’s experiences are already known on Sunday logically imply that the experiences on Monday and Tuesday are identical. Elga’s footnote implies that whether or not your nose is itching is new information, since an itching nose rules out possible worlds in which your nose does not itch.
When the problem statement “specifically says that Beauty has no evidence”, this does not, of course, mean that the problem statement fixes the probability of heads at 1⁄2 by fiat. That would render the problem completely uninteresting. It of course means that Beauty’s experience on Monday and (if woken) Tuesday do not provide any information that would ordinarily allow her to tell which day it is. It is not meant to convert the problem to one in which Beauty MUST be an AI whose perceptions are controlled by the experimenter so that they are exactly the same on Monday and Tuesday, with a human Beauty ruled out by the problem definition. If that is what Elga meant, surely he would have mentioned that Beauty can’t be human at some point?
Sleeping Beauty purports to be an only mildly-fantastic thought experiment—before discussion sometimes slides in a highly-fantastic, perhaps impossible, direction. Mildly-fantastic thought experiments can be interesting. Highly-fantastic thought experiments are less interesting, because it is unclear whether they are possible, and if they are not completely impossible, what they might say about ordinary experience. Whether the conditions of a thought experiment can be achieved in less than highly-fantastical manner is not at all irrelevant to whether a thought experiment is interesting, or to its possible solution.
My analysis makes no use of propositions such as “Today is Monday”. I don’t know why you keep bringing up this issue.
It is consensus on how one uses experiences as evidence, not the usage itself, that is only possible if the method is uncontroversial. Controversy just means that two people see its applicability differently. Not that it is impossible to use, or that either is correct or incorrect.
But neither Lewis, nor Elga, say anything about using Beauty’s experiences during the day of an awakening as evidence. Elga’s footnote is defining what he means by “new information,” which we are calling “evidence.” He never relates it to experiences during the day, only to experiences inherited from Sunday at the start of a day. So while Beauty’s nose may itch, she has no idea why that itch should be more, or less, likely on Monday than on Tuesday. And how fantastic, or mundane, the experiment is, is completely irrelevant. We are told to assume that nothing affects Beauty’s ability to distinguish Monday from Tuesday.
Removing hyperbole and the double negative from your compound sentence, it seems you said “[How] the conditions of a thought experiment can be achieved is relevant to its possible solution.” I canceled the double negative by changing “is not at all irrelevant” to “is relevant.” If I misinterpreted that, or you misstated your thought, I’m sorry—please correct me. But I disagree completely with that sentiment. A thought experiment is used when its conditions are not easily achievable in the real world, and applies only to the ideal conditions it describes.
I am not discussing your analysis, I am discussing KSVANHORN’s. He says that the use of the word “today” is “problematic.” He argues that evidence that identifies a day with the use of “today” is needed in order for Beauty to use it. This is incorrect. Her “today” refers to the moment when she uses the word, and the fact that she does not know the current moment may not prevent it from having a valid, logical meaning as a random variable with a probability distribution.
The issue in the Sleeping Beauty Problem is when that prior is evaluated. Halfers want it to be evaluated on Sunday Night, where KSVANHORN is correct: “today is Monday” and “today is Tuesday” are not valid propositions, let alone mutually exclusive propositions. My point is that her prior is just before she was awakened, where they are. This leads easily to the answer “1/3.”
But I understand why that may not be easy to accept. That’s why I have presented two alternative, but equivalent, formulations of the problem. They both lead to the inescapable answer that her belief should be 1⁄3.
“neither Lewis, nor Elga, say anything about using Beauty’s experiences during the day of an awakening as evidence”
Yes, they don’t realize that that is relevant information. They’re mistaken. That’s not particularly unusual for philosophers.
“A thought experiment is used when its conditions are not easily achievable in the real world, and applies only to the ideal conditions it describes”
Surely you would agree that a thought experiment is uninteresting if the conditions for it are actually impossible? And don’t you think that it becomes less interesting if the conditions may be impossible (but we’re not sure)? Finally, if the conditions are possible only when in some highly unusual (but let us suppose possible) situation, you have to be very careful to not argue while assuming things that are normally true but are not true in such unusual situations. Situations in which people can anticipate the previous day what all their experiences will be the next day are so unusual that I have no confidence in my ability to reason about what such “people” should believe. I doubt that many people would find the Sleeping Beauty problem interesting if it were made clear that Beauty has to be a “person” of this sort.
I’m not trying to explicate ksvanhorn’s argument. I myself think that there is no need to use words like “today” for this problem. Perhaps it’s OK to do so, but since it isn’t necessary, I think people should just rephrase what they want to say while avoiding the word.
I don’t understand your argument for “1/3”. Of course, I agree that 1⁄3 is the correct answer, in the normal non-very-fantastical version of the problem, but that doesn’t mean that all arguments for 1⁄3 are correct.
“[Elga and Lewis] don’t realize that that is relevant information. They’re mistaken.” They are not. The very premise of the problem is that it cannot be relevant. The same reasoning suggests we don’t need to accept that the coin is fair, or that Beauty might wake on Tuesday after Heads.
“Surely you would agree that a thought experiment is uninteresting if the conditions for it are actually impossible?” I absolutely would not. There is no coin, or a methodology for flipping one, that produces exactly 50%. In fact, if we could achieve the level of detail you try to with Beauty, a coin flip is deterministic.
And the conditions that we assume for nearly all of mathematics are “actually impossible.” There are no dimensionless points, and no two lines have the exact same length. You can even debate whether the numbers “i”, “-7”, “pi,”, or even “23” actually exist. See https://www.quora.com/Does-infinity-exist-If-it-exists-then-what-is-it .
The point of mathematics is to postulate an ideal circumstance, and deduce what happens in that circumstance regardless of whether it is “actually possible.” Even philosophers know this: “In pure mathematics, actual objects in the world of existence will never be in question, but only hypothetical objects having those general properties upon which depends whatever deduction is being considered.” Bertrand Russell, from the preface to Principles of Mathematics, page XLV.
The reason it is interesting, even if you restrict yourself to “actually possible” conditions, is that the “actual” answer is derived from the ideal one.
“I myself think that there is no need to use words like ‘today’ for this problem.” I don’t think it is possible to address it without it, or some substitute that performs the same indexing. And that saying there is no need, is affirming the consequent: If the solution you want to be true does not distinguish the days, then distinguishing the days is unnecessary. Regardless, if they are distinguishable, then we cannot go wrong by distinguishing them.
“I don’t understand your argument for ‘1/3’.”
A capsule of Argument 1: Beauty’s prior is not the state on Sunday, since that state does not describe a measure that can vary over the course of the experiment. It is the state just before she is wakened, which includes a variable for the current day. There are four equiprobable combinations of this variable, and the variable “coin.” One is eliminated because she is awake.
A capsule of a different version of Argument 2 (with Tim and Tom): On Sunday, Beauty places an imaginary, invisible coin that only she can find under her pillow. Since it is invisible, she doesn’t know if it is Heads or Tails. But when she wakes, she can find it, and flip it over. Now there are three random variables: the real coin, Sunday’s value for the imaginary coin, and the current value for the imaginary coin (These last two can be combined into one if you want). The eight possible combinations are again equiprobable, and two are eliminated.
A capsule of Argument 3: Four Beauties are used, based on the same coin flip (it must be flipped before Monday morning). Each will sleep through a different combination of the day and coin. Each is asked for her belief that the coin result is the one where she will sleep through a day.
One of these is in the exact experiment we are debating. The others’ experiments are functionally equivalent. Each has the same answer. If I am one of them, I know (A) that three are currently awake, (B) that one will sleep through a day and two will wake both days, and (C) I have no information about which of the three I am. My belief that I am the one who will sleep can only be 1⁄3.
You’re failing to distinguish between thought experiments that are only mildly-fantastic, like ones assuming perfectly fair coins, when real ones have (say) a 50.01% chance of landing heads, versus highly-fantastic thought experiments, such as ones assuming that on Sunday you know exactly, in complete detail, what all your experiences will be on Monday. If such prevision is impossible, then thought experiments assuming it are uninteresting. More to the point, such a fantastic thought experiment is NOT THE SAME as a somewhat similar thought experiment that makes no such fantastic assumption. Sleeping Beauty purports to be an only-mildly-fantastic thought experiment. If you want to talk about a fantastic Sleeping Beauty in which the mental properties of Beauty are such that she is a completely different entity than any human could possibly be, then go right ahead. But don’t try to maintain that this is the same as the original Sleeping Beauty problem. Whether Elga realized that making fantastic assumptions matters is irrelevant—it does matter.
Your Argument 1 doesn’t seem persuasive to me, because I don’t see how Beauty can be said to have prior beliefs when she is unconscious.
The only way your Argument 2 makes any sense to me is as a way of introducing new experiences for Beauty—albeit ones involving an imaginary (and invisible!) coin. If you just accepted that the problem involves an actual person, this would be unnecessary, since real people have new, unique experiences all the time. If we’re talking about a version of S.B. in which Beauty is required to be an AI whose experiences are completely controlled by the experimenter, then I don’t think imaginary invisible coins are going to settle the issue.
Your Argument 3 seems persuasive to me, but I’m not sure why you find it persuasive. If you are unwilling to admit that Beauty has any experiences other than the single fact that she is awake, then it seems dubious that there are actually three awake people rather than one on Monday. If an AI is simulated redundantly by three computers operating synchronously, which all function properly, and so produce exactly the same thoughts of the AI, are there three AIs or only one?
“You’re failing to distinguish between thought experiments that are only mildly-fantastic, like ones assuming perfectly fair coins, when real ones have (say) a 50.01% chance of landing heads, versus highly-fantastic thought experiments, such as ones assuming that on Sunday you know exactly, in complete detail, what all your experiences will be on Monday.”
I’m not failing to distinguish anything. I’m intentionally not bothering to distinguish what the problem statement says we should treat as indistinguishable. “While awake she obtains no information that would help her infer the day of the week.” Whether or not you think it is more realistic, the problem you are solving is not the Sleeping Beauty Problem.
And I’m not saying that Beauty’s experiences are the same. I’m just following the instructions in the problems statement, that says any information contained in the experiences of one day cannot be used to infer anything about the other.
And this is exactly what makes thought experiments interesting. Isolating one factor, and determining what its effect is when treated alone.
“Your Argument 1 doesn’t seem persuasive to me, because I don’t see how Beauty can be said to have prior beliefs when she is unconscious.” And similarly, the beliefs she can project during the experiment can be different than those she had on Sunday. If she can project back to a state in the past, why does it matter whether she was awake? (If you want a comparison, you seem to be saying that the Sailor’s Child can’t hold a belief about the coin that was flipped before he was born.)
If you want real experiences in Argument #2, go back and read the Tim and Tom version. I just get tired of people saying “you changed the problem” when all I did was introduce an element that instantiates the day without providing information, which is valid and necessary.
In #3, I have no problem with experiences that an external observer sees as differentiating the day, as long as Beauty can’t. The differences can exist, but provide Beauty with no information that identifies the day to her.
I think there’s no point in continuing this discussion. As far as I can tell, you are willing to admit that Beauty will have a variety of experiences on Monday, which will very probably be different from those she will have on Tuesday (if awake), although (by the conditions of the problem) she doesn’t know ahead of time how the two days will differ. But you are saying that she is NOT ALLOWED to use the fact that she has such experiences when reasoning about whether the coin landed heads. Debates in which one party to the discussion just says by fiat that the other party isn’t allowed to use certain forms of reasoning are pointless.
She is allowed any reasoning she wants to use. The condition explicitly stated in the thought problem (see https://en.wikipedia.org/wiki/Thought_experiment, for why we shouldn’t care about realism) is that experiences during the day will not help her to deduce what day it is, not that she can’t use it to determine her initial belief about the day or the coin.
What this means is that if $X_i$ represents her ordered experiences, with $X_0$ representing only the experience of waking up as defined by the experiment, then $\Pr(\text{Today} = \text{Monday} \mid X_{i+1}) = \Pr(\text{Today} = \text{Monday} \mid X_i)$ for all $i \ge 0$. Not that she can’t define $\Pr(\text{Today} = \text{Monday} \mid X_0)$.
But you are right, there is no point in continuing if you insist on violating the problem statement.
As you’ve defined it, the thought experiment is simply impossible to run in principle, since if you apply standard principles of probability, $\Pr(\text{Today} = \text{Monday} \mid X_{i+1})$ is NOT equal to $\Pr(\text{Today} = \text{Monday} \mid X_i)$. It’s like saying that as a condition of your thought experiment, arguments about the solution must assume that pi is exactly equal to three.
And the purpose of a thought experiment is to define how ideal concepts work when you can’t run them in principle. And strawman arguments do not change that.
First of all, I appreciate you trying to make more clear why you believe that “today is Monday” is not a legitimate classical proposition, in particular by linking to the article in IEP. I have skimmed the article although I may have missed something important.
Anyway, it seems to me that the issues that concerned the philosophers discussed in that article are mostly not really about whether classical logic is valid for indexicals. Classical logic as I understand it is just a bunch of assertions about propositions, like “for every proposition P, either P is true or P is false” and “for every proposition P, the proposition “P implies P” is true”. The only place where I have noticed a questioning of such assertions is in the discussion of the sentence (7). But to me this seems like it is really an analysis of a statement of the form “P implies Q”, and the question at issue is whether Q is really the same as P or not. In cases where P and Q are in fact the same, there is no doubt that “P implies Q” is a true statement.
In particular, in the Sleeping Beauty case, we can assume that she can complete any particular chain of reasoning in a reasonably short timeframe, in particular without crossing midnight. If this is the case, then it seems that all instances of “today is Monday” in her reasoning will in fact refer to the same proposition. Thus, I don’t see why there is a problem.
There would certainly be a problem if Sleeping Beauty used an argument which by necessity stretched out over several days, such that she was assuming “today is Monday” meant the same thing on both days. But the problems with such an argument seem so obvious that I cannot imagine a detailed explanation would be necessary to get anyone to see them.
In any case, here is one thing that seems odd about your/Neal’s theory. Suppose you have a universe in which there are necessarily Boltzmann brains giving every possible experience. However, we can assume that these brains represent an extremely small proportion of all brains. (Some people think that this is a description of our universe.) Then it seems that you can never update your probabilities based on evidence, because every evidence you see is of the form “there is a brain that has such-and-such sequence of experiences” which you already knew was a necessary truth. It looks like you can still get the decision theory to work out by taking into account the fact that you have more control over universes in which there are more brains giving the sequence of experiences, but it still seems that you are throwing out the entire concept of probability in this case.
In my paper, I discuss how Full Non-Indexical Conditioning seems to break down if the universe is so large that someone with identical memories as you has a non-negligible chance of existing elsewhere in the universe. Note that this requires a VERY large universe—the size of universe we can actually observe in telescopes isn’t enough.
I go on to argue that although I don’t know how to resolve this issue, I think it’s likely that it has no relevance when addressing non-cosmological problems such as Sleeping Beauty or the Doomsday Argument. Sleeping Beauty in particular is only mildly fantastical (memory erasure) and is otherwise a mundane issue of local behaviour in our part of the universe. I don’t see why its solution should depend on whether the universe is large, very large, very VERY large, or infinite. I expect that even if Full Non-Indexical Conditioning needs to be modified somehow to cope with really large universes, the modification will not change the result for Sleeping Beauty. It’s sort of how physicists in 1850 probably realized there were a few puzzles regarding light and Newtonian physics, but nevertheless thought, correctly, that the resolution of those puzzles wouldn’t change the answers to questions of when bridges will collapse.
I think a variation of my approach to resolving the betting argument for SB can also help deal with the very large universe problem. I’ve taken a look at the following setup:
There are N Experimenters scattered throughout the universe, where N is very, very large. Each Experimenter tries to determine which of two hypotheses A and B about the universe is correct by running some experiment and collecting some data. Let d be the data collected, and let y be the remaining information (experiences, memories) that could distinguish this Experimenter from others.
It is possible to choose N so large that the prior probability approaches one that there will be some Experimenter with that particular d and y, regardless of whether A or B is true. This means that the Experimenter’s posterior probability for A versus B will update only slightly from its prior probability.
And yet if the Experimenter has to make a choice based on whether A or B is true, and we weight the payoffs according to how many Experimenters there are with the same y and d (as done in my analysis for SB), then the maximum-expected-utility answer does not depend on N: from the standpoint of decision-making, we can ignore the possibility of all those other Experimenters and just assume N=1.
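Here is a minimal numeric sketch of that setup (the probabilities below are made up purely for illustration):

```python
import math

pdA, pdB = 0.8, 0.3   # P(an Experimenter's data is d | A), and given B
py = 1e-12            # P(an Experimenter's other experiences are exactly y)
prior_A = 0.5

def posterior_A(N):
    """P(A | at least one of N Experimenters has data d and experiences y)."""
    some_A = -math.expm1(N * math.log1p(-pdA * py))   # 1 - (1 - pdA*py)^N
    some_B = -math.expm1(N * math.log1p(-pdB * py))
    return prior_A * some_A / (prior_A * some_A + (1 - prior_A) * some_B)

print(posterior_A(1))       # ~0.727: the ordinary Bayesian update
print(posterior_A(10**15))  # ~0.5:   the posterior barely moves from the prior

# Weighting each hypothesis by the expected NUMBER of Experimenters with
# (d, y): the factor N * py multiplies both sides and cancels, so the
# decision reduces to the ordinary likelihood ratio pdA/pdB for every N.
for N in (1, 10**15):
    print((prior_A * N * py * pdA) / ((1 - prior_A) * N * py * pdB))  # 2.666...
```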
Interesting. I guess for this to work, one has to have what one might call a non-indexical morality—one that might favour people very, very much like you over others, but that doesn’t favour YOU (whatever that means) over other nearly-identical people. (I’m going for “nearly-identical” over “identical”, since I’m not sure what it means for there to be several people who are identical.) It seems odd that morality should have anything to do with probability, but maybe it does....
Fair enough. I just thought it was a kind of weird thing for a theory to be sensitive to. I guess the theory is self-consistent although it’s not clear to me how well it matches with the intuitive concept of “probability”.
Carl works Monday through Friday in the European Rain Recording Society in Berlin. He records daily data from two field agents: Colin in London, and Carlos in Madrid.
On Sunday Night, the temporary janitor in his office is careless with his cigarette, and accidentally sets fire to some papers on Carl’s desk. Most are totally destroyed; just the bottom half of one piece of paper remains. It says “It rained here today.” Knowing nothing of the work that is performed in the office, he thinks this is a very odd message. “Today” and “here” are indexicals, and cannot have meaning at face value alone. So the note seems to convey no information. Still, he leaves it on Carl’s desk.
When Carl arrives at work on Monday, and the accident is explained to him, he knows some details that can provide context. Each report he receives from his agents is a single piece of paper with a header at the top that states the date and the name of the reporting agent.
Case A: His agents only report on days when it rained. So he knows that it rained at least once, in at least one of the two cities. This isn’t very useful to Carl, but it isn’t “nothing.”
Case B: His agents report every day, rain or shine. So he knows that four reports were on his desk when the fire started. The burned papers included three reports, and the half-burned one is the fourth. In addition to the information in case A, he can now place a 25% probability on each of the propositions that the report referred to Saturday in London, Saturday in Madrid, Sunday in London, and Sunday in Madrid, respectively.
In other words, when the words “today” and “here” can only be used in a context that resolves the issues linguistics has with indexicals, there is no such issue present. This is different from knowing that context, which is where probability is used.
+++++
“Indexical” is an adjective meaning “of or pertaining to an index.” As used in linguistics, and discussed in the article, it also means “without sufficient context to determine the value that is indexed.” Much like the pronouns “he, she, it, that, …” when no antecedent is evident. With the added meaning, it refers to the usage of the word, not to the word itself.
If an index’s context is not supplied, as in the janitor’s reading of the half-burned paper, it conveys no meaning. But if the context is explicit, as in “Colin Cumberbund; London, England; Saturday June 2, 2018: It rained here today,” then the meaning is explicit despite the fact that by itself the word “today” conveys no information.
So it is a non sequitur to say that such words cannot be used in a logic problem. They can, if context is supplied for the index. And in a probability problem, they can be used if the context narrows the range to a set of values—a sample space—to which you can attach a probability distribution.
Example: On Sunday, Beauty is put to sleep. A six-sided die is rolled. Beauty is wakened on whichever day, over the next six days, that is indexed by the die roll (1=Monday, 2=Tuesday, etc.). She is asked “What is the probability that today is Wednesday?”
The word “today” does not refer to the entire range of days Monday through Saturday. It is used on only one day, and it has one unchanging value over the period when Beauty is awake. The fact that she does not know the value makes it a random variable, not an unfathomable reference. The answer to the question is 1⁄6.
The answer is the same if she is woken twice (with the amnesia drug), based on rolling two dice until the result is not doubles. Or if she is woken once or twice by accepting doubles. It can be used because, even if she is awake on another day, its usage refers to a fixed index into the range. The answer is 1⁄6 because no day is preferred over the others, even when the number of awakenings is uncertain.
+++++
I agree with the above analysis, about betting arguments. But not about the rest.
The error in the above argument is that the details of the experiment do provide context for “today.” But as a random variable, not an explicit value. This is complicated by the coin toss, but using the non sequitur “‘today’ is an indexical so we can’t evaluate it” is a placebo used to avoid analyzing the context. “Raising technical challenges” does not mean “challenges that can’t be met”; it actually means they can.
Still, there are ways to avoid using an indexical in a solution. I suggested one in a comment to part 1: use four Beauties, where each is left asleep under a different combination of {Coin,Day}. Three are wakened each day of the experiment. One of those three will be awakened only once during the experiment. They are asked, essentially, “what is the probability that you will be awake only once?” It was agreed that this question is equivalent the original problem. Since there is no information that makes any of the three more, or less, likely to be the one, the answer is 1⁄3.
Another is to use identical twins Tim and Tom as interviewers. They play “Rock, Paper, Scissors” on Sunday night to see who will interview Beauty first. Beauty can describe the possible outcomes of the experiment based on the two propositions C=”Heads” and I=”Tim first.” Each outcome in the sample space {TT, TF, FT, FF} has a prior (that is, on Sunday) probability of 1⁄4.
Case A: They wear nametags to the interview. If Beauty sees that she is being interviewed by Tim, she knows that the outcome is not TF. That is, that it is not possible that the coin was/will be Heads and Tom interviews first. From this, she can update her probability for the proposition C=True from 1⁄2 to 1⁄3. She can do the same if she is interviewed by Tom.
Case B: They conceal their names. She can assign a probability Q to the proposition that her interviewer is Tim, and 1−Q to the proposition that it is Tom. Regardless of what value Q has, the Law of Total Probability says the answer is 1⁄3.
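Spelled out (my notation, using the Case A results): whichever twin is conducting the current interview, the conditional probability of Heads is 1⁄3, so

$$\Pr(C) \;=\; Q \cdot \Pr(C \mid \text{Tim}) + (1-Q) \cdot \Pr(C \mid \text{Tom}) \;=\; Q \cdot \tfrac{1}{3} + (1-Q) \cdot \tfrac{1}{3} \;=\; \tfrac{1}{3}.$$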
You seem to think that “random” variables are special in some way that avoids the problems of indexicals. They are not. When dealing with epistemic probabilities, a “random” variable is any variable whose precise value is not known with complete certainty.
This is not what I understood you to be proposing. As described here, I would say that this is not the same as the original question, and does not avoid using an indexical. You have simply camouflaged the indexical by omission when you write that “One of those three [who are awake today] will be awakened only once during the experiment.”
The situation with indexicals is similar to the situation with “irrelevant” information. If there is any dispute over whether some information is irrelevant, you condition on it and see if it changes the answer. If it does, the judgment that the information was irrelevant was wrong.
Same thing with indexicals. You may claim that use of an indexical in a proposition is unambiguous. The only way to prove this is to actually remove the ambiguity (replace the indexical with a more explicit statement) and see that this doesn’t change anything. So for your burned paper analogy, “today” and “here” are replaced by “the day on which Carl wrote this note” and “the city of origin for the call for which Carl took this note”. For the dice-throwing example, “What is the probability that today is Wednesday?” can be replaced by “What is the probability that the day on which Beauty experiences y is Wednesday?”, because there can be only one such day: the day on which her last memories before falling asleep are from Sunday.
When we try this for the SB problem, however, a nonzero probability of ambiguity remains. Neal gives one way of removing the ambiguity in terms of information to which Beauty actually has access, that is, her memories and experiences. Doing that leads to an answer that is close to, but not quite the same as, treating “today” as unambiguous. If Beauty has exactly the same experiences y on both Monday and Tuesday, she cannot disambiguate “today”.
That leaves you with a choice: either you must agree that “today” is ambiguous in this problem, or you need to propose a different way of rephrasing the statement “today is Monday” into a form that removes the indexicals and then condition on information Beauty actually has.
I think your equation Pr(H|M) = Σ_i Pr(X_2(i)|M)·p is incorrect [EDIT: actually correct, but it wouldn’t be for computing the probability of Tails; see bottom]. I assume you intend p to mean Pr(H|X_2(0), M) (which equals Pr(H|X_2(1), M)). So the equation says Pr(H|M) = Σ_i Pr(X_2(i)|M)·Pr(H|X_2(i), M). But, as you point out, X_2(0) and X_2(1) are not mutually exclusive. So this equation doesn’t follow from probability theory.
In general, if you have events A, B, C where B and C are not mutually exclusive, then it is not necessarily the case that Pr(A) = Pr(B)·Pr(A|B) + Pr(C)·Pr(A|C). For example, say that A, B, C are all always true. Then the equation says 1 = 1·1 + 1·1, which is false.
EDIT: the equation is actually true. This is because X_2(0) and X_2(1) are mutually exclusive given H. But the equation would be false if H were replaced with T, since X_2(0) and X_2(1) are not mutually exclusive given T.
You’re right, my argument wasn’t quite right. Thanks for looking into this and fixing it.
This does not follow. There’s plenty of interest in things (such as constructing alternative logics) driven by pure curiosity, status-seeking, and other motives that have nothing to do with solving a problem. I support many of those motives, and it’s fun, but that doesn’t mean I have to reject simpler tools when they work. There is no problem with indexicals in classical logic: you can construct clear propositions for any indexical case.
Your thirder argument (I lose twice for tails, but only win once for heads) sounds right to me; the halfer argument is even simpler (I lose for tails, win for heads, just do the math). The key is that they are _different_ propositions. The thirder argument is correct if the bet is resolved once on heads and twice on tails. The halfer argument is correct if the bet is resolved once, no matter how many times the subject is awoken.
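A Monte Carlo sketch of that distinction (the stakes and function name are mine): a bet on Heads is fair at 2:1 when resolved at every awakening, and fair at 1:1 when resolved once per experiment.

```python
import random

TRIALS = 1_000_000

def expected_profit(per_awakening, heads_payout):
    """Average profit per experiment; Beauty stakes 1 on Heads each time
    the bet is resolved. per_awakening=True resolves the bet at every
    awakening (once on Heads, twice on Tails); False resolves it once
    per experiment, no matter how many awakenings there were."""
    total = 0
    for _ in range(TRIALS):
        heads = random.random() < 0.5
        resolutions = 2 if (not heads and per_awakening) else 1
        total += resolutions * (heads_payout if heads else -1)
    return total / TRIALS

print(expected_profit(per_awakening=True, heads_payout=2))   # ~0: fair at 2:1
print(expected_profit(per_awakening=False, heads_payout=1))  # ~0: fair at 1:1
```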
When we say “The dog found a bone”, we are assuming that the person we’re speaking to will be able to figure out which dog we’re referring to. The proposition is about that dog. It’s not about the process the speaker goes through to infer which dog we’re talking about, which could involve all sorts of background knowledge regarding the speaker, the listener, the dogs in the neighborhood, laws restricting when dogs are allowed off leash, etc. The same for “I” or “today”. It’s a convenience in ordinary speech. There’s no need to introduce this issue in the Sleeping Beauty problem, since the problem is about inferring the result of a single coin flip, not about inferring what day of the week today is (whatever that means).
I’m not sure I understand what you’re saying about bets. Surely you’re not saying that Beauty’s belief about whether the coin flip was Heads depends on what betting scheme (if any?) has been set up? For the “only one” bet case, you might want to look at my reply to Part 1 of this post.
I think I agree completely with your first paragraph. There’s no need to introduce indexical complexity, and classical probability copes with this just fine.
In all cases, the probability is 50% (or, once known, it’s 1 or 0). The casual discussion of betting methodology conflates probability and payouts. If there’s a 50% chance that you’ll lose twice and a 50% chance that you’ll win once, you should require a 2:1 payout on the bet, which leads people to say 33% probability when they combine the two. If you’ll only lose once EVEN WHEN ASKED TWICE, then 1:1 odds are fine and 50% is clear.
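In arithmetic form (my notation, with a unit stake on Heads):

$$\tfrac{1}{2}(+2) + \tfrac{1}{2}(-1-1) = 0 \;\;\text{(resolved every awakening, 2:1)}, \qquad \tfrac{1}{2}(+1) + \tfrac{1}{2}(-1) = 0 \;\;\text{(resolved once, 1:1)}.$$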
I don’t know what you mean by “in all cases, the probability is 50%”. What situation are you referring to? I’m arguing that Beauty’s probability of Heads should be 1⁄3 when woken on Monday or Tuesday, and I believe this is quite consistent with obtaining good results in any betting scenario, when the decision is made properly based on this probability. If you think Beauty’s probability for Heads should be 1⁄2, you need to do more than just assert this.
Sleeping Beauty (SB) volunteers for this experiment, and is told all these details by a Lab Assistant (LA):
I will put you to sleep tonight (Sunday) with a drug that lasts 12 hours. After you are asleep, I will flip two coins—a Dime and a Nickel. I will lock them in an opaque box that has a sensor which can tell if at least one coin inside is showing Tails.
I will then administer a drug to myself, that erases my memory of the last 12 hours, and go to sleep in the next room.
Until I am stopped (which will happen on Wednesday morning), when I wake up in the morning I will perform the following procedure:
If the box’s sensor says neither coin is showing Tails, I will administer a drug to you (in your sleep) that extends your sleep another 24 hours.
If the box’s sensor says that at least one coin is showing Tails, I will let you wake up. I will say to you: “Before I looked at the box this morning, the probabilities the coins were showing HH, HT, TH, or TT were all 1⁄4. Now that we’ve proceeded to the awake portion of this procedure, what probability should each of us give that the Dime is currently showing Heads?” After receiving an answer, I will administer the amnesia drug to you, and then the 12-hour sleep drug.
In either case, after you are asleep I will open the box, turn the Nickel over to show its other face, administer the amnesia drug to myself, and go to sleep in the next room.
Questions:
1) Is the question “What side is the Dime currently showing?” functionally different, in any way and on either day, than the question “How did the Dime land on Sunday Night?”
2) Is LA’s probability distribution wrong in any way?
I think these answers are both “no.” So LA can answer the probability question (s)he asks. Since case HH is eliminated, in LA’s world the probability that the Dime is showing Heads is 1⁄3.
3) Is SB’s prior probability distribution for the two coins the same as LA’s?
4) Is SB’s information the same as LA’s?
I think these answers are both “yes.” SB’s answer is the same as LA’s. The probability is 1⁄3.
As far as I can tell, a halfer will say that the answer to #4 is a definite “no.” #3 is unclear to me, but how they address it seems to be how they justify saying her information is different.
I think Halfers use a Schrödinger’s-Cat-like argument where HH and HT are both true at the same time: HH cannot be eliminated because HT can’t be. Literally, they seem to say that SB can’t consider the current state of the Nickel (which corresponds to the day in the original experiment) to be a random variable, since it shows both faces during the experiment. That’s an invalid argument here, since the sensor is based on what the Nickel is currently showing.
5) How is this question, in SB’s world, any different than the original SB problem?
I’d really like to hear a halfer’s answer. Because it isn’t.
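For what it’s worth, the protocol can be checked exhaustively (a sketch; the coin-state encoding is mine):

```python
from itertools import product

awake_states = []
for dime, nickel in product("HT", repeat=2):       # Sunday flips, each 1/4
    day1 = (dime, nickel)
    day2 = (dime, "T" if nickel == "H" else "H")   # Nickel turned over overnight
    for state in (day1, day2):
        if "T" in state:                           # sensor: at least one Tails
            awake_states.append(state)             # SB is wakened and asked
heads = sum(d == "H" for d, _ in awake_states)
print(heads, "/", len(awake_states))               # 2 / 6, i.e. 1/3
```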
Here is yet another variation of the problem that I think perfectly identifies the source of the controversy. The experiment’s methodology is the same as the original, except in these four details:
(1) Two coins, a Nickel and a Quarter, are flipped on Sunday Night.
(2) On either day of the experiment, Beauty is wakened if either of the two coins is showing Tails.
(3) On Monday Night, while Beauty is asleep, the Nickel is flipped over to show its opposite face.
(4) Beauty is asked the same question, but about the Quarter.
The only functional difference is that there is a 50% chance that the “optional” waking occurs on Monday instead of Tuesday. Since Beauty does not know the day in either version, this cannot affect the result; she is still wakened once if the Quarter landed on Heads, and twice if it landed on Tails.
The controversy boils down to whether the current state of the Nickel can be called a random variable, or whether, much like Schrödinger’s Cat while its box is unopened, the Nickel has to be considered to be in both states simultaneously for the purposes of the experiment.
Halfers treat it as both: the Nickel shows both Heads and Tails during the experiment, so Beauty cannot use it as a random variable. This is the crux of Radford Neal’s argument in the original experiment: that “today” is an indexical because it has both the value “Monday” and the value “Tuesday” during the experiment, so it can’t be used as evidence.
The thirder’s argument is that what the Nickel is currently showing is not an indexical at all. While Beauty is awake, it has only one value. That value is unknown, and can have either value with probability 50%. So there are four states for {Nickel, Quarter} that, at any time during the experiment, are equiprobable in the prior. And that the evidence Beauty has, based on the fact that she is awake, eliminates {Heads, Heads} as a possibility. This makes the probability that the Quarter landed on Heads 1⁄3.
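Tallying the four equiprobable Sunday outcomes for {Quarter, Nickel} (my arithmetic): HH and HT each produce exactly one awakening, TH and TT each produce two, and the Quarter shows Heads only in the first two. So

$$\Pr(\text{Quarter} = H \mid \text{awake}) \;=\; \frac{1+1}{1+1+2+2} \;=\; \frac{2}{6} \;=\; \frac{1}{3}.$$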
I don’t understand the reasoning for using irrelevant information.
If you are saying that there is twice the probability of experiencing y “at least once” on tails, doesn’t that fail for the same argument Conitzer gave against halfers? His example was that you wake up both days and flip a coin. If you flip heads, what is the probability that both flips are the same? You are twice as likely to experience heads at least once if the coin tosses are different. But it is irrelevant. The probability of “both the same” is still 1⁄2.
On the other hand, in reality there might be some relevant information (such as noticeable aging, hunger, etc) but the problem is meant to exclude that.
I don’t understand your question. Are you saying that Beauty flips a coin whenever she wakes up? And she then wonders whether the coin she just flipped is the same as another coin she has flipped or will flip? But she may not wake up on Tuesday, in which case there aren’t two flips, so I don’t understand....
I’m referring to an example from here: https://users.cs.duke.edu/~conitzer/devastatingPHILSTUD.pdf where you do wake up both days.
Your argument seemed similar, but I may be misunderstanding:
“Treating these and other differences as random, the probability of Beauty having at some time the exact memories and experiences she has after being woken this time is twice as great if the coin lands Tails than if the coin lands Heads, since with Tails there are two chances for these experiences to occur rather than only one.”
It sounds like you are conditioning on “at least once such experiences occur”. That is, if Beauty wakes up and flips a coin, getting heads, and that’s the only experience she has so far, she will condition on “at least one heads.” This doesn’t seem generally correct, as the linked example covers. Doesn’t it also mean that, even before the coin flip, she would know exactly how she was going to update her probability afterward, regardless of result?
Perhaps the issue here is that if you wake up and flip heads, that isn’t the same thing as if, on Sunday, you asked “will I flip at least one heads?” and got an affirmative answer. The latter is relevant to the number of wakings. The former is not.
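A quick simulation of that distinction (a sketch; modeling the current awakening as a uniformly random one of the two is my assumption): conditioning on the flip actually seen at this awakening gives 1⁄2 for “both the same”, while conditioning on “at least one heads” gives 1⁄3.

```python
import random

# Conitzer-style example: Beauty wakes both days and flips a fair coin
# each day. She has just seen heads; are both days' flips the same?
TRIALS = 1_000_000
seen_heads = [0, 0]        # [same-count, total] among awakenings seeing H
at_least_one = [0, 0]      # naive "at least one heads" conditioning

for _ in range(TRIALS):
    f1, f2 = random.choice("HT"), random.choice("HT")
    seen = random.choice((f1, f2))        # the flip at this awakening
    if seen == "H":
        seen_heads[1] += 1
        seen_heads[0] += (f1 == f2)
    if "H" in (f1, f2):
        at_least_one[1] += 1
        at_least_one[0] += (f1 == f2)

print(seen_heads[0] / seen_heads[1])      # ~1/2: the correct answer
print(at_least_one[0] / at_least_one[1])  # ~1/3: the naive conditioning
```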
The crucial point is that Beauty’s experiences on wakening will not be confined to whatever coin flips may have been added to the experiment, but will also include many other things, such as whether or not her nose itches, and how much the fluorescent light in her room is buzzing. The probability of having a specific set of ALL her experiences is twice as great if she is woken twice (more precisely, approximately twice as great, if this probability is small, as it will be in any not-very-fantastical version of the problem).
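In symbols (my notation, with p the small probability of the exact experience-set y on any one waking, and treating the two days’ experiences as independent):

$$\Pr(y \text{ at least once} \mid \text{Tails}) = 1 - (1-p)^2 = 2p - p^2 \approx 2p, \qquad \Pr(y \text{ at least once} \mid \text{Heads}) = p.$$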
Arguing that whether or not her nose itches is irrelevant, and so should not be conditioned on, is contrary to the ordinary rules of probability, in which any dispute over whether some information is relevant or not is settled by simply including it, which makes no difference if it is actually irrelevant. Refusing to condition on such information is like someone who’s solving a physics problem saying that air resistance can be ignored as negligible, and then continuing to insist that air resistance should be ignored after being shown a calculation demonstrating that including its effects has a substantial effect on the answer.
The question is about what information you actually have.
In the linked example, it may seem that you have precisely the information “there is at least one heads.” But if you condition on that, you get the wrong answer. The explanation is that, in this type of memory-loss situation, waking up and experiencing y is not equivalent to “I experience y at least once.” When you wake up and experience y, you do know that you must experience y on either Monday or Tuesday, but your information is not equivalent to that statement.
If you asked on sunday “will I experience y at least once?” then the answer would be relevant. But if we nailed down the precise information gained from waking up and experiencing y, it would be irrelevant.
Beauty’s information isn’t “there is at least one head”, but rather, “there is a head seen by Beauty on a day when her nose itches, the fly on the wall is crawling upwards, her sock is scrunched up uncomfortably in her left shoe, the sharp end of a feather is sticking out of one of the pillows, a funny tune she played in high school band is running through her head, and so on, and so on.”
I’m talking about the method you’re using. It looks like when you wake up and experience y you are treating that as equivalent to “I experience y at least once.”
This method is generally incorrect, as shown in the example. Waking up and experiencing y is not necessarily equivalent to “I experience y at least once.”
If you yourself believe the method is incorrect when y is “flip heads”, why should we believe it is correct when y is something else?
After my other response to this, I thought a bit more about the scenario described by Conitzer. A completely non-fantastic version of this would be as follows (somewhat analogous to my Sailor’s Child problem, though the whole child bit is not really necessary here):
You have two children. At age 10, you tell both of them that their Uncle has flipped two coins, one associated with each child, though the children are told nothing that would let them tell which is “their” coin. When they turn 20, they will each be told how “their” coin landed, in two separate rooms so they will not be able to communicate with each other. They will then be asked what the probability is that the two coin flips were the same. (The two children correspond to the two awakenings of Beauty.)
If you are one of these children, and are told that “your” coin landed heads, what should you give for the probability that the two flips are the same? It’s obvious that the correct answer is 1⁄2. But you might argue that there are four equally-likely possibilities for the two flips (HH, HT, TH, and TT), and that observing a head eliminates TT, giving a 1⁄3 probability that the two flips are the same.
This is of course an elementary mistake in probabilistic reasoning, caused by not using the right space of outcomes. Suppose that one of the children is left-handed and one is right-handed. Then there are actually eight equally-likely possibilities—RHH, LHH, RHT, LHT, RTH, LTH, RTT, LTT—where the initial R or L indicates whether the first coin is for the right-handed child or the left-handed child. Suppose you are the right-handed child. Observing heads eliminates LHT, RTH, RTT, and LTT, with the remaining possibilities being RHH, LHH, RHT, and LTH, in half of which the flips are the same. So the answer is now seen to be 1⁄2.
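An exhaustive enumeration of those eight outcomes (a sketch; the encoding is mine, with “R”/“L” marking whose coin was flipped first):

```python
from itertools import product

# Outcomes are (owner_of_first_coin, flip1, flip2), all eight equally likely.
outcomes = list(product("RL", "HT", "HT"))

def my_flip(owner, f1, f2, me="R"):
    """The flip belonging to the right-handed child."""
    return f1 if owner == me else f2

# Condition on "my" coin landing heads (I am the right-handed child).
consistent = [(o, f1, f2) for o, f1, f2 in outcomes
              if my_flip(o, f1, f2) == "H"]
same = sum(f1 == f2 for _, f1, f2 in consistent)
print(same, "/", len(consistent))   # 2 / 4, i.e. 1/2
```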
But why is this the right answer to this non-fantastical problem? (I take it that it is correct, and that this is not controversial.) The reason is that we know how probability works in ordinary situations, in which personal identities are clear, because everyone has different experiences. If instead we make a fantastic assumption that the two children are identical twins raised apart in absolutely identical environments, and therefore have exactly the same thoughts and experiences, up until the point at age 20 when they are told possibly-different things about how their coins landed, it may not be so clear that 1⁄2 is the right answer. It’s also not so clear that this fantastic scenario is possible, or of interest. It certainly would not be a good idea to treat it as being just the same as the non-fantastical scenario, apart from a little simplifying assumption about identical experiences...
In any not-completely-fantastical scenario, Beauty’s experiences on Monday are very unlikely to be repeated exactly on Tuesday, so “experiences y” and “experiences y at least once” are effectively equivalent. Any argument that relies on her sensory input being so restricted that there is a substantial probability of identical experiences on Monday and Tuesday applies only to a fantastical version of the problem. Maybe that’s an interesting version of the problem (though maybe instead it’s simply an impossible version), but it’s not the same as the usual, only-mildly-fantastical version.
I’m having a really hard time pinpointing where there’s an error in the analysis, but something is still just not right. There is no indexical uncertainty in locating the event of the coin being flipped on Tuesday. Any information relevant to that event can only be considered information if there is a different probability of it being received if the coin is heads rather than tails. Any stream of bits that Beauty receives has the same probability no matter what that event is. So her probability of that event simply cannot be updated in any direction. Where does this reasoning go wrong?
On the contrary, the probability that Beauty has some particular set of Monday/Tuesday experiences is twice as great if she is woken on both days than if she is woken only on Monday (assuming that the probability in any case is very small, as it will be for any ordinary set of waking experiences).
Have you looked at my Sailor’s Child problem (in the referenced paper)? It is intended to be completely analogous to the only-mildly-fantastical (ie, original) version of Sleeping Beauty, while being totally non-fantastical. I assume that agreement can be obtained on completely non-fantastical problems, and I think the answer for Sailor’s Child is clearly 1⁄3, not 1⁄2, so if it is indeed analogous to Sleeping Beauty, that shows that 1⁄3 is the correct answer there as well. If you think it is not analogous, then in what relevant way is it different?
My paper also lists various other variations on Sleeping Beauty (eg, introducing a “Prince”), which also seem to me to definitively establish that the correct answer is 1⁄3. Plus there are the betting arguments, including one I talk about in my reply to Part 1 of this post. They also seem definitive to me, unless you are willing to turn “probability” into something that isn’t a guide to decision-making.
Ok, this sentence made everything snap into place for me. Thanks. The Sailor’s Child problem is also helpful. This has been an interesting journey. I was originally a one-thirder based on betting arguments, and then became convinced from the original post that that is indeed a red herring, and so momentarily became a halfer, and now that you’ve clarified this I’m back to being a thirder to within epsilon.