I’m afraid this makes no sense to me. I think this comes from my not understanding how the concept of a “reference class” can possibly work. So I have no idea what it could mean to “observe the world from the perspective of any human that is male”, if observing from that “perspective” is supposed to change the probability (or render the probability meaningless) of some statement that I would take to be about the actual, real, world.
As I’ve pointed out before, the Sleeping Beauty problem is only barely a thought experiment—with a slight improvement over current memory-affecting drugs, it would be possible to actually run the experiment. It’s not like a thought experiment involving hypothetical computer simulation of people’s brains, or some such, in which one might perhaps think that common sense reasoning is not applicable.
So consider an actual run of the experiment. Suppose that at the time Beauty agrees to take part in the experiment, she fails to remember that she had already agreed to participate in a different experiment on Monday afternoon. The Sleeping Beauty experimenters have promised to pay her $120 if she completes their experiment, while the other experimenters have promised to pay her $120+X, and her motivation is to maximize the expected amount of her earnings. On some awakening during the Sleeping Beauty experiment, Beauty realizes that she had forgotten about the other experiment, and considers leaving to go participate in it. Of course, she then wouldn’t get the $120 for participating in the Sleeping Beauty experiment, but if it’s Monday, she would get the $120+X for participating in the other experiment. Now if it’s Tuesday, the other experiment has already been cancelled. So she needs to consider the probability that it’s Monday in order to make a good decision.
It’s not actually relevant to my point, but here is how it seems to me the probabilities work out. Suppose that Beauty has probability p of remembering the other experiment whenever she awakens, and suppose that this is independent for two awakenings (as is consistent with the assumption that her mental state is reset before a second awakening). To simplify matters, let’s suppose (and suppose that Beauty also supposes) that p is quite small, so the probability of Beauty remembering the other experiment on both awakenings (if two happen) is negligible.
Since p is small, Beauty’s probability for it being Monday given that she has woken and remembered the other experiment should be essentially as usual for this problem, with the answer depending on whether she is a Halfer or a Thirder. (If p were not small, she might need to downgrade the probability of Tuesday because there might be a non-negligible chance that she would have left the experiment on Monday, eliminating the Tuesday awakening.)
If she’s a Thirder, when she wakes and remembers the other experiment, she will consider the probability that it is Monday to be 2⁄3, and will leave for the other experiment if (2/3)(120+X) is greater than 120, that is, if X is greater than 60. If she is a Halfer, it’s harder to say, since Halferism is wrong, but let’s suppose that she splits the 1⁄2 probability of two awakenings equally, and hence thinks the probability of it being Monday is 3⁄4. She will then leave if (3/4)(120+X) is greater than 120, that is, if X is greater than 40. We can also look at things from a frequentist perspective, and ask what her expected payment is if she always decides to leave when she remembers the other experiment. It will be (1-p)120 + p(120+X) conditional on the coin landing Heads, and (1-p)(1-p)120 + p(120+X) conditional on the coin landing Tails, for a total expectation of (1-(3/2)p)120 + p(120+X), ignoring the p-squared term as being negligible. This simplifies to (1-p/2)120 + pX, which is greater than 120 if X is greater than 60, in agreement with the Thirder reasoning.
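The frequentist expectation above can be checked numerically. Here is a minimal Monte Carlo sketch (the payoffs and the independence of remembering on each awakening are taken from the setup above; the function name and the particular values of p and X are my own illustration):

```python
import random

def expected_payment(X, p=0.05, trials=200_000):
    """Estimate Beauty's average payment when she always leaves for the
    other experiment upon remembering it.  Completing Sleeping Beauty
    pays $120; the other experiment pays $120+X, but only on Monday."""
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        if random.random() < p:
            total += 120 + X        # remembers on Monday, leaves, gets paid
        elif heads:
            total += 120            # Heads: single awakening, completes
        elif random.random() < p:
            total += 0              # Tails: remembers on Tuesday, leaves,
                                    # but the other experiment is cancelled
        else:
            total += 120            # Tails: never remembers, completes
    return total / trials
```

For small p this should agree with the closed form (1-(3/2)p)·120 + p(120+X); e.g. with p=0.05 and X=100 the exact expectation is 122.15.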
In any case, though, if I’ve understood you correctly, you deny that there is any meaning to “the probability that it’s Monday” in this situation. So how is Beauty supposed to decide what to do?
I do find some of the ideas related to anthropic reasoning hard to express. Let me try another expression and see if it is better. “The probability of me being a man” in the anthropic sense means the probability of me being born into this world as a human male. Or it can be seen as the probability of my soul getting embodied as a human male. This is what I meant by “experiencing the world from the perspective of a man”. I think that even though “I’m a man” is a valid statement, “the probability of me being a man” does not exist. It might be tempting to say the probability is simply 1⁄2. But that implies I can only be a man or a woman, and that the probability of me being anything else, such as a chimpanzee or an alien, is zero. There is no basis for that. A further problem involves the possibility of me not being born into this world at all. Trying to assign a value to this probability is impossible, because to construct a sample space a reference class containing “me” is needed. However, from the first-person perspective this “me” is defined by the perspective center. It is inherently unique, i.e. there is nothing else in its reference class. So a first-person identity cannot be used in such questions. Someone has to be identified from a third-person perspective instead, because from a third-person perspective no one is inherently special, so specifying someone would involve a process to single them out. Information about this process would determine the reference class. This is why both SSA and SIA argue that the first-person “me” is conceptually equivalent to someone randomly selected from a certain group: that way they get a reference class, which subsequently allows them to assign a value to such probabilities. However, this equivalency means mixed reasoning from the two perspectives. Using the first-person “me” interchangeably with the proposed third-person identity leads to other anthropic paradoxes.
I do agree that the Sleeping Beauty experiment is physically possible. For the experiment to take place the memory wipe doesn’t even need to be perfect. As long as it is accurate enough to fool the human mind it will work. Since the human mind can only contain a finite amount of information, nothing in theory denies its feasibility. I also appreciate that you present your argument with experiments and numbers. I find discussing clearly defined examples much easier. Allow me to explain our differences.
First of all, I agree with the calculations (with the exception of the frequentist repetitions, which I discuss in Section 4). Secondly, I also agree that maximizing someone’s own earnings would force the decisions to reflect the probability. Our difference is regarding how the reward should be handled. The whole reason the Sleeping Beauty problem is related to anthropic reasoning is that it involves an observer duplication. That is, with the memory wipe, Monday Beauty and Tuesday Beauty are two separate entities (at least from their own perspectives). So they should have distinct rewards. Monday Beauty’s correct decision should benefit Monday Beauty alone, and Tuesday Beauty’s wrong decision should punish Tuesday Beauty only. In the example you presented that is not the case. The potential reward is always given to whoever exists at the end of the experiment. In this kind of setup, even when Beauty experiences the status reset, her rewards never do. It is as if her money doesn’t participate in the duplicating experiment as she does. I disagree with this discrepancy. In this question Beauty is no longer trying to maximize the reward to the obvious first-person “me” but to maximize the accumulated reward at the end of the experiment. This shift in objective means that instead of strategizing directly from the first-person perspective, Beauty should strategize from the perspective of a non-participant, i.e. a third person. So instead of using the first-person center to define a self-explanatory “today”, the specific day in question is defined in the third person. For example, it is calculating the probability that the day Beauty remembers the other experiment is a Monday.
Constructing a Sleeping Beauty experiment with appropriate rewards and repetitions is quite troublesome (for example, questions such as “if you do not have a chance to spend the money, is it still a reward?” come into play). I want to present my argument in a different but also physically feasible experiment. Suppose when you go to sleep tonight a clone of you is created and put into an identical room. The clone is highly accurate: it retains your memory well enough that he fully believes he is the one who fell asleep yesterday. After waking up in the experiment you ask yourself “what is the probability that I am the original?”. My position is that there is no such probability. Notice the “I” here is the self-explanatory “I” from the first-person perspective. If a third-person specification is used instead, e.g. “the probability of a randomly chosen copy being the original”, then there is no mix of perspectives and it is obviously valid. Now suppose that if you guessed correctly a reward of $1 would be given to you, and the experiment is repeated many times: when you fall asleep again on the second day another clone would be created, the same happens the third day, etc. During this process your money is cloned along with you. Every day you get a chance to guess whether you are the original with respect to the previous night. Here, to earn the maximum reward for yourself, the only consideration is to guess correctly. I argue there is no strategy for that objective. Again, notice I’m not trying to come up with a strategy that would maximize the total or average money owned by all copies. The objective is much more direct: to maximize the money owned by me, myself.
“The probability of me being a man” in the anthropic sense means the probability of me being born into this world as a human male. Or it can be seen as the probability of my soul getting embodied as a human male. … even though “I’m a man” is a valid statement, “the probability of me being a man” does not exist
Here, you have imported some highly questionable ideas, which would seem to be not at all necessary for analysing the Sleeping Beauty problem. This is my core objection to how Sleeping Beauty is used—it’s an almost-non-fantastical problem that people take to have implications for these sorts of anthropic arguments, but when correctly analysed, it does not actually say anything about such issues, except in a negative sense of revealing some arguments to be invalid.
You should also note that your use of “probability” here does not correspond to any use of this word in normal reasoning. To see this, consider “the probability of my having blue eyes”. I take this to be in the same class as “the probability of me being a man”, but it allows for less-ridiculous thought experiments. Suppose you are a member of an isolated desert tribe. There are no mirrors, and no pools of water in which you could see your reflection. The tribe also has a strong taboo against telling anyone what colour their eyes are. So you don’t know what colour your eyes are. Do you maintain that “the probability that my eyes are blue” does not exist? Can’t you look at the other members of the tribe, see what fraction have blue eyes, and take that as the probability that you have blue eyes? Note that this may have practical implications regarding how much care you should take to avoid sun exposure, to reduce your chance of developing glaucoma.
I assume that you do think “the probability that my eyes are blue” is meaningful in this scenario. You seem to have in mind only something like prior probabilities, not conditional on any observations. But all actual practical uses of probability are conditional on observations, so your discussion is reminiscent of the proverbial question of “how many angels can dance on the head of a pin?”.
I also agree maximizing someone’s own earning would force the decisions to reflect the probability.
I’m not sure what exactly you’re agreeing about here. Do you maintain that “the probability that it is Monday” does not exist, until Beauty happens to remember the other experiment, at which point it suddenly becomes meaningful? If so, why can’t Beauty just imagine that there is some such practical reason to want to know whether it is Monday, calculate what the probability is, and then take that to be the probability of it being Monday even though she doesn’t actually need to make a decision for which that probability would be needed? Seems better than claiming that the probability doesn’t exist, even though this procedure gives it a well-defined value...
The whole reason sleeping beauty problem is related to anthropic reasoning is because it involves an observer duplication. … So they should have distinct rewards. Monday beauty’s correct decision should benefit Monday Beauty alone...
There’s a methodological issue here. I’ve presented a variation on Sleeping Beauty that I claim shows that “the probability that it’s Monday” has to be a meaningful concept for Beauty. You say, “but if I look at a different variation, that argument doesn’t go through.” Why should that matter, though? If my variation shows that the probability is meaningful, that should be enough. If this shows that Sleeping Beauty is not related to anthropic reasoning, so be it.
However, there’s no problem making the reward be for “Beauty in the moment”. Suppose that when Beauty wakes up, she sees a plate of cookies. She recognizes them as being freshly baked by a bakery she knows. She also knows that on Mondays, but not Tuesdays, they put an ingredient in the cookies to which she is mildly allergic, causing immediate, painful stomach cramps. She also knows that the cookies are quite delicious. Should she eat a cookie? Adjust the magnitudes of possible pleasure and pain as desired to make the question interesting. Shouldn’t the probability of it being Monday be meaningful?
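The cookie decision reduces to a simple expected-utility comparison, which is only possible once some probability of “today is Monday” is granted. A minimal sketch (the utility numbers are purely illustrative, not from the scenario above):

```python
def should_eat(p_monday, pain, pleasure):
    """Eat the cookie iff expected utility is positive.
    On Monday the cookie causes 'pain' (allergic reaction);
    on Tuesday it gives only 'pleasure'.  Utilities are illustrative."""
    return (1 - p_monday) * pleasure - p_monday * pain > 0
```

With pain=10 and pleasure=25, a Thirder (P(Monday) = 2⁄3) eats the cookie while a Halfer using P(Monday) = 3⁄4 declines, so the decision genuinely turns on which probability Beauty assigns.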
Suppose when you go to sleep tonight a clone of you would be created and put into an identical room. The clone is highly accurate it retains the memory good enough so he fully believes he is the one fall asleep yesterday.
Note that this is now a completely fantastical thought experiment, in contrast to the usual Sleeping Beauty problem. It may be impossible in principle, given the quantum no-cloning theorem. I also don’t know how this is supposed to work in conjunction with your previous reference to “souls”. I don’t think this extreme variation actually shows anything interesting, but if it did, you’d need to ask yourself whether the need to resort to this fantasy indicates that you’re in “angels dancing on the head of a pin” territory.
Thank you for the speedy reply professor. I was worried that with my slow response you might have lost interest in the discussion.
Forgive me for not discussing the issues in the order you presented. But I feel the most important argument I want to challenge is the claim that the Sleeping Beauty problem is physically possible while the cloning experiments are strictly fantastical.
In the cloning experiment the goal is not to make a physically exact copy of someone but to make the copy accurate enough that a human could not differentiate. This is no different from the Sleeping Beauty problem. Considering the limitations of human cognitive ability and memory, this doesn’t remotely require an exact quantum-state copy. Unless you take the position that human memory is so sophisticated that it is quantum-state dependent. But then it would mean that reverting Beauty’s memory to an earlier state would require her brain to change back to a previous quantum state. Complete information about that quantum state cannot be obtained unless she is destroyed at the time, i.e. Sleeping Beauty would violate the no-cloning theorem and thus be non-feasible as well. Apart from memory there is also the problem of physical differences. It is understood that during the first day Beauty would inevitably undergo some physical changes: her hair may grow, her skin ages a tiny bit, etc. This is not considered a problem for the experiment because it is understood that humans cannot pick up on such minuscule details. So even with these physical changes Beauty would still think this is her first awakening after waking up the second time. The same principle applies to the cloning example. As long as the copy is physically close enough that human cognitive ability cannot notice any differences, the clone would believe he is the original. In summary, if the Sleeping Beauty problem is physically possible the cloning example must be as well. In this problem, after waking up in the experiment, “the probability of me being the original” makes no sense. Even if you consider repeating the experiment many times there is no answer to it. Again, it is referring to the primitively understood first-person “me”, not someone identified from a third-person perspective, such as “the probability of a randomly selected copy being the original.”
As for the question of my soul getting incarnated into somebody: this is not my idea. Various anthropic schools of thought lead to such an expression. For example, in the Doomsday Argument’s prior probability calculation, SSA argues I am equally likely to be born as any human that ever exists. SIA adds on top of this and suggests I am more likely to be born into this world if more humans ever exist. They both closely resemble the idea of a soul being embodied from a pool of candidates. I mentioned this expression because it neatly describes what “the probability of me being a man” refers to in the anthropic context. And let’s not lose the big picture here: I am arguing such probabilities do not make sense. So I completely agree with you that using soul embodiment as an experiment to assign probability is highly questionable. In fact I am arguing such notions are outright wrong.
Regarding the case of eye color, quite clearly we are not discussing anything resembling the above idea. By surveying other people in the tribe I would know what percentage of the tribesmen have blue eyes. If we say that percentage is the probability of someone having blue eyes, then there is an underlying assumption that this someone is an ordinary member of the tribe. He is not special. This is against the first-person perspective, in which I am inherently different from anybody else. Meaning this person is identified among the tribesmen from a third-person perspective. Therefore that percentage is not the probability that the first-person “I” have blue eyes, but rather the probability of a randomly selected tribesman having blue eyes. An optimal level of avoiding sun exposure can be derived from that survey number. However it cannot be said that this strategy is optimal for myself. All we know is that if every tribesman follows this strategy then it would be optimal for the tribe as a whole.
I think by using a betting argument there is an underlying assumption that someone trying to maximize her own earnings would follow a strategy determined by the correct probability. This I agree with. However, that holds only when the decision maker and the reward receiver are the same person. That is to say, if Beauty is contemplating the probability of “today” being Monday, then the reward for a correct guess should be given to today’s Beauty. That’s what I meant by Monday Beauty’s correct decision should reward Monday Beauty alone. In the setup you presented that is not the case. In your setup the objective is to maximize the accumulated earnings. For this objective the concept of a self-explanatory “today” is never used. So the calculation is not reflecting the probability of “today being Monday”, but rather the probability that “the day Beauty remembers the previous experiment is Monday”. Essentially it has the same problem as the eye color example: the first-person-centered concept of “today” is switched to a third-person identity. If we go back to the cloning experiment, you are arguing that after waking up, “the probability of a randomly selected copy being the original” is valid and meaningful. I agree with this. I am arguing that the first-person-centered “probability of ‘me’ being the original” does not exist.
For the cookie experiment, yes, the painful reaction and delicious bliss are of course meaningful. But that only means “today is Monday” and “today is Tuesday” are both meaningful to her. This I never argued against. However, if a probability of “today is Monday” exists, then there should be an optimal strategy for “Beauty in the moment” to maximize her pleasure. Notice strategies exist to maximize the pleasure throughout the two-day experiment. Strategies also exist to optimize the pleasure of the Beauty who exists at the end of the experiment. But there is no strategy to maximize the pleasure of this self-apparent “Beauty in the moment”. We can even repeat the experiment for this “today’s Beauty”. Let her sleep now and enter another round of the Sleeping Beauty experiment. Instead of the two potential awakenings being 24 hours apart, this time they are 12 hours apart. So this new experiment fits into one day, and Beauty would not experience the memory wipe from the original experiment. (Here I’m assuming the actual awakening and interviewing take no time, for ease of expression.) Again, on the first awakening she would be given allergic cookies and on the second awakening good cookies. When she wakes up she would be facing the same choice again. We can repeat the experiment further, with later iterations’ awakenings closer and closer together. But there is no strategy to maximize a “Beauty in the moment’s” overall pleasure. (This shows why I want to use the cloning example: repeating the Sleeping Beauty experiment from Beauty’s first-person perspective is very messy, and questions such as “if the pain is completely forgotten does it still matter” come into play.)
I don’t think the issue of whether “cloning” is possible is actually crucial for this discussion, but since this relates to a common lesswrong sort of assumption, I’ll elaborate on it. I do think that making a sufficiently accurate copy is probably possible in principle (but obviously not now, and perhaps never, in practice). However, I don’t think this has been established. It seems conceivable that quantum effects are crucial to consciousness—certainly physics gives us no fundamental reason to rule this out. If this is true, then “cloning” (not the usual use of the word) by measuring the state of someone’s body and then constructing a duplicate will not work—the measurement will not be adequate to produce a good copy. This possibility is compatible with there being some very good memory-erasing drug, which need only act on the quantum state of the person in a suitable way, without “measuring” it in its entirety. So I don’t agree with your statement that “if sleeping beauty problem is physically possible the cloning example must be as well”. And even if true in principle, there is a vast difference in practice between developing a slightly better amnesia drug—I wouldn’t be surprised if this was done tomorrow—and developing a way of measuring the state of someone’s brain accurately enough to produce a replica, and then also developing a way of constructing such a replica from the measurement data—my best guess is that this will never be possible.
This practical difference relates to a different sense in which your cloning example is “fantastic”. Even if we were sure that it was possible in principle to “clone” people, we should not be sure that the methods of reasoning that we have evolved (biologically and culturally) will be adequate in a situation where this sort of thing happens. It would be like asking a human from 100,000 years ago to speculate on the social effects of Twitter. With social experience confined to a tribe of a dozen closely-related people, with occasional interactions with a few other tribes, not to mention a total ignorance of the relevant technology, they would be utterly incompetent to reason about how billions of people will interact when reacting to online text and video postings.
In this discussion, I get the impression that considering fantastical things like cloning leads you to discard common-sense realism. Uncritically applying our current common-sense notions might indeed be invalid in a world where you can be duplicated—with the duplicate having perhaps first had its memories maliciously edited. There are lots of interesting, and difficult, issues here. But these are not issues that need to be settled in order to settle the Sleeping Beauty problem!
In your cloning example, you abandon common sense realism for no good reason. Since you talk about an original versus the clone, I take it that you see the experimenters as measuring the state of the original, without substantially disturbing it, and then creating a copy (as opposed to using a destructive measurement process, and then creating two copies, since then there is obviously no “original” left). In this situation, the distinction between the original and the copy is completely clear to any observer of the process. When they wake up, both the copy and the original do not know whether or not they are the original, but nevertheless one is the original and one is not. They can find out simply by asking the observer (and of course there are other possible ways—as is true for any fact about the world). Before they find out, they can assess the probability that they are the original, if that amuses them, or is necessary for some purpose. Nothing about this situation justifies abandoning the usual idea that probabilities regarding facts about the world are meaningful.
Regarding the cookies, you say “there is no strategy to maximize a “beauty in the moments” overall pleasure”. So once again, I ask: How is Beauty supposed to decide whether to eat a cookie or not?
You mentioned that if our consciousness is quantum-state dependent then creating a clone with indistinguishable memory would be impossible. (Because duplicating someone’s memory would require complete information about his current quantum state, if I understand correctly.) But at the same time you said the Sleeping Beauty experiment is still possible, since memory erasing only requires acting on the quantum state of the person without measuring it in its entirety. But wouldn’t the action’s end goal be to revert the current state to a previous (Sunday night’s) one? That would ultimately require Beauty’s quantum state to have been measured on Sunday night. Unless there is some mechanism to exactly reverse the effect of time on something. But that to me appears even more unrealistic. I do agree that the practical difficulty of the two experiments is different. Cloning with memory does require more advanced technology to carry out. However, I think that does not change how we analyse the experiments or affect the probability calculations. Furthermore, I do not think this difference in technical difficulty means we are too primitive to ponder the cloning example while the Sleeping Beauty problem is fair game.
The reason I bring up the cloning example is that it makes my argument a lot easier to express than using the Sleeping Beauty problem. You think the two problems are significantly different because one may be impossible in theory while the other is definitely feasible. So I felt obligated to show the two problems are similar, especially concerning theoretical feasibility. If you don’t feel theoretical feasibility is crucial to the discussion I’m OK to drop it from here on. One thing I want to point out is that every argument made using the cloning experiment can also be made using the Sleeping Beauty problem. It is just that the expression would be very long-winded and messy.
You mentioned that no matter how we put it, one of the copies is the original while the other is the clone. Again I agree with that. I am not arguing “I am the original” is a meaningless statement. I am arguing “the probability of me being the original” is invalid. And it is not because being the original or the clone makes no difference to the participant, but because in this question the first-person self-explanatory concept of “me” should not be used. From the participant’s first-person perspective, imagine repeating this experiment. You fall asleep, undergo the cloning, and wake up again. After this awakening you can guess again whether you are the original for this new experiment. This process can be repeated as many times as you want. Now we have a series of experiments of which you have first-person subjective experience. However, there is no reason the relative frequency of you being the original in this series of experiments would converge to any particular value.
Of course one could argue the probability must be half because half of the resulting copies are originals and the other half are clones. However, this reasoning is actually thinking from the perspective of an outsider. It treats the resulting copies as entities from the same reference class. So it is in conflict with using the first-person “me” in the question. This reasoning is applicable if the entity in question is singled out among the copies from a third-person perspective, e.g. “the probability of a randomly selected copy being the original”, whereas the process described in the previous paragraph is strictly from the participant’s first-person perspective and in line with the use of the first-person “me”.
Now we can modify the experiment slightly so that the cloning only happens if a coin toss lands on Tails. This way it exactly mirrors the Sleeping Beauty problem. After waking up we can give each of them a cookie that is delicious to the original but painful to the clone. Because, from the first-person perspective, repeating the experiment would not converge to a relative frequency, there is no way to come up with a strategy for deciding whether or not to eat the cookie that will benefit “me” in the long run. In other words, if Beauty’s only concern is the subjective pleasure and pain of the apparent first-person “me”, then probability calculation cannot help her make a choice. Beauty has no rational way of deciding whether or not to eat the cookie.
Regarding cloning, we have very good reason to think that good-enough memory erasure is possible, because this sort of thing happens in reality—we do forget things, and we forget all events after some traumas. Moreover, there are plausible paths to creating a suitable drug. For example, it could be that newly-created memories in the hippocampus are stored in molecular structures that do not have various side-chains that accumulate with time, so a drug that just destroyed the molecules without these side-chains would erase recent memories, but not older ones. Such a drug could very well exist even if consciousness has a quantum aspect to it that would rule out duplication.
I don’t see how your argument that the first person “me” perspective renders probability statements “invalid” can apply to Sleeping Beauty problems without also rendering invalid all uses of probability in practice. When deciding whether or not to undergo some medical procedure, for example, all the information I have about its safety and efficacy is of “third-person” form. So it doesn’t apply to me? That can’t be right.
It also can’t be right that Beauty has “no rational way of deciding to eat the cookie or not”. The experiment is only slightly fantastical, requiring a memory erasing drug that could well be invented tomorrow, without that being surprising. If your theory of rationality can’t handle this situation, there is something seriously wrong with it.
I think while comparing the cloning and Sleeping Beauty problems you are not holding them to the same standard. You said we have good reason to think that “good-enough” memory erasure is possible. By good-enough I think you meant the end result might not be 100% the same as a previous mental state, but the difference is too small for human cognition to notice. So I think when talking about cloning the same leniency should be given, and we shouldn’t insist on an exact quantum copy either. You also suggested that if our mental state is determined by our brain structure at a molecular level then it can easily be reverted, but that cloning would be impossible if our mind is determined by the brain at a quantum level. If our mind is determined at a quantum level, simply reverting the molecular structure would not be enough to recreate a previous mental state either. I feel you are giving the Sleeping Beauty problem an easy pass here.
Why would the use of the first-person “me” render all use of probability invalid? Regarding the risk of a medical procedure, we are talking about an event with different possible outcomes that we cannot reliably predict. Unlike the color-of-the-eyes example you presented earlier, this uncertainty can be well understood from the first-person perspective. For example, when talking about the probability of winning the lottery, you can interpret it from the third-person perspective and say that if everyone in the world enters then only one person would win. But it is also possible to interpret it from the first-person perspective and say that if I buy 7 billion tickets I would have only 1 winning ticket (or if I enter the same lottery 7 billion times I would win only once). They both work. Imagine that while repeating the cloning experiment, after each wake-up you toss a fair coin before going back to sleep for the next repetition of cloning. As the number of repetitions increases, the relative frequency of Heads in the coin tosses experienced by “me” would approach 1⁄2. However, there is no reason the relative frequency of “me” being the original would converge to any value as the number of repetitions increases.
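The coin-toss half of this claim is just the law of large numbers, and can be illustrated with a minimal simulation (a sketch only; the cloning itself plays no role in the coin’s statistics, which is exactly the point):

```python
import random

# Sketch: the relative frequency of Heads among the coin tosses "I"
# experience between repetitions converges to 1/2, regardless of any
# cloning happening between tosses.
random.seed(0)
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
print(heads / n)  # close to 0.5
```

No analogous simulation can be written for “the frequency with which *I* am the original”, which is the asymmetry being claimed here.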
The reason there is no way to decide whether or not to eat the cookie is that the only objective is to maximize the pleasure of the self-explanatory “me”, and the reward is linked to “me” being the original. It is not only that my theory cannot handle the situation; I am arguing the situation is set up in a way no theory could handle. People claiming Beauty can make a rational decision are either changing the objective (e.g. being altruistic towards other copies instead of just the simple self) or not using the first-person “me” (e.g. trying to maximize the pleasure of the person defined by some feature or selection process instead of this self-explanatory me).
Sleeping Beauty with cookies is an almost-realistic situation. I could easily create an analogous situation that is fully realistic (e.g., by modifying my Sailor’s Child problem). Beauty will decide somehow whether or not to eat a cookie. If Beauty has no rational basis for making her decision, then I think she has no rational basis for making any decision. Denial of the existence of rationality is of course a possible position to take, but by its nature it is not one that is profitable to try to discuss rationally.
Beauty can make a rational decision if she changes the objective. Instead of the first-person apparent “I”, if she tries to maximize the utility of a person identifiable from a third-person perspective, then a rational decision can be made. The problem is that in almost all anthropic schools of thought the first-person center is used without discrimination. E.g. in the sleeping beauty problem the new evidence is that I’m awake “today”. The Doomsday Argument considers “my” birth rank. In SIA’s rebuttal to the Doomsday Argument the evidence supporting more observers is that “I” exist. In such logic it doesn’t matter when you read the argument: the “I” in your mind is a different physical person from the “I” in my mind when I read the same argument. Since the “I” or “Now” is defined by the first-person center in their logic, it should be used the same way in the decision making as well. The fact that a rational decision cannot be made while using the self-apparent “I” only shows there is a problem with the objective: using the self-apparent concept of “I” or “Now” indiscriminately in anthropic reasoning is wrong.
Actually in this regard my idea is quite similar to your FNC. Of course there are obvious differences. But I think a discussion of that deserves another thread.
I get the feeling that our discussion here is coming to an end. While we didn’t convince each other, as expected for any anthropic-related discussion, I still feel I have gained something out of it. It forced me to think and express myself more clearly and better structure my argument. I also like to think I have a better understanding of the potential counterarguments. For that I want to express my gratitude.
I’m afraid this makes no sense to me. I think this comes from my not understanding how the concept of a “reference class” can possibly work. So I have no idea what it could mean to “observe the world from the perspective of any human that is male”, if observing from that “perspective” is supposed to change the probability (or render the probability meaningless) of some statement that I would take to be about the actual, real, world.
As I’ve pointed out before, the Sleeping Beauty problem is only barely a thought experiment—with a slight improvement over current memory-affecting drugs, it would be possible to actually run the experiment. It’s not like a thought experiment involving hypothetical computer simulation of people’s brains, or some such, in which one might perhaps think that common sense reasoning is not applicable.
So consider an actual run of the experiment. Suppose that at the time Beauty agrees to take part in the experiment, she fails to remember that she had already agreed to participate in a different experiment on Monday afternoon. The Sleeping Beauty experimenters have promised to pay her $120 if she completes their experiment, while the other experimenters have promised to pay her $120+X, and her motivation is to maximize the expected amount of her earnings. On some awakening during the Sleeping Beauty experiment, Beauty realizes that she had forgotten about the other experiment, and considers leaving to go participate in it. Of course, she then wouldn’t get the $120 for participating in the Sleeping Beauty experiment, but if it’s Monday, she would get the $120+X for participating in the other experiment. Now if it’s Tuesday, the other experiment has already been cancelled. So she needs to consider the probability that it’s Monday in order to make a good decision.
It’s not actually relevant to my point, but here is how it seems to me the probabilities work out. Suppose that Beauty has probability p of remembering the other experiment whenever she awakens, and suppose that this is independent for two awakenings (as is consistent with the assumption that her mental state is reset before a second awakening). To simplify matters, let’s suppose (and suppose that Beauty also supposes) that p is quite small, so the probability of Beauty remembering the other experiment on both awakenings (if two happen) is negligible.
Since p is small, Beauty’s probability for it being Monday given that she has woken and remembered the other experiment should be essentially as usual for this problem, with the answer depending on whether she is a Halfer or a Thirder. (If p were not small, she might need to downgrade the probability of Tuesday because there might be a non-negligible chance that she would have left the experiment on Monday, eliminating the Tuesday awakening.)
If she’s a Thirder, when she wakes and remembers the other experiment, she will consider the probability that it is Monday to be 2⁄3, and will leave for the other experiment if (2/3)(120+X) is greater than 120, that is, if X is greater than 60. If she is a Halfer, it’s harder to say, since Halferism is wrong, but let’s suppose that she splits the 1⁄2 probability of two awakenings equally, and hence thinks the probability of it being Monday is 3⁄4. She will then leave if (3/4)(120+X) is greater than 120, that is, if X is greater than 40. We can also look at things from a frequentist perspective, and ask what her expected payment is if she always decides to leave when she remembers the other experiment. It will be (1-p)120 + p(120+X) conditional on the coin landing Heads, and (1-p)(1-p)120 + p(120+X) conditional on the coin landing Tails, for a total expectation of (1-(3/2)p)120 + p(120+X), ignoring the p-squared term as being negligible. This simplifies to (1-p/2)120 + pX, which is greater than 120 if X is greater than 60, in agreement with the Thirder reasoning.
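For concreteness, the expectation in the frequentist paragraph above can be checked with a small script (a sketch using the payoffs as given; `expected_payment` is just an illustrative name):

```python
# Expected payment under the strategy "leave for the other experiment
# whenever you remember it". p = probability of remembering on any
# given awakening; X = extra payment offered by the other experiment.

def expected_payment(p, X):
    # Heads (one awakening): remember with probability p and leave for
    # 120+X; otherwise complete Sleeping Beauty and earn 120.
    heads = (1 - p) * 120 + p * (120 + X)
    # Tails (two awakenings): remembering on Monday (prob p) earns
    # 120+X; remembering only on Tuesday earns nothing, since the other
    # experiment has been cancelled; remembering on neither day
    # (prob (1-p)^2) earns the full 120.
    tails = p * (120 + X) + (1 - p) ** 2 * 120
    return 0.5 * heads + 0.5 * tails

# With p small, leaving beats the guaranteed 120 just when X > 60,
# matching the Thirder reasoning (the exact break-even, keeping the
# p-squared term, is X = 60(1-p)).
print(expected_payment(0.01, 70))  # slightly above 120
print(expected_payment(0.01, 50))  # slightly below 120
```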
In any case, though, if I’ve understood you correctly, you deny that there is any meaning to “the probability that it’s Monday” in this situation. So how is Beauty supposed to decide what to do?
I do find some of the ideas related to anthropic reasoning hard to express. Let me try another formulation and see if it is better. “The probability of me being a man” in the anthropic sense means the probability of me being born into this world as a human male. Or it can be seen as the probability of my soul being embodied as a human male. This is what I meant by “experiencing the world from the perspective of a man”. I think that even though “I’m a man” is a valid statement, “the probability of me being a man” does not exist. It might be tempting to say the probability is simply 1⁄2. But that implies I can only be a man or a woman, and that the probability of me being anything else, such as a chimpanzee or an alien, is zero. There is no basis for that. Further problems involve the possibility of me not being born into this world at all. Trying to assign a value to this probability is impossible, because to construct a sample space a reference class containing “me” is needed. However, from the first-person perspective this “me” is defined by the perspective center. It is inherently unique, i.e. there is nothing else in its reference class. So a first-person identity cannot be used in such questions. Someone has to be identified from a third-person perspective instead. Because from a third-person perspective no one is inherently special, to specify someone would involve a process to single them out. Information about this process would determine the reference class. This is why both SSA and SIA argue the first-person “me” is conceptually equivalent to someone randomly selected from a certain group: that way they get a reference class, which subsequently allows them to assign a value to such probabilities. However, this equivalency means mixed reasoning from the two perspectives. Using the first-person “me” interchangeably with the proposed third-person identity leads to other anthropic paradoxes.
I do agree that the sleeping beauty experiment is physically possible. For the experiment to take place the memory wipe doesn’t even need to be perfect. As long as it is accurate enough to fool the human mind it will work. Since the human mind can only contain a finite amount of information, nothing in theory denies its feasibility. I also appreciate that you present your argument with experiments and numbers. I find discussing clearly defined examples much easier. Allow me to explain our differences.
First of all, I agree with the calculations (with the exception of the frequentist repetitions, which I discuss in Section 4). Secondly, I also agree that maximizing someone’s own earnings would force the decisions to reflect the probability. Our difference is regarding how the reward should be handled. The whole reason the sleeping beauty problem is related to anthropic reasoning is that it involves an observer duplication. That is, with the memory wipe, Monday Beauty and Tuesday Beauty are two separate entities (at least from their own perspectives). So they should have distinct rewards. Monday Beauty’s correct decision should benefit Monday Beauty alone, and Tuesday Beauty’s wrong decision should punish Tuesday Beauty only. In the example you presented that is not the case. The potential reward is always given to whoever exists at the end of the experiment. In this kind of setup, even when Beauty experiences the status reset, her rewards never do. It is as if her money doesn’t participate in the duplicating experiment as she does. I disagree with this discrepancy. In this question Beauty is no longer trying to maximize the reward to the obvious first-person “me” but to maximize the accumulated reward at the end of the experiment. This shift in objective means that instead of strategizing directly from the first-person perspective, Beauty should strategize from the perspective of a non-participant, i.e. a third person. So instead of using the first-person center to define a self-explanatory “today”, the specific day in question is defined in the third person. For example, it is calculating the probability that the day Beauty remembers the experiment is a Monday.
Constructing a sleeping beauty experiment with appropriate rewards and repetitions is quite troublesome (for example, questions such as “if you do not have a chance to spend the money, is it still a reward?” come into play). I want to present my argument with a different but also physically feasible experiment. Suppose that when you go to sleep tonight a clone of you is created and put into an identical room. The clone is highly accurate: its memory is good enough that he fully believes he is the one who fell asleep yesterday. After waking up in the experiment you ask yourself “what is the probability that I am the original?”. My position is that there is no such probability. Notice the “I” here is the self-explanatory “I” from the first-person perspective. If a third-person specification is used instead, e.g. “the probability of a randomly chosen copy being the original”, then there is no mix of perspectives and it is obviously valid. Now suppose that if you guess correctly a reward of $1 is given to you, and the experiment is repeated many times. I.e., when you fall asleep again on the second day another clone is created; the same happens the third day, etc. During this process your money is cloned along with you. Every day you get a chance to guess whether you are the original with respect to the previous night. Here, to earn the maximum reward for yourself, the only consideration is to guess correctly. I argue there is no strategy for that objective. Again, notice I’m not trying to come up with a strategy that would maximize the total or average money owned by all copies. The objective is much more direct: to maximize the money owned by me, myself.
“The probability of me being a man” in the anthropic sense means the probability of me being born into this world as a human male. Or it can be seen as the probability of my soul being embodied as a human male. … even though “I’m a man” is a valid statement, “the probability of me being a man” does not exist
Here, you have imported some highly questionable ideas, which would seem to be not at all necessary for analysing the Sleeping Beauty problem. This is my core objection to how Sleeping Beauty is used—it’s an almost-non-fantastical problem that people take to have implications for these sorts of anthropic arguments, but when correctly analysed, it does not actually say anything about such issues, except in a negative sense of revealing some arguments to be invalid.
You should also note that your use of “probability” here does not correspond to any use of this word in normal reasoning. To see this, consider “the probability of my having blue eyes”. I take this to be in the same class as “the probability of me being a man”, but it allows for less-ridiculous thought experiments. Suppose you are a member of an isolated desert tribe. There are no mirrors, and no pools of water in which you could see your reflection. The tribe also has a strong taboo against telling anyone what colour their eyes are. So you don’t know what colour your eyes are. Do you maintain that “the probability that my eyes are blue” does not exist? Can’t you look at the other members of the tribe, see what fraction have blue eyes, and take that as the probability that you have blue eyes? Note that this may have practical implications regarding how much care you should take to avoid sun exposure, to reduce your chance of developing glaucoma.
I assume that you do think “the probability that my eyes are blue” is meaningful in this scenario. You seem to have in mind only something like prior probabilities, not conditional on any observations. But all actual practical uses of probability are conditional on observations, so your discussion is reminiscent of the proverbial question of “how many angels can dance on the head of a pin?”.
I also agree that maximizing someone’s own earnings would force the decisions to reflect the probability.
I’m not sure what exactly you’re agreeing about here. Do you maintain that “the probability that it is Monday” does not exist, until Beauty happens to remember the other experiment, at which point it suddenly becomes meaningful? If so, why can’t Beauty just imagine that there is some such practical reason to want to know whether it is Monday, calculate what the probability is, and then take that to be the probability of it being Monday even though she doesn’t actually need to make a decision for which that probability would be needed? Seems better than claiming that the probability doesn’t exist, even though this procedure gives it a well-defined value...
The whole reason the sleeping beauty problem is related to anthropic reasoning is that it involves an observer duplication. … So they should have distinct rewards. Monday Beauty’s correct decision should benefit Monday Beauty alone...
There’s a methodological issue here. I’ve presented a variation on Sleeping Beauty that I claim shows that “the probability that it’s Monday” has to be a meaningful concept for Beauty. You say, “but if I look at a different variation, that argument doesn’t go through.” Why should that matter, though? If my variation shows that the probability is meaningful, that should be enough. If this shows that Sleeping Beauty is not related to anthropic reasoning, so be it.
However, there’s no problem making the reward be for “Beauty in the moment”. Suppose that when Beauty wakes up, she sees a plate of cookies. She recognizes them as being freshly baked by a bakery she knows. She also knows that on Mondays, but not Tuesdays, they put an ingredient in the cookies to which she is mildly allergic, causing immediate, painful stomach cramps. She also knows that the cookies are quite delicious. Should she eat a cookie? Adjust the magnitudes of possible pleasure and pain as desired to make the question interesting. Shouldn’t the probability of it being Monday be meaningful?
Suppose that when you go to sleep tonight a clone of you is created and put into an identical room. The clone is highly accurate: its memory is good enough that he fully believes he is the one who fell asleep yesterday.
Note that this is now a completely fantastical thought experiment, in contrast to the usual Sleeping Beauty problem. It may be impossible in principle, given the quantum no-cloning theorem. I also don’t know how this is supposed to work in conjunction with your previous reference to “souls”. I don’t think this extreme variation actually shows anything interesting, but if it did, you’d need to ask yourself whether the need to resort to this fantasy indicates that you’re in “angels dancing on the head of a pin” territory.
Thank you for the speedy reply, professor. I was worried that with my slow response you might have lost interest in the discussion.
Forgive me for not discussing the issues in the order you presented them. But I feel the most important argument I want to challenge is that the sleeping beauty problem is physically possible while the cloning experiments are strictly fantastical.
In the cloning experiment the goal is not to make a physically exact copy of someone but to make the copy accurate enough that a human could not differentiate. This is no different from the sleeping beauty problem. Considering the limitations of human cognitive ability and memory, this doesn’t remotely require an exact quantum state copy. Unless you take the position that human memory is so sophisticated that it is quantum state dependent. But then it means that to revert Beauty’s memory to an earlier state would require her brain to change back to a previous quantum state. Complete information about that quantum state cannot be obtained unless she is destroyed at the time. I.e., sleeping beauty would run against the no-cloning theorem and thus be non-feasible as well. Apart from memory there is also the problem of physical differences. It is understood that during the first day Beauty would inevitably undergo some physical changes: her hair may grow, her skin ages a tiny bit, etc. This is not considered a problem for the experiment because it is understood that humans cannot pick up on such minuscule details. So even with these physical changes Beauty would still think it is her first awakening after waking up the second time. The same principle applies to the cloning example. As long as the copy is physically close enough for human cognitive ability not to notice any differences, the clone would believe he is the original. In summary, if the sleeping beauty problem is physically possible, the cloning example must be as well. In this problem, after waking up in the experiment, the “probability of me being the original” makes no sense. Even if you consider repeating the experiment many times there is no answer to it. Again, it is referring to the primitively understood first-person “me”, not someone identified from a third-person perspective, as in “the probability of a randomly selected copy being the original.”
As for the question of my soul getting incarnated into somebody: this is not my idea. Various anthropic schools of thought lead to such an expression. For example, in the Doomsday Argument’s prior probability calculation, SSA argues I am equally likely to be born as any human who ever exists. SIA adds on top of it and suggests I am more likely to be born into this world if more humans ever exist. They both closely resemble the idea of a soul being embodied from a pool of candidates. I mentioned this expression because it neatly describes what “the probability of me being a man” refers to in the anthropic context. And let’s not lose the big picture here: I am arguing such probabilities do not make sense. So I completely agree with you that using soul embodiment as an experiment to assign probability is highly questionable. In fact I am arguing such notions are outright wrong.
Regarding the case of eye color, quite clearly we are not discussing anything resembling the above idea. By surveying other people in the tribe I would know what percentage of the tribesmen have blue eyes. If we say that percentage is the probability of someone having blue eyes, then there is an underlying assumption that this someone is an ordinary member of the tribe; he is not special. This is against the first-person perspective, where I am inherently different from anybody else. Meaning this person is identified among the tribesmen from a third-person perspective. Therefore that percentage is not the probability that the first-person “I” has blue eyes, but rather the probability that a randomly selected tribesman has blue eyes. An optimal level of sun-exposure avoidance can be derived from that survey number. However, it cannot be said that the strategy is optimal for myself. All we know is that if every tribesman follows this strategy then it would be optimal for the tribe as a whole.
I think that by using a betting argument there is an underlying assumption that someone trying to maximize her own earnings would follow a strategy determined by the correct probability. This I agree with. However, that is when the decision maker and the reward receiver are the same person. That is to say, if Beauty is contemplating the probability of “today” being Monday, then the reward for a correct guess should be given to today’s Beauty. That’s what I meant by Monday Beauty’s correct decision should reward Monday Beauty alone. In the setup you presented that is not the case. In your setup the objective is to maximize the accumulated earnings. For this objective the concept of a self-explanatory “today” is never used. So the calculation is not reflecting the probability of “today being Monday”, but rather the probability that “the day Beauty remembers the previous experiment is Monday”. Essentially it has the same problem as the eye color example: the first-person center concept of “today” is switched to a third-person identity. If we go back to the cloning experiment, you are arguing that after waking up “the probability of a randomly selected copy being the original” is valid and meaningful. I agree with this. I am arguing that the first-person-centered “probability of ‘me’ being the original” does not exist.
For the cookie experiment, yes, the painful reaction and delicious bliss are of course meaningful. But that only means “today is Monday” and “today is Tuesday” are both meaningful to her, which I never argued against. However, if a probability of “today is Monday” exists, then there should be an optimal strategy for “Beauty in the moment” to maximize her pleasure. Notice that strategies exist to maximize the pleasure throughout the two-day experiment. Strategies also exist to optimize the pleasure of the Beauty who exists at the end of the experiment. But there is no strategy to maximize the pleasure of this self-apparent “Beauty in the moment”. We can even repeat the experiment for this “today’s Beauty”. Let her sleep now and enter another round of the sleeping beauty experiment. Instead of the two potential awakenings being 24 hours apart, this time they are 12 hours apart. So this new experiment fits into one day, and Beauty would not experience the memory wipe from the original experiment. (Here I’m assuming the actual awakening and interviewing take no time, for ease of expression.) Again, in the first awakening she would be given allergic cookies and in the second awakening good cookies. When she wakes up she would be facing the same choice again. We can repeat the experiment further, with later iterations’ awakenings closer and closer together. But there is no strategy to maximize a “Beauty in the moment’s” overall pleasure. (This shows why I want to use the cloning example: repeating the sleeping beauty experiment from Beauty’s first-person perspective is very messy, and questions such as “if the pain is completely forgotten does it still matter” come into play.)
I don’t think the issue of whether “cloning” is possible is actually crucial for this discussion, but since this relates to a common lesswrong sort of assumption, I’ll elaborate on it. I do think that making a sufficiently accurate copy is probably possible in principle (but obviously not now, and perhaps never, in practice). However, I don’t think this has been established. It seems conceivable that quantum effects are crucial to consciousness—certainly physics gives us no fundamental reason to rule this out. If this is true, then “cloning” (not the usual use of the word) by measuring the state of someone’s body and then constructing a duplicate will not work—the measurement will not be adequate to produce a good copy. This possibility is compatible with there being some very good memory-erasing drug, which need only act on the quantum state of the person in a suitable way, without “measuring” it in its entirety. So I don’t agree with your statement that “if the sleeping beauty problem is physically possible, the cloning example must be as well”. And even if true in principle, there is a vast difference in practice between developing a slightly better amnesia drug—I wouldn’t be surprised if this was done tomorrow—and developing a way of measuring the state of someone’s brain accurately enough to produce a replica, and then also developing a way of constructing such a replica from the measurement data. My best guess is that this will never be possible.
This practical difference relates to a different sense in which your cloning example is “fantastic”. Even if we were sure that it was possible in principle to “clone” people, we should not be sure that the methods of reasoning that we have evolved (biologically and culturally) will be adequate in a situation where this sort of thing happens. It would be like asking a human from 100,000 years ago to speculate on the social effects of Twitter. With social experience confined to a tribe of a dozen closely-related people, with occasional interactions with a few other tribes, not to mention a total ignorance of the relevant technology, they would be utterly incompetent to reason about how billions of people will interact when reacting to online text and video postings.
In this discussion, I get the impression that considering fantastical things like cloning leads you to discard common-sense realism. Uncritically applying our current common-sense notions might indeed be invalid in a world where you can be duplicated—with the duplicate having perhaps first had its memories maliciously edited. There are lots of interesting, and difficult, issues here. But these are not issues that need to be settled in order to settle the Sleeping Beauty problem!
In your cloning example, you abandon common sense realism for no good reason. Since you talk about an original versus the clone, I take it that you see the experimenters as measuring the state of the original, without substantially disturbing it, and then creating a copy (as opposed to using a destructive measurement process, and then creating two copies, since then there is obviously no “original” left). In this situation, the distinction between the original and the copy is completely clear to any observer of the process. When they wake up, both the copy and the original do not know whether or not they are the original, but nevertheless one is the original and one is not. They can find out simply by asking the observer (and of course there are other possible ways—as is true for any fact about the world). Before they find out, they can assess the probability that they are the original, if that amuses them, or is necessary for some purpose. Nothing about this situation justifies abandoning the usual idea that probabilities regarding facts about the world are meaningful.
Regarding the cookies, you say there is no strategy to maximize a “Beauty in the moment’s” overall pleasure. So once again, I ask: How is Beauty supposed to decide whether to eat a cookie or not?
You mentioned that if our consciousness is quantum state dependent then creating a clone with indistinguishable memory would be impossible (because duplicating someone’s memory would require complete information about his current quantum state, if I understand correctly). But at the same time you said the sleeping beauty experiment is still possible, since memory erasing only requires acting on the quantum state of the person without measuring it in its entirety. But wouldn’t the action’s end goal be to revert the current state to a previous (Sunday night’s) one? It would ultimately require Beauty’s quantum state to be measured on Sunday night. Unless there is some mechanism to exactly reverse the effect of time on something, but that to me appears even more unrealistic. I do agree that the practical difficulty of the two experiments is different. Cloning with memory does require more advanced technology to carry out. However, I think that does not change how we analyze the experiments or affect probability calculations. Furthermore, I do not think this difference in technical difficulty means we are too primitive to ponder the cloning example while the sleeping beauty problem is fair game.
The reason I bring up the cloning example is that it makes my argument a lot easier to express than using the sleeping beauty problem. You think the two problems are significantly different because one may be impossible in theory while the other is definitely feasible. So I felt obligated to show the two problems are similar, especially concerning theoretical feasibility. If you don’t feel theoretical feasibility is crucial to the discussion I’m ok to drop it from here on. One thing I want to point out is that every argument made using the cloning experiment can be made using the sleeping beauty problem. It is just that the expression would be very longwinded and messy.
You mentioned that no matter how we put it, one of the copies is the original while the other is the clone. Again, I agree with that. I am not arguing “I am the original” is a meaningless statement. I am arguing “the probability of me being the original” is invalid. And it is not because being the original or the clone makes no difference to the participant, but because in this question the first-person self-explanatory concept of “me” should not be used. From the participant’s first-person perspective, imagine repeating this experiment. You fall asleep, undergo the cloning, and wake up again. After this awakening you can guess again whether you are the original for this new experiment. This process can be repeated as many times as you want. Now we have a series of experiments of which you have first-person subjective experience. However, there is no reason the relative frequency of you being the original in this series of experiments would converge to any particular value.
Of course, one could argue the probability must be half because half of the resulting copies are originals and the other half are clones. However, this reasoning is actually thinking from the perspective of an outsider: it treats the resulting copies as entities from the same reference class, so it is in conflict with the use of the first-person “me” in the question. This reasoning is applicable if the entity in question is singled out among the copies from a third-person perspective, e.g. “the probability of a randomly selected copy being the original.” The process described in the previous paragraph, by contrast, is strictly from the participant’s first-person perspective and in line with the use of the first-person “me”.
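To make the third-person reading concrete, here is a minimal simulation sketch (the run count and labels are illustrative, not part of the original argument): an outsider selects one of the two resulting copies uniformly at random on each run, and the selected copy turns out to be the original about half the time.

```python
import random

random.seed(1)

# Third-person reading: each run of the experiment yields two copies,
# one original and one clone. An outsider selects one uniformly at
# random; the selected copy is the original roughly half the time.
n = 100_000
picked_original = sum(
    random.choice(["original", "clone"]) == "original"
    for _ in range(n)
)
print(round(picked_original / n, 3))  # close to 0.5
```

Note that this simulation only formalizes the outsider’s selection process; it says nothing about the first-person “me”, which is exactly the distinction drawn above.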
Now we can modify the experiment slightly so that the cloning only happens if a coin toss lands on Tails. This way it exactly mirrors the Sleeping Beauty problem. After each awakening we can give each copy a cookie that is delicious to the original but painful to the clone. Because, from the first-person perspective, repeated experiments would not converge to a relative frequency, there is no way to come up with a strategy for deciding whether or not to eat the cookie that will benefit “me” in the long run. In other words, if Beauty’s only concern is the subjective pleasure and pain of the apparent first-person “me”, then probability calculations cannot help her make a choice. Beauty has no rational way of deciding whether or not to eat the cookie.
Regarding cloning, we have very good reason to think that good-enough memory erasure is possible, because this sort of thing happens in reality—we do forget things, and we forget all events after some traumas. Moreover, there are plausible paths to creating a suitable drug. For example, it could be that newly-created memories in the hippocampus are stored in molecular structures that do not have various side-chains that accumulate with time, so a drug that just destroyed the molecules without these side-chains would erase recent memories, but not older ones. Such a drug could very well exist even if consciousness has a quantum aspect to it that would rule out duplication.
I don’t see how your argument that the first person “me” perspective renders probability statements “invalid” can apply to Sleeping Beauty problems without also rendering invalid all uses of probability in practice. When deciding whether or not to undergo some medical procedure, for example, all the information I have about its safety and efficacy is of “third-person” form. So it doesn’t apply to me? That can’t be right.
It also can’t be right that Beauty has “no rational way of deciding to eat the cookie or not”. The experiment is only slightly fantastical, requiring a memory-erasing drug that could well be invented tomorrow without that being surprising. If your theory of rationality can’t handle this situation, there is something seriously wrong with it.
I think that in comparing cloning and the Sleeping Beauty problem you are not holding them to the same standard. You said we have good reason to think “good-enough” memory erasure is possible. By good-enough I take you to mean that the end result might not be 100% identical to a previous mental state, but the difference is too small for human cognition to notice. I think the same leniency should be given when talking about cloning: we shouldn’t insist on an exact quantum copy either. You also suggested that if our mental state is determined by our brain structure at the molecular level, then it can easily be reverted—but then suggested that cloning would be impossible if our mind is determined by the brain at the quantum level. If our mind is determined at the quantum level, simply reverting the molecular structure would not be enough to recreate a previous mental state either. I feel you are giving the Sleeping Beauty problem an easy pass here.
Why would the use of the first-person “me” render all use of probability invalid? Regarding the risk of a medical procedure, we are talking about an event with different possible outcomes that we cannot reliably predict. Unlike the eye-color example you presented earlier, this uncertainty can be well understood from the first-person perspective. For example, when talking about the probability of winning the lottery, you can interpret it from the third-person perspective and say that if everyone in the world enters, then only one person would win. But it is also possible to interpret it from the first-person perspective and say that if I buy 7 billion tickets, I would have only one winning ticket (or if I enter the same lottery 7 billion times, I would win only once). They both work. Imagine that while repeating the cloning experiment, after each awakening you toss a fair coin before going back to sleep for the next repetition of the cloning. As the number of repetitions increases, the relative frequency of Heads among the coin tosses experienced by “me” would approach 1⁄2. However, there is no reason the relative frequency of “me” being the original should converge to any value as the number of repetitions increases.
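The converging half of this contrast is easy to check numerically. Below is a minimal sketch (repetition count is illustrative) of the first-person coin-toss sequence: one fair toss per repetition, with the running relative frequency of Heads approaching 1⁄2. The non-converging half of the contrast—“me” being the original—has, on the argument above, no analogous well-defined quantity to tally, which is precisely the point.

```python
import random

random.seed(0)

# One fair coin toss per repetition of the experiment, as experienced
# from the first-person perspective. The relative frequency of Heads
# across repetitions converges toward 1/2.
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
print(round(heads / n, 3))  # close to 0.5
```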
The reason there is no way to decide whether or not to eat the cookie is that the only objective is to maximize the pleasure of the self-explanatory “me”, and the reward is linked to “me” being the original. It is not just that my theory cannot handle the situation; I am arguing the situation is set up in a way that no theory could handle. People claiming Beauty can make a rational decision are either changing the objective (e.g. being altruistic towards the other copies instead of just the simple self) or not using the first-person “me” (e.g. trying to maximize the pleasure of a person defined by some feature or selection process instead of this self-explanatory “me”).
Sleeping Beauty with cookies is an almost-realistic situation. I could easily create an analogous situation that is fully realistic (e.g., by modifying my Sailor’s Child problem). Beauty will decide somehow whether or not to eat a cookie. If Beauty has no rational basis for making her decision, then I think she has no rational basis for making any decision. Denial of the existence of rationality is of course a possible position to take, but it’s a position that by its nature is one that it is not profitable to try to discuss rationally.
Beauty can make a rational decision if she changes the objective. If, instead of the first-person apparent “I”, she tries to maximize the utility of a person distinguishable from a third-person perspective, then a rational decision can be made. The problem is that almost all anthropic schools of thought use the first-person center without discrimination. E.g., in the Sleeping Beauty problem the new evidence is that I’m awake “today”; the Doomsday Argument considers “my” birth rank; in SIA’s rebuttal to the Doomsday Argument, the evidence supporting more observers is that “I” exist. In such logic it doesn’t matter when you read the argument: the “I” in your mind is a different physical person from the “I” in my mind when I read the same argument. Since the “I” or “now” is defined by the first-person center in their logic, it should be used the same way in the decision making as well. The fact that a rational decision cannot be made while using the self-apparent “I” only shows there is a problem with the objective—that using the self-apparent concept of “I” or “now” indiscriminately in anthropic reasoning is wrong.
Actually, in this regard my idea is quite similar to your FNC. Of course there are obvious differences, but I think a discussion of those deserves another thread.
I get the feeling that our discussion here is coming to an end. While we didn’t convince each other, as expected for any anthropic-related discussion, I still feel I have gained something from it. It forced me to think and express myself more clearly and to better structure my arguments. I also like to think I have a better understanding of the potential counterarguments. For that I want to express my gratitude.