That’s not it. In your simulation you give equal chances to Heads and Tails, then subdivide Tails into two equiprobable outcomes T1 and T2 while keeping all of the probability of Heads on H1. It’s essentially a simulation based on SSA. Thirders would say that is the wrong model because it only considers cases where the room is occupied: H2 never appears in your model. Thirders suggest there is new information upon waking up in the experiment precisely because it rules out H2. So the simulation should divide both Heads and Tails into the equiprobable outcomes H1, H2, T1, and T2. Waking up rules out H2, which pushes P(T) to 2⁄3; then learning it is room 1 pushes it back down to 1⁄2.
To thirders, your simulation is incomplete. It should first include randomly choosing a room and finding it occupied. That will push the probability of Tails to 2⁄3. Knowing it is room 1 will push it back to 1⁄2.
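For concreteness, here is a minimal Monte Carlo sketch of that thirder model (my own illustration, assuming the usual incubator setup where Heads leaves only room 1 occupied and Tails occupies both rooms):

```python
import random

# A sketch of the simulation thirders would ask for (assumption: Heads -> only
# room 1 occupied, Tails -> rooms 1 and 2 occupied). The outcomes H1, H2, T1, T2
# each have probability 1/4: toss the coin, then pick a room at random from a
# god's-eye view.

N = 1_000_000
tails_given_occupied = occupied = 0
tails_given_room1 = room1_occupied = 0

for _ in range(N):
    coin = random.choice(["H", "T"])
    room = random.choice([1, 2])                 # H1 / H2 / T1 / T2, each 1/4
    is_occupied = (coin == "T") or (room == 1)   # room 2 is empty under Heads (H2)

    if is_occupied:                              # "waking up" rules out H2
        occupied += 1
        tails_given_occupied += (coin == "T")
        if room == 1:                            # then learning it is room 1
            room1_occupied += 1
            tails_given_room1 += (coin == "T")

print("P(T | occupied)        ~", tails_given_occupied / occupied)      # ~2/3
print("P(T | occupied, room1) ~", tails_given_room1 / room1_occupied)   # ~1/2
```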
One thing that should be noted: while Adam’s argument is influential, especially since it was (to my knowledge) the first to point out that halfers must either reject Bayesian updating upon learning it is Monday or accept that a fair coin yet to be tossed has a probability other than 1⁄2, thirders in general disagree with it in some crucial ways. Most notably, Adam argued that there is no new information when waking up in the experiment. In contrast, most thirders, endorsing some version of SIA, would say waking up in the experiment is evidence favouring Tails, which has more awakenings. Therefore targeting Adam’s argument specifically is not very effective.
In your incubator experiment, thirders in general would find no problem: waking up is evidence favouring Tails, so P(T) = 2/3; finding that it is room 1 is evidence favouring Heads, so P(T) drops back to 1⁄2.
Here is a model that might interest halfers. You participate in this experiment: the experimenter tosses a fair coin. If Heads, nothing happens and you sleep through the night uneventfully. If Tails, they split you down the middle into two halves and complete each half by cloning the missing part onto it. The procedure is accurate enough that memory is preserved in both copies. Imagine yourself waking up the next morning: you can’t tell whether anything happened to you, whether either of your halves is the same physical piece as yesterday, or whether there is another physical copy in another room. Regardless, you can participate in the same experiment again, and again when you find yourself waking up the next day, and so on. As this continues, you will count about an equal number of Heads and Tails among the experiments you have subjective experiences of.
Counting subjective experience does not necessarily lead to Thirderism.
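A small sketch of that tally (my own illustration; which copy’s thread we follow after a split is arbitrary, since every run’s coin is fair), contrasted with a count over every copy’s awakening:

```python
import random

# Follow a single thread of experience through repeated runs of the fission
# experiment and tally the coins it remembers, versus counting every copy's
# awakening separately (assumption: Tails produces two awakenings per run).

N_RUNS = 100_000
thread_heads = thread_tails = 0                       # one continuous thread
awakenings_after_heads = awakenings_after_tails = 0   # every copy counted

for _ in range(N_RUNS):
    coin = random.choice(["H", "T"])
    if coin == "H":
        thread_heads += 1
        awakenings_after_heads += 1      # one person wakes up
    else:
        thread_tails += 1                # whichever half you follow saw this Tails
        awakenings_after_tails += 2      # two copies wake up

print("Tails fraction along one thread:",
      thread_tails / N_RUNS)                                                 # ~1/2
print("Tails fraction over all awakenings:",
      awakenings_after_tails / (awakenings_after_heads + awakenings_after_tails))  # ~2/3
```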
I would also point out that FNC is not strictly a view-from-nowhere theory. The probability updates it proposes are still based on an implicit assumption of self-sampling.
I really don’t like the pragmatic argument against the simulation hypothesis. It demonstrates a common theme in anthropics which IMO misleads the majority of discussions. By saying that pre-simulation ancestors affect how the singularity plays out, and that we therefore ought to make decisions as if we are real pre-simulation people, it subtly shifts the objective of our decisions. Instead of the default objective of maximizing reward to ourselves, doing what’s best for us in our world, it changes the objective to achieving a certain state of the universe across all the worlds, real and simulated.
These two objectives do not necessarily coincide. They may even demand conflicting decisions. Yet it is very common for people to argue that self-locating uncertainty ought to be treated a certain way because doing so would result in rational decisions under the latter objective.
Exactly this. The problem with the current anthropic schools of thought is that they use this view-from-nowhere while simultaneously using the concept of “self” as a meaningful way of specifying a particular observer. They effectively jump back and forth between the god’s-eye and first-person views, with arbitrary assumptions to facilitate the transitions (e.g. treating the self as the random sample of a certain process carried out from the god’s-eye view). Treating the self as a given starting point and then reasoning about the world would be the way to dispel the anthropic controversies.
Let’s take the AI driving problem in your paper as an example. The better strategy is regarded as the one that gives the better overall reward across all drivers. Whether the rewards of the two instances of a bad driver should be counted cumulatively or just once is what divides halfers and thirders. Once that is determined, the optimal decision can be calculated from the relative fractions of good/bad drivers/instances. It doesn’t involve taking the AI’s perspective in a particular instance and deciding what is best for that particular instance, which would require self-locating probability. The “right decision” is justified by averaging over all drivers/instances, which does not depend on the particularity of self and now.
Self-locating probability would be useful for decision-making if the decision were evaluated by its effect on the self, not by the collective effect on a reference class. But no rational strategy exists for that goal.
If you were born a month earlier as a preemie instead of full-term, it can quite convincingly be said that you are still the same person. But if you were born a year earlier, would you still be the same person you are now? There would obviously be substantial physical differences: a different sperm and egg, maybe a different gender. If you were one of the first few human beings ever born, there would be few similarities between the physical person that is you in that case and the physical person you are now. So the birth-rank discussion is not about whether this physical person you regard as yourself is born slightly earlier or later, but about which one, among all the people in the entire history of humanity, is you, i.e. from which of those persons’ perspectives you experience the world.
The anthropic problem is not about possible worlds but about centered worlds. Different events in anthropic problems can correspond to the exact same possible world while differing in the perspective from which you experience it. This circles back to point 1, and to the decoupling between the first-person “I” and the particular physical person.
When you say the time of your birth is not special, you are already trying to judge it objectively. For you personally, the moment of your birth is special. And more relevantly to the DA, from a first-person perspective, the moment “now” is special.
From an objective viewpoint, discussing a specific observer or a specific moment requires some explanation, some process pointing to it, e.g. a sampling process. Otherwise it fails to be objective by inherently focusing on someone/sometime.
From a first-person perspective, discussions based on “I” and “now” don’t require such an explanation. They are inherently understandable. The future is just the moments after “now”, and predictions about it ought to be based on my knowledge of the present and the past.
What the Doomsday Argument says is that the fact “I am this person” (living now) shall be treated the same way as if someone from the objective viewpoint in point 1 performed a random sampling and found me (now). The two cases are supposed to be logically equivalent, so the two viewpoints can say the same thing. I’m saying let’s not make that assumption. And in this case, the objective viewpoint cannot say the same thing as the first-person perspective. So we can’t switch perspectives here.
I didn’t explicitly claim so. But it involves reasoning from a perspective that is impartial to any moment. This independence manifests in its core assumption: that one should regard oneself as randomly selected from all observers in one’s reference class, from past, present and future.
if you get a reward for guessing if your number is >5 correctly, then you should guess that your number is >5 every time.
I am a little unsure about your meaning here. Say you get a reward for correctly guessing whether your number is <5; would you also guess that your number is <5 each time?
I’m guessing that is not what you mean, but instead that you are thinking that as the experiment is repeated more and more, the relative frequency with which you find your own number >5 would approach 95%. What I am saying is that this belief requires an assumption treating the “I” as a random sample. For the non-anthropic problem, it doesn’t.
For the non-anthropic problem, why take the detour of asking a different person each toss? You can personally take it 100 times, and since it’s a fair die, it would land above 5 around 95 times. Obviously guessing yes is the best strategy for maximizing your personal interest. There is no assuming the “I” as a random sample, or making forced transcodings.
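A quick check of that non-anthropic frequency, assuming the die is the 100-sided one implied by the 1-to-100 numbering (an assumption on my part):

```python
import random

# Roll the (assumed 100-sided) fair die 100 times personally and count how often
# the result is greater than 5; repeat the whole experiment to see the average.

def run_once(rolls=100):
    return sum(random.randint(1, 100) > 5 for _ in range(rolls))

trials = [run_once() for _ in range(10_000)]
print("average count of rolls > 5 per 100 rolls:", sum(trials) / len(trials))  # ~95
```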
Let me construct a repeatable anthropic problem. Suppose tonight, during your sleep, you will be accurately cloned with memory preserved. Waking up the next morning, you may find yourself to be the original or one of the newly created clones. Let’s label the original No.1 and the 99 new clones No.2 to No.100 in the chronological order of their creation. Whether you are the original or a clone, you can repeat this experiment. Say you take the experiment repeatedly: wake up, fall asleep, and let the cloning happen again each time. Every day you wake up, you find your own number. If you do this 100 times, would you say you ought to find your number >5 about 95 times?
My argument says there is no way to say that. Doing so would require assumptions to the effect that your soul has an equal chance of embodying each physical copy, i.e. that “I” am a random sample from the group.
For the non-anthropic problem, you can use the 100-people version as a justification, because among those people the die tosser choosing you to answer a question is an actual sampling process. It is reasonable to think that in this process you are treated the same way as everyone else, e.g. that the experiment didn’t specifically sample you only for a certain number. But there is no sampling process determining which person you are in the anthropic version, let alone one we can assume treats you indifferently among all souls, or treats each physical body indifferently in your embodiment.
Also, the fact that people who believe the Doomsday Argument objectively perform better as a group in your thought experiment is not a particularly strong case. Thirders have also constructed many thought experiments where supporters of the Doomsday Argument (halfers) would objectively perform worse as a group. But that is not my argument. I’m saying the collective performance of a group one belongs to is not a direct substitute for self-interest.
Thank you for the kind words. I understand the stance about self-locating probability. That’s the part where I get the most disagreement.
To me the difference is that for the unfair coin, you can treat the reference class as all tosses of unfair coins whose biases you don’t know. Then the symmetry between Heads and Tails holds, and you can say that among such tosses the relative frequency would be 50%. But for the self-locating probabilities in the fission problem, there really is nothing pointing to any number. That is, unless we take the average of all agents and discard the “self”. That requires taking the immaterial viewpoint and transcoding “I” by some assumption.
And remember, if you validate self-locating probability in anthropics, then the paradoxical conclusions are only a Bayesian update away.
In anthropic questions, probability predictions about ourselves (self-locating probabilities) lead to paradoxes. At the same time, they also have no operational value, e.g. for decision-making. In a practical sense, we really shouldn’t make such probabilistic predictions. Here in this post I’m trying to explain the theoretical reason against them.
Consciousness is a property of the first person: e.g. to me, I am conscious, but I inherently can’t know that you are. Asking whether something is conscious amounts to asking whether you think from that thing’s perspective. So there is no typical or atypical conscious being: from my perspective I am “the” conscious being; if I reason from something else’s perspective, then that thing is “the” conscious being instead.
Our usual notion of ourselves as a typical conscious being comes from the fact that we are more used to thinking from the perspectives of things similar to us, e.g. we are more apt to think from the perspective of another person than of a cat, and from the perspective of a cat than of a chair. In other words, we tend to ascribe the property of consciousness to things more like ourselves, rather than the other way around (that we are typical in some sense).
The part where I know I am conscious while not knowing that you are is an assertion. It is not based on reasoning or logic; it is simply because it feels so. The rest are arguments that depend on said assertion.
I thought the reply was addressed to me. But nonetheless, it’s a good opportunity to delineate and inspect my own argument, so I’m leaving the comment here.
This rewrite is still perspective-dependent, as it involves the concept of “now” to define who has “previously come into existence”, i.e. it is different for the current generation than for people in the Axial Age. The Doomsday Argument, by contrast, uses a detached viewpoint that is time-indifferent. So the problem remains.
I have actually written about this before. In short,
there is no rational answer to Omega’s question. To answer Omega, I can only look at the past and present situation and try to predict the future as best I can; there is no rational way to incorporate my birth rank into the answer. The question is about “me” specifically, and my goal is to maximize my chance of getting a good afterlife. In contrast, the argument you mentioned judges the answer’s merit by evaluating the collective outcome of all humans: “If everyone guesses this way then 95% of all would be correct …”. But if everyone is making the same decision, and the objective is the collective outcome of the whole group, then the individual “I” plays no part in it. To assert that the answer based on the collective outcome is also the best answer for “me” requires additional assumptions, e.g. considering myself a random sample from all humans. That is why you are right in saying “If you accept that it’s better to say yes here, then you’ve basically accepted the doomsday argument.”
In this post I have used a repeatable experiment to demonstrate this. And the top comment by benjamincosman and my subsequent replies might be relevant.
Why am I Me?
Late to the party as usual. But I appreciate considering anthropic reasoning with the Boy or Girl paradox in mind. In fact, I have used it in the past, mostly as an argument against Full Non-indexical Conditioning. The Boy or Girl paradox highlights the importance of the sampling process: a factually correct statement alone does not justify a particular way of updating probability; at least in some cases, the process by which that statement was obtained is also essential. And which kind of sampling process the perspective-determined “I” should be interpreted as the outcome of is the crux of anthropic paradoxes.
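As a sketch of that point (my own illustration), here is the standard Boy or Girl contrast in simulation form: the same true statement, “the family has at least one boy”, licenses different updates depending on the process that produced it.

```python
import random

# Two-child families with each child independently a boy (B) or girl (G).
N = 500_000
families = [(random.choice("BG"), random.choice("BG")) for _ in range(N)]

# Process 1: ask the parents "is at least one of your children a boy?"
# and keep the families that answer yes.
at_least_one_boy = [f for f in families if "B" in f]
both_1 = sum(f == ("B", "B") for f in at_least_one_boy)
print("P(two boys | family says 'at least one boy') ~",
      both_1 / len(at_least_one_boy))          # ~1/3

# Process 2: meet one child of the family at random, and it happens to be a boy.
met_boy = []
for f in families:
    child = random.choice([0, 1])
    if f[child] == "B":
        met_boy.append(f)
both_2 = sum(f == ("B", "B") for f in met_boy)
print("P(two boys | a randomly met child is a boy) ~",
      both_2 / len(met_boy))                   # ~1/2
```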
I see that Gunnar_Zarncke has linked my position on this problem, much appreciated.
To my understanding, anthropic shadow refers to the absurd logic in Leslie’s Firing Squad: “Of course I have survived the firing squad; that is the only way I can make this observation. Nothing surprising here.” Or to reasoning such as: “I have played Russian roulette 1000 times, but I cannot increase my belief that there is actually no bullet in the gun, because surviving is the only observation I can make.”
In the Chinese Roulette example, it is correct that the optimal strategy for the first round is also optimal for any following round. It is also correct that if you decide to play the first round, then you will keep playing until kicked out, i.e. there is no way to adjust the strategy. But that doesn’t justify saying there is no probability update: the credences behind each subsequent decision, while they all agree on continuing to play, can be different (and they should be different). It seems absurd to say I would not be more confident about keeping going after 100 empty shots.
In short, changing strategy implies there is an update; not changing strategy doesn’t imply there is no update.
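To put a number on that intuition, here is a back-of-the-envelope Bayes update for the Russian roulette version, under assumptions I’m adding for illustration: a six-chamber revolver, re-spun before every pull, so surviving a pull has probability 5/6 if the bullet is there and 1 if the gun is empty.

```python
# Posterior probability that the gun is empty after surviving a given number of
# pulls, starting from a chosen prior (assumed setup: one bullet, six chambers,
# cylinder re-spun each pull).

def posterior_no_bullet(prior_no_bullet, survived_pulls):
    p_data_given_empty = 1.0
    p_data_given_bullet = (5 / 6) ** survived_pulls
    numerator = prior_no_bullet * p_data_given_empty
    denominator = numerator + (1 - prior_no_bullet) * p_data_given_bullet
    return numerator / denominator

for n in (0, 10, 100, 1000):
    print(n, "empty shots ->", posterior_no_bullet(0.01, n))

# Even with only a 1% prior that the gun is empty, surviving 100 pulls makes
# "no bullet" overwhelmingly likely -- exactly the update the anthropic-shadow
# reasoning above says we cannot make.
```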