Defining the semantics and probabilities of anticipation seems to be a hard problem. You can see some past discussions of the difficulties at The Anthropic Trilemma and its back-references (posts that link to it). (I didn’t link to this earlier in case you had already found a fresh approach that solved the problem. You may also want to consider not reading the previous discussions, to avoid falling into the same ruts.)
I have a solution that is completely underwhelming, but I can see no flaws in it, apart from its complete lack of a definition of which part of the mental state must be preserved to still count as you, and its rejection of MWI (it also offers no useful insight into why we have what looks like continuous subjective experience).
You can’t consistently assign probabilities to future observations in scenarios where you expect multiple instances of your mental state to be created. All instances exist, and there are no counterfactual worlds where you end up as a mental state in a different location or time (as opposed to the one you actually happen to observe). You are here because your observations tell you that you are here, not because something intangible has moved from the previous “you”(1) to the current “you” located here.
The Born rule works because MWI is wrong. The collapse is objective, and there are no alternative yous.
(1) I use “you” in scare quotes to designate something beyond all the information available in the mental state, something that presumably is unique and moves continuously (or jumps) through time.
Let’s go through the questions of The Anthropic Trilemma.
The Boltzmann Brain problem: no probabilities, no updates. Observing either room doesn’t tell you anything about the value of the digit of pi; it only tells you that you observe the room you observe.
Winning the lottery: there are no alternative quantum branches, so your machinations don’t change anything.
Personal future: Britney Spears observes that she has the memories of Britney Spears; you observe that you have your memories. There are no alternative scenarios if you are defined just by the information in your mental state. If you jump off a cliff, you can expect that someone with a memory of deciding to jump off the cliff (along with all your other memories) will hit the ground, and that there will be no continuation of this mental state at this time and place. And your memory tells you that it will be you who experiences the consequences of your decisions (whatever the underlying causes of that impression).
Probabilistic calculations about your future experiences work as expected, if you add the condition “given that I experience being here and now”.
It’s not unlike the do(X=x) operator in graphical models, which cuts off all other causal influences on X.
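A minimal sketch of that distinction, using a toy causal chain of my own (Rain → Sprinkler → WetGrass; the variables and numbers are purely illustrative, not from the post): conditioning on an observed value of a variable still carries information back to its causes, while do() forces the value and severs those incoming arrows.

```python
import random

def sample(do_sprinkler=None):
    """One draw from the toy model Rain -> Sprinkler -> WetGrass."""
    rain = random.random() < 0.3                         # exogenous cause
    if do_sprinkler is None:
        sprinkler = random.random() < (0.1 if rain else 0.6)
    else:
        sprinkler = do_sprinkler                         # intervention: the Rain -> Sprinkler arrow is cut
    wet = rain or sprinkler
    return rain, sprinkler, wet

random.seed(0)
N = 100_000

# Observational: P(rain | sprinkler=on) -- conditioning updates beliefs about the cause.
obs = [sample() for _ in range(N)]
on = [r for r, s, w in obs if s]
print("P(rain | sprinkler=on)     ~", round(sum(on) / len(on), 3))            # ~0.07

# Interventional: P(rain | do(sprinkler=on)) -- the prior on rain is untouched.
intv = [sample(do_sprinkler=True) for _ in range(N)]
print("P(rain | do(sprinkler=on)) ~", round(sum(r for r, s, w in intv) / N, 3))  # ~0.30
```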
Expanding a bit on the topic.
Exhibit A: flip a fair coin, then move a single suspended robot into a green or a red room using a second, biased coin: (99% green, 1% red) if the first coin came up heads, and (1% green, 99% red) if it came up tails.
Exhibit B: flip a fair coin and, for heads, create 99 copies of the robot in green rooms and 1 copy in a red room; for tails, reverse the colors.
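A small sketch of the two setups (my own toy code, not part of the original post): in exhibit A the robot’s colour is fixed by a physical chance process, so “probability of seeing red” is an ordinary forecast of about 50%; in exhibit B each branch simply contains both red-seeing and green-seeing copies, and turning those counts into “the probability that I see red” would need an additional self-locating rule.

```python
import random
from collections import Counter

random.seed(1)
N = 100_000

# Exhibit A: a single robot; its room is settled by two physical coin flips.
def exhibit_a():
    heads = random.random() < 0.5
    p_green = 0.99 if heads else 0.01
    return "green" if random.random() < p_green else "red"

counts = Counter(exhibit_a() for _ in range(N))
print("Exhibit A: P(red) ~", round(counts["red"] / N, 3))   # ~0.5 = 0.5*0.01 + 0.5*0.99

# Exhibit B: each branch deterministically contains a fixed population of copies;
# nothing further is sampled once the fair coin has landed.
branches = {"heads": {"green": 99, "red": 1},
            "tails": {"green": 1, "red": 99}}
for branch, rooms in branches.items():
    print(f"Exhibit B, {branch}: {rooms}")
```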
What causes the robot to see red instead of green in exhibit A? The physical processes that brought about a world where the robot sees red.
What causes a robot to see red instead of green in exhibit B? The fact that it sees red, nothing more. Of course, the physical instance of the robot that sees red in one possible world could be the instance that sees green in another possible world (physical causality is certainly intact). But a robot-who-sees-red (that is, one of the instances that see red) cannot be turned into a robot-who-sees-green by any physical manipulation. That is, the subjective fact of seeing red is cut off from physical causes (in the case of multiple copies of an observer), and as such it cannot serve as a basis for probabilistic judgements.
I guess that if I don’t see a resolution of the Anthropic Trilemma within the framework of MWI in about 10 years, I’ll be almost sure that MWI is wrong.