The way I wrote my reply was misleading, sorry. I’m not talking about the specific numbers, I’m talking about the model itself. Remember that even in MWI, for any freak incident that allows you to avoid some disaster, there are zillions of far likelier ways of avoiding that disaster. Consider the following:
You point a gun at your head, pull the trigger, the bullet blasts through your head, but miraculously only causes damage to unimportant areas and you stay alive.
While that scenario could happen, it’s far more likely that:
You stand there with a gun to your head and your finger on the trigger, and you suddenly realize “What the hell am I doing?” and put the gun down. You remain alive.
Going back to your ‘narrowly-avoiding destruction of house’ scenario, it’s not likely that it will occur due to some freak occurrence. It’s more likely that it will occur due to some mundane occurrence. You’ll be out of the house buying groceries. Only half your house will be destroyed. Etc. And even if it does happen due to some freak occurrence, it will probably be a singular event in your lifetime, giving you no meaningful way to update your beliefs.
To phrase it another way, MWI doesn’t make the improbable probable. It can’t. Otherwise we’d be seeing freak occurrences happen to us all the time. As Eliezer said, it all adds up to normality. Even if Everett immortality is correct and you wind up living for a million years, you’ll look back at your life and realize that the path to your immortality was… pretty mundane, probably. You were frozen upon death and revived 100 years later into the nanotech revolution. Your consciousness merged with a computer. Etc. All stuff that we consider relatively likely here on LessWrong.
And my intuition tells me that if you actually construct a simple model, this is precisely what you will find: that P(x | M), where x is the path to your immortality and M is MWI being true, will equal P(x | ~M), preventing you from making any update to your belief. I haven’t actually constructed a rigorous model here and I’d love to be proven wrong, but it’s what my intuition tells me.
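As a minimal sketch of the kind of toy model I have in mind (the specific numbers are made-up placeholders, not real estimates): if the observed path x is exactly as likely under M as under ~M, Bayes’ rule leaves your posterior on M equal to your prior, so seeing x tells you nothing about MWI.

```python
# Toy Bayesian update: observing a "path to immortality" x only shifts your
# credence in M (MWI) if x is more likely under M than under ~M.
# All numbers below are placeholders.

def posterior_M(prior_M, p_x_given_M, p_x_given_not_M):
    """Bayes' rule: P(M | x) = P(x | M) * P(M) / P(x)."""
    p_x = p_x_given_M * prior_M + p_x_given_not_M * (1 - prior_M)
    return p_x_given_M * prior_M / p_x

prior = 0.5          # prior credence in MWI (placeholder)
likelihood = 1e-6    # P(x | M): some mundane path to living a very long time

# Equal likelihoods under M and ~M: posterior equals prior, no update.
print(posterior_M(prior, likelihood, likelihood))        # 0.5

# For contrast: if x were 100x likelier under M, you *would* update.
print(posterior_M(prior, likelihood, likelihood / 100))  # ~0.99
```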