If you observe 30 quantum heads in a row you have strong evidence in favor of MWI.
But then if I observed any string of 30 outcomes I would have strong evidence for MWI (if the coin is fair, “p” for any specific string would be 2^-30).
First, I’m gonna clarify some terms to make this more precise. Let Y be a person psychologically continuous with your present self. P(there is some Y that observes surviving a suicide attempt|Quantum immortality) = 1. Note MWI != QI. But QI entails MWI. P(there is some Y that observes surviving a suicide attempt| ~QI) = p.
It follows from this that P(~(there is some Y that observes surviving a suicide attempt)|~QI) = 1-p.
I don’t see a confusion of levels (whatever that means).
I still see a problem here. Substitute quantum suicide → quantum coinflip, and surviving a suicide attempt → observing the coin turning up heads.
Now we have P(there is some Y that observes coin falling heads|MWI) = 1, and P(there is some Y that observes coin falling heads|Copenhagen) = p.
So any specific outcome of a quantum event would be evidence in favor of MWI.
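To make the reductio concrete, here is a minimal sketch in Python of the update being criticized (the 50/50 prior is an assumption for illustration): treating “some Y observes D” as giving P(D|MWI) = 1 pushes the posterior toward MWI for any observed string whatsoever.

```python
# Naive update being criticized: P(D|MWI) = 1 because some branch
# observes D, while P(D|Copenhagen) = 2**-30 for any one specific
# string of 30 quantum coinflips. The 0.5/0.5 prior is an assumption.
p_mwi, p_cop = 0.5, 0.5
lik_mwi = 1.0         # "some Y observes D" is certain under MWI
lik_cop = 2.0 ** -30  # one specific 30-flip string in a single world

posterior_mwi = lik_mwi * p_mwi / (lik_mwi * p_mwi + lik_cop * p_cop)
print(posterior_mwi)  # ~0.999999999 -- and the same for ANY string
```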
The probability that there exists an Everett branch in which I continue making that observation is 1. I’m not sure if jumping straight to subjective experience from that is justified:
If P(I survive|MWI) = 1 and P(I survive|Copenhagen) = p, then what is the rest of that probability mass in the Copenhagen interpretation? Why is P(~(I survive)|Copenhagen) = 1-p, and what does it really describe? It seems to me that calling it “I don’t make any observation” is jumping from subjective experiences back to objective ones. This looks like a confusion of levels.
ETA: And, of course, the problem with “anthropic probabilities” gets even harder when you consider copies and merging, simulations, Tegmark level 4, and Boltzmann brains (The Anthropic Trilemma). I’m not sure if there even is a general solution. But I strongly suspect that “you can prove MWI by quantum suicide” is an incorrect usage of probabilities.
Flip a quantum coin.
The observation that you survived 1000 good suicide attempts is much more likely under MWI than under Copenhagen.
Isn’t that like saying “Under MWI, the observation that the coin came up heads, and the observation that it came up tails, both have probability of 1”?
The observation that I survive 1000 good suicide attempts has a probability of 1, but only if I condition on my being capable of making any observation at all (i.e. alive). In which case it’s the same under Copenhagen.
Sure, people in your branch might believe you
The problem I have with that is that from my perspective as an external observer it looks no different from someone flipping an appropriately weighted coin a thousand times and getting a thousand heads. It’s quite improbable, but the fact that someone’s life depends on the coin shouldn’t make any difference for me; the universe doesn’t care.
Of course it also doesn’t convince me that the coin will fall heads for the 1001-st time.
(That’s only if I consider MWI and Copenhagen here. In reality after 1000 coin flips/suicides I would start to strongly suspect some alternative hypotheses. But even then it shouldn’t change my confidence of MWI relative to my confidence of Copenhagen).
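A minimal Python sketch of that last point (the “trick” hypothesis, its prior, and the other priors are assumptions for illustration): after 1000 observed heads, an always-heads alternative swallows nearly all posterior mass, while the MWI:Copenhagen ratio stays exactly at its prior value.

```python
# MWI and Copenhagen both predict heads with probability 0.5 per flip
# from the observer's perspective; "trick" is an assumed always-heads
# alternative. All priors below are placeholders for illustration.
priors = {"MWI": 0.4999, "Copenhagen": 0.4999, "trick": 0.0002}
likelihoods = {"MWI": 0.5 ** 1000, "Copenhagen": 0.5 ** 1000, "trick": 1.0}

unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}

print(posterior["trick"])                          # ~1.0
print(posterior["MWI"] / posterior["Copenhagen"])  # exactly 1.0
```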
I would say quantum suiciding is not “harnessing its anthropic superpowers for good”, it’s just conveniently excluding yourself from the branches where your superpowers don’t work. So it has no more positive impact on the universe than you dying has.
I don’t really see what the problem with Aumann’s agreement theorem is in that situation. If X commits suicide and Y watches, are there any factors (like P(MWI), or P(X dies|MWI)) that X and Y necessarily disagree on (or on which agreement would be completely unrealistic)?
Related (somewhat): The Hero With A Thousand Chances.
That’s the problem—it shouldn’t really convince him. If he shares all the data and priors with external observers, his posterior probability of MWI being true should end up the same as theirs.
It’s not very different from surviving a thousand rounds of classical Russian roulette in a row.
ETA: If the chance of survival is p, then in both cases P(I survive) = p, P(I survive | I’m there to observe it) = 1. I think you should use the second one in appraising the MWI...
ETA2: Ok maybe not.
Quantum immortality is not observable. You surviving a quantum suicide is not evidence for MWI—no more than it is for external observers.
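A small Monte Carlo sketch of that claim (Python; the per-attempt survival probability and attempt count are assumed numbers): even in a single-world model, everyone still around to make an observation observes survival, so the conditional probability of that observation is 1 under both interpretations.

```python
import random

random.seed(0)
p, attempts, trials = 0.9, 10, 100_000  # assumed illustrative numbers

alive = [all(random.random() < p for _ in range(attempts))
         for _ in range(trials)]

# Unconditional survival probability in a single world:
print(sum(alive) / trials)              # ~0.35, i.e. p ** attempts

# Conditioned on being alive to observe anything at all, the
# observation "I survived" has probability 1 -- by construction,
# exactly as it does under MWI:
survivors = [a for a in alive if a]
print(sum(survivors) / len(survivors))  # 1.0
```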
600 or so interlinked documents
I was thinking more of a single, 600-chapter document.
(Actually this is why I think Sequences are best read on a computer, with multiple tabs open, like TVTropes or Wikipedia—not on an e-reader. I wonder how Eliezer’s book will turn out...)
PDFs are pretty much write-only, and in my experience (with Adobe Acrobat-based devices) reflow never works very well. As long as you use a sane text-based ebook format, Calibre can handle conversion to other formats.
So I recommend converting into—if not EPUB, then maybe just a clean HTML (with all the links retained—readers that support HTML should have no problems with links between file sections).
Your “strong/weak scientific” distinction sounds like it’s more about determinism than reductionism.
According to your definitions, I’m a “strong ontological reductionist”, and “weak scientific reductionist” because I have no problem with quantum mechanics and MWI being true.
Since there is no handy tool to create polls on LW
I often see polls in comments: “upvote this comment if you choose A”, “upvote this if you choose B”, “downvote this for karma balance”. Asking for replies probably gives you fewer answers but more accuracy.
Isn’t there some form of the Twin Prisoner’s Dilemma here? Not in the payoffs, but in the fact that you can assume your decision (to vote or not) is correlated to some degree with others’ decisions (which it should be if you, and some of them, make that decision rationally).
I was referring to the idea that complex propositions should have lower prior probability.
Of course you don’t have to make use of it, you can use any numbers you want, but you can’t assign a prior of 0.5 to every proposition without ending up with inconsistency. To take an example that is more detached from reality: there is a natural number N you know nothing about. You can construct whatever prior probability distribution you want for it. However, you can’t just assign 0.5 to every possible property of N (for example, P(N < 10) = 0.5).
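A one-line worked example of that inconsistency (the specific properties are mine, chosen for illustration): “N < 10” and “N ≥ 10” are both properties of N, and so is their disjunction, so assigning 0.5 across the board violates the sum rule:

$$P(N < 10) = 0.5, \qquad P(N \ge 10) = 0.5,$$
$$P(N < 10 \lor N \ge 10) = P(N < 10) + P(N \ge 10) = 1 \ne 0.5.$$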
Prior probability is what you can infer from what you know before considering a given piece of data.
If your overall information is I, and new data is D, then P(H|I) is your prior probability and P(H|DI) posterior probability for hypothesis H.
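The update connecting the two is just Bayes’ theorem with the background information I carried along on both sides:

$$P(H \mid DI) = \frac{P(D \mid HI)\, P(H \mid I)}{P(D \mid I)}$$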
No one says you have to put exactly 0.5 as prior (this would be especially absurd for absurd-sounding hypotheses like “the lady next door is a witch, she did it”.)
“Why are you upside down, soldier?”
I’m actually a MoR fan, and I’ve found it both entertaining and (at times) enlightening.
But I think a “beginning rationalist”’s time is much better spent studying philosophy, critical thinking, probability theory, etc. than writing fanfiction (even if the latter would be useful in small doses).
Look at the recently posted reading list. Pick some stuff, study and discuss. If you have a good “fighting spirit” and desire to become stronger, don’t waste it on writing fanfiction...
The reason why this doesn’t work (for coins) is that (when MWI is true) A=”my observation is heads” implies B=”some Y observes heads”, but not the other way around. So P(B|A)=1, but P(A|B) = p, and after plugging that into Bayes’ formula (with equal priors) we get P(MWI|A) = P(Copenhagen|A).
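Spelling that out (a sketch; the equal-priors assumption is carried over from the surrounding discussion):

$$P(A \mid \mathrm{MWI}) = P(A \mid B)\,P(B \mid \mathrm{MWI}) = p \cdot 1 = p = P(A \mid \mathrm{Copenhagen}),$$
$$\frac{P(\mathrm{MWI} \mid A)}{P(\mathrm{Copenhagen} \mid A)} = \frac{P(A \mid \mathrm{MWI})\,P(\mathrm{MWI})}{P(A \mid \mathrm{Copenhagen})\,P(\mathrm{Copenhagen})} = 1.$$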
Can you translate that to the quantum suicide case?