It just seems almost too good to be true that I now get what plenty of genius quantum physicists still can’t.
Hmm, “too good to be true”… Does this suggest anything?
In physics, you can get absolutely clear-cut issues. Not in the sense that the issues are trivial to explain. But if you try to apply Bayes to healthcare, or economics, you may not be able to formally lay out what is the simplest hypothesis, or what the evidence supports.
So why bother with an example where Bayes works the worst and is most confusing? [EDIT: What I mean is that the scientific principle works so much better in physics than in the other fields mentioned, so Bayes is clearly not essential there.]
Bayes-Goggles on: The simplest quantum equations that cover all known evidence don’t have a special exception for human-sized masses. There isn’t even any reason to ask that particular question. Next!
This is an actual testable prediction. Suppose such an exception is found experimentally (for example, gravitationally induced self-collapse, as proposed by Penrose, limiting quantum effects to a few micrograms or so). Would you expect EY to retract his Bayesian-simplest model in this case, or “adjust” it to match the new data? Honestly, what do you think is likely to happen?
Okay, Bayes-Goggles back on. Are you really going to believe that large parts of the wavefunction disappear when you can no longer see them? As a result of the only non-linear non-unitary non-differentiable non-CPT-symmetric acausal faster-than-light informally-specified phenomenon in all of physics? Just because, by sheer historical contingency, the stupid version of the theory was proposed first?
Have you noticed that this is a straw-Copenhagen, and not the real thing?
This is an actual testable prediction. Suppose such an exception is found experimentally (for example, gravitationally induced self-collapse, as proposed by Penrose, limiting quantum effects to a few micrograms or so). Would you expect EY to retract his Bayesian-simplest model in this case, or “adjust” it to match the new data? Honestly, what do you think is likely to happen?
Honestly, when the first experiment shows that we don’t see quantum effects at some larger scale where it is otherwise believed that they should show up, I expect EY to weaken, but not reverse, his view that MWI is probably correct, expecting instead that there is an error in the experiment. When it has been repeated, and variations have shown similar results, I expect him to drop MWI, because it no longer explains the data. I don’t have a specific prediction regarding just how many experiments it would take; this probably depends on several factors, including the nature and details of the experiments themselves.
This is from my personal model of EY, who seems relatively willing to say “Oops!” provided he has some convincing evidence he can point to; this model is derived solely from what I’ve read here, so I don’t ascribe it especially high confidence, but that’s my best guess.
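For what it’s worth, that “weaken first, drop after replication” pattern falls out of a toy Bayes calculation. Below is a minimal sketch; the prior, the chance that any single anomalous result is just an experimental error, and the independence of the experiments are all assumptions I’m making up for illustration, not anything claimed above.

```python
# Toy Bayesian update, purely illustrative: every number here is an assumption.
#
# H  = "the simplest quantum equations hold; no special mass cutoff"
# ~H = "quantum effects cut off at some larger scale"
# Each experiment reports the cutoff (an anomaly), and any single report
# has probability p_error of being an experimental error.

def posterior_after_anomalies(prior_h, p_error, n_experiments):
    """Posterior P(H) after n independent experiments all report the anomaly.

    If H is true, an anomalous report can only come from experimental error
    (probability p_error per experiment). If ~H is true, the anomaly is
    assumed to be reported reliably. Experiments are assumed independent.
    """
    like_h = p_error ** n_experiments       # P(all anomalous reports | H)
    like_not_h = 1.0                        # P(all anomalous reports | ~H), assumed ~1
    prior_not_h = 1.0 - prior_h
    evidence = prior_h * like_h + prior_not_h * like_not_h
    return prior_h * like_h / evidence

if __name__ == "__main__":
    prior_h = 0.95   # assumed strong prior in favor of "no cutoff"
    p_error = 0.3    # assumed chance any one anomalous result is an error
    for n in (1, 2, 3, 5):
        print(f"after {n} experiment(s): P(H) = {posterior_after_anomalies(prior_h, p_error, n):.3f}")
```

With these made-up numbers, a single anomalous experiment only nudges P(H) from 0.95 down to about 0.85, while five independent replications push it below 0.05, which is roughly the “weaken, then drop” behavior predicted above.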
It looks like Eliezer answers my question in this post.
Have you noticed any confusion?