Try to formally derive any quantitative prediction based on both formalisms.
The problem with the MWI formalism is that there is one small missing piece, and that one stupid little piece seems to be crucial for making any quantitative predictions.
The problem here is a bit of hypocrisy: Theoretically, you prefer MWI, but whenever you have to make a calculation, you go to the closet and use old-fashioned ad hoc state reduction.
Because of decoherence and the linearity of the Schrödinger equation, you can get a very good approximation to the behavior of the wavefunction over a certain set of configurations by ‘starting it off’ as a very localized mass around some configuration (if you’re a physicist, you just say “what the hell, let’s use a Dirac delta and make our calculations easier”). This nifty approximation trick, no more and no less, is the operation of ‘state reduction’. If using such a trick implies that all physicists are closet single-world believers, then it seems astronomers must secretly believe that planets are point masses.
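To spell out the trick in symbols (a minimal sketch in standard notation, not anything from the original comment): decoherence keeps two lumps of the wavefunction from ever interfering again, and linearity then lets you evolve each lump on its own.

    \[
      \psi = \psi_A + \psi_B, \qquad
      U_t(\psi_A + \psi_B) = U_t\,\psi_A + U_t\,\psi_B .
    \]

If the two lumps never interfere again, every prediction about the A-type configurations depends only on \(U_t\,\psi_A\), so replacing \(\psi\) by the renormalized lump \(\psi_A / \lVert\psi_A\rVert\) is exactly the "state reduction" shortcut described above.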
I don’t really see that doing a trick like that buys you the Born rule. Any reference to back your statement?
Douglas is right: the crux of the matter seems to be the description of the measurement process. There have been recent attempts to resolve that, but so far they are not very convincing.
Forgot about this post for a while; my apologies.
The trick, as described in On Being Decoherent, is that if you have a sensor whose action is entropically irreversible, then the parts of the wavefunction supported on configurations with different sensor readings will no longer interfere with each other. The upshot of this is that, as the result of a perfectly sensible process within the same physics, you can treat any sensitive detector (including your brain) as if it were a black-box decoherence generator. This results in doing the same calculations you’d do from a collapse interpretation of measurement, and turns the “measurement problem” into a very good approximation technique (to a world where everything obeys the same fundamental physics) rather than a special additional physics process.
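As a toy numerical illustration (my own model and numbers, not anything from the thread): once a sensor made of many subsystems has recorded the outcome, the overlap between the two sensor branch states shrinks exponentially, and with it the interference term in the system's reduced density matrix.

    import numpy as np

    def system_density_matrix(a, b, n_records, theta=0.3):
        """Reduced density matrix of a qubit a|0> + b|1> after a sensor made
        of n_records qubits has recorded the outcome.

        Toy model: if the system is |1>, each sensor qubit is rotated by
        theta; if the system is |0>, it is left alone.  The sensor branch
        states |E0>, |E1> then have overlap cos(theta)**n_records.
        """
        e0 = np.array([1.0, 0.0])
        e1 = np.array([np.cos(theta), np.sin(theta)])
        overlap = float(e0 @ e1) ** n_records          # <E0|E1>
        return np.array([[abs(a) ** 2,                   a * np.conjugate(b) * overlap],
                         [np.conjugate(a) * b * overlap, abs(b) ** 2]])

    a = b = 1 / np.sqrt(2)
    for n in (0, 1, 5, 20, 80):
        off_diag = abs(system_density_matrix(a, b, n)[0, 1])
        print(f"sensor qubits: {n:3d}   |interference term| = {off_diag:.6f}")

The point is only that the suppression comes from ordinary unitary physics acting on the sensor; no extra collapse postulate enters the calculation.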
That explains decoherence as a phenomenon (which I never doubted), but it does not explain the subjectively perceived probability values as a function of the wave function.
Ah. On that front, as a mathematician, I’m more than willing to extend my intuitions about discrete numbers of copies to intuitions about continuous measures over sets of configurations. I think it’s a bit misleading, intuition-wise, to think about “what I will experience in the future”, given that my only evidence is in terms of the state of my current brain and its reflection of past states of the universe.
That is, I believe that I am a “typical” instance of someone who was me 1 year prior, and in that year I’ve observed events with frequencies matching the Born statistics. To explain this, it’s necessary and sufficient for the universe to assign measure to configurations in the way the Schrödinger equation does (neglecting the fact that some different equation is necessary in order to incorporate gravity), resulting in a “typical” observer recalling a history which corresponds to the Born probabilities.
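Here is a minimal sketch of what "typical" is doing in that argument (a toy branching model with example weights I chose, not anything from the comment): if each history is weighted by the product of its branch weights, the total weight of histories whose observed frequencies stray from the Born value shrinks as the number of recorded measurements grows.

    from math import comb

    # Toy model: one qubit measured N times, with per-measurement branch
    # weights w0 = |alpha|^2 and w1 = |beta|^2 (example values, my choice).
    w0, w1 = 0.8, 0.2
    eps = 0.05

    for N in (10, 100, 1000):
        # Total weight of the N-outcome histories whose observed frequency of
        # outcome '1' lies within eps of the Born value w1.
        born_like = sum(comb(N, k) * w1 ** k * w0 ** (N - k)
                        for k in range(N + 1)
                        if abs(k / N - w1) <= eps)
        print(f"N = {N:4d}:  weight of Born-looking histories = {born_like:.4f}")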
The only sense in which the Born probabilities present me with a quandary is that the universe prefers the L^2 norm to the L^1 norm; but given the Schrödinger equation, that seems natural enough for mathematical reasons.
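The "mathematical reasons" can be made concrete with a small numerical check (a sketch of my own, not from the comment): unitary evolution conserves the L^2 norm of the state but not its L^1 norm, which is one standard way of saying why the L^2 norm is the natural candidate for a conserved weight.

    import numpy as np

    rng = np.random.default_rng(0)

    # A random unitary (QR of a complex Gaussian matrix) stands in for one
    # time step of Schrodinger evolution.
    z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    u, _ = np.linalg.qr(z)

    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi = psi / np.linalg.norm(psi)        # normalized in L^2

    for step in range(4):
        print(f"step {step}:  L2 norm = {np.linalg.norm(psi):.6f}   "
              f"L1 norm = {np.abs(psi).sum():.6f}")
        psi = u @ psi                      # unitary step: L2 preserved, L1 not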
I think we are starting to walk in circles. You simply seem to declare your faith(?) that the universe is somehow forced to use this specific quantitative rule, while at the same time admitting that you find it strange that it is one norm and not another (also ad hoc) one.
I don’t see how this contradicts the grand-grand-...parent post http://lesswrong.com/lw/19s/why_manyworlds_is_not_the_rationally_favored/151w .
I don’t disagree with your general sentiment, but it would be far-fetched to say the problem is solved. It is not (to the best of my knowledge), and no declaration of faith changes that until a precise mathematical model is presented that gives gap-free, quantitative derivations of the experimental results.
However, I would be delighted to chat with you a bit IRL if you still happen to live in Berkeley. I am also a mathematician living in Berkeley, and I guess it could be fun to share some thoughts over a beer or at a cafe. Drop me a PM if you are interested.
I think the most charitable interpretation of CS is that if you want to make an actual observation in many worlds, you have to model your measurement apparatus, while if you believe in collapse, then measurement is a primitive of the theory.
Maybe I misunderstand you and this is a non sequitur, but the point is to apply decoherence after the measurement, not (just) before.