The remaining uncertainty in QM is about which slower-than-light, differentiable, configuration-space-local, CPT-symmetric, deterministic, linear, unitary physics will explain the Born probabilities.
Why on Earth must real physics play nice with the conceptual way in which it was mathematized, at the current level of detail and in the current areas of applicability? It can easily be completely different, with, for example, “differentiable” or “linear” ceasing to make sense in a new framework. Math tends to live on patterns, ignoring the nature of the underlying detail.
One of these elegances could be wrong. But all of them? In exactly the right way to restore a single world? It’s not worth thinking about, at this stage.
From a purely theoretical or philosophical point of view, I’d agree.
However, physical theories are mostly used to make predictions.
Even if you are a firm believer in MWI, in 99% of practical cases, whenever you use QM, you will use state reductions to make predictions.
Now you have an interesting situation. You have two formalisms: one is butt-ugly but usable, the other is nice and general but not very helpful. Additionally, the two formalisms are mostly equivalent mathematically, at least when it comes to making verifiable predictions.
Additionally, there are these pesky probabilities, which the nice formalism may account for automatically, but that is still unclear. These probabilities are essential to every practical use of the theory. So from a practical point of view, they are not just a nuance; they are essential.
If you assess this situation with a purely positivist mind-set, you could ask: “What additional benefits does the elegant formalism give me, besides being elegant?”
Now, I don’t want to say that MWI does not have a clear and definite theoretical edge, but it would be quite hypocritical to throw out the usable formalism as long as it is still unclear how to make the new one at least as predictive as the old one.
How does using a state reduction imply thinking about a single-world theory, rather than just a restriction to one of the branches to see what happens there?
You do the exact same calculations with either formalism.
Try to formally derive any quantitative prediction based on both formalisms.
The problem with the MWI formalism is that there is one small missing piece, and that one stupid little piece seems to be crucial for making any quantitative predictions.
The problem here is a bit of hypocrisy: Theoretically, you prefer MWI, but whenever you have to make a calculation, you go to the closet and use old-fashioned ad hoc state reduction.
Because of decoherence and the linearity of the Schrödinger equation, you can get a very good approximation to the behavior of the wavefunction over a certain set of configurations by ‘starting it off’ as a very localized mass around some configuration (if you’re a physicist, you just say “what the hell, let’s use a Dirac delta and make our calculations easier”). This nifty approximation trick, no more and no less, is the operation of ‘state reduction’. If using such a trick implies that all physicists are closet single-world believers, then it seems astronomers must secretly believe that planets are point masses.
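To make that concrete, here is a minimal toy sketch (my own, not anyone’s claimed derivation; it assumes a single qubit whose branches are tagged by a two-state sensor). Once orthogonal sensor records tag the branches, conditioning the full wavefunction on one record gives exactly the same number as the ad hoc collapse-and-renormalize recipe:

```python
# Toy model: after decoherence, "state reduction" is just conditioning
# on a branch. All names here are illustrative, not from any library.
import numpy as np

s0, s1 = np.eye(2)          # system basis states
e0, e1 = np.eye(2)          # orthogonal sensor/environment records
a0, a1 = 0.6, 0.8           # branch amplitudes (Born weights 0.36, 0.64)

# Full system+sensor state after decoherence: a0|0>|e0> + a1|1>|e1>.
psi = a0 * np.kron(s0, e0) + a1 * np.kron(s1, e1)

# Some later evolution of the system alone (a Hadamard, say).
had = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = np.kron(had, np.eye(2)) @ psi

# Probability of a later outcome |0>, conditional on the e0 branch,
# computed from the full wavefunction with no collapse anywhere:
proj = np.kron(np.outer(s0, s0), np.outer(e0, e0))
p_full = np.linalg.norm(proj @ psi) ** 2 / a0 ** 2

# The same number via the ad hoc trick: collapse to |0>, renormalize,
# forget the sensor, evolve, apply the Born rule.
p_reduced = abs((had @ s0)[0]) ** 2

print(p_full, p_reduced)    # both 0.5
```

The agreement depends on e0 and e1 being orthogonal: without the sensor tag, the two branches would interfere under the Hadamard and the collapse recipe would give the wrong answer, which is exactly why state reduction is an approximation licensed by decoherence rather than extra physics.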
I don’t really see how doing a trick like that buys you the Born rule. Any reference to back your statement?
Douglas is right: the crux of the matter seems to be the description of the measurement process. There have been recent attempts to resolve that, but so far they are not very convincing.
Forgot about this post for a while; my apologies.
The trick, as described in On Being Decoherent, is that if you have a sensor whose action is entropically irreversible, then the parts of the wavefunction supported on configurations with different sensor readings will no longer interfere with each other. The upshot of this is that, as the result of a perfectly sensible process within the same physics, you can treat any sensitive detector (including your brain) as if it were a black-box decoherence generator. This results in doing the same calculations you’d do from a collapse interpretation of measurement, and turns the “measurement problem” into a very good approximation technique (to a world where everything obeys the same fundamental physics) rather than a special additional physics process.
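For concreteness, a toy sketch of the black-box point (again my own illustration, under the same qubit-plus-sensor assumptions as above): tracing the sensor out of the post-measurement state leaves a diagonal density matrix, i.e. exactly the statistics a collapse calculation would assign.

```python
# Toy illustration: an irreversible sensor record kills the off-diagonal
# interference terms in the system's reduced density matrix.
import numpy as np

a0, a1 = 0.6, 0.8                   # branch amplitudes
s0, s1 = np.eye(2)                  # system pointer states
e_ready = np.array([1.0, 0.0])      # sensor state before it fires
e0, e1 = np.eye(2)                  # orthogonal post-measurement records

def reduced_density_matrix(psi):
    """Trace the sensor out of a (system x sensor) pure state."""
    m = psi.reshape(2, 2)           # rows index system, columns index sensor
    return m @ m.conj().T

# Before measurement: (a0|0> + a1|1>) tensor |ready>.
before = np.kron(a0 * s0 + a1 * s1, e_ready)
# After the sensor correlates with the system: a0|0>|e0> + a1|1>|e1>.
after = a0 * np.kron(s0, e0) + a1 * np.kron(s1, e1)

print(reduced_density_matrix(before))   # off-diagonals 0.48: interference possible
print(reduced_density_matrix(after))    # diag(0.36, 0.64): looks collapsed
```

Every prediction about the system alone is a function of this reduced matrix, so once it is diagonal, the “as if collapsed” bookkeeping is exact for all practical purposes.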
That explains decoherence as a phenomenon (which I never doubted), but it does not explain the subjectively perceived probability values as a function of the wave function.
Ah. On that front, as a mathematician, I’m more than willing to extend my intuitions about discrete numbers of copies to intuitions about continuous measures over sets of configurations. I think it’s a bit misleading, intuition-wise, to think about “what I will experience in the future”, given that my only evidence is in terms of the state of my current brain and its reflection of past states of the universe.
That is, I believe that I am a “typical” instance of someone who was me 1 year prior, and in that year I’ve observed events with frequencies matching the Born statistics. To explain this, it’s necessary and sufficient for the universe to assign measure to configurations in the way the Schrödinger equation does (neglecting the fact that some different equation is necessary in order to incorporate gravity), resulting in a “typical” observer recalling a history which corresponds to the Born probabilities.
The only sense in which the Born probabilities present me with a quandary is that the universe prefers the L^2 norm to the L^1 norm; but given the Schrödinger equation, that seems natural enough for mathematical reasons.
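To spell out those mathematical reasons (a standard one-line check, sketched here for concreteness): unitary Schrödinger evolution conserves the L^2 norm, while nothing analogous holds for L^1. With a Hermitian Hamiltonian $\hat H$ and $i\hbar\,\partial_t \psi = \hat H \psi$,

$$\frac{d}{dt}\int |\psi|^2\,dx \;=\; \int \big( \dot\psi^{*}\psi + \psi^{*}\dot\psi \big)\,dx \;=\; \frac{i}{\hbar}\int \big( (\hat H\psi)^{*}\psi - \psi^{*}\hat H\psi \big)\,dx \;=\; 0,$$

whereas $\frac{d}{dt}\int |\psi|\,dx \neq 0$ in general. So the L^2 weight is the only one of the two that the dynamics can consistently conserve across branches.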
I think we are starting to walk in circles. You simply seem to declare your faith(?) that the universe is somehow forced to use this specific quantitative rule, while at the same time admitting that you find it strange that it is one norm and not another (also ad hoc) one.
I don’t see how this contradicts the grand-grand-...parent post http://lesswrong.com/lw/19s/why_manyworlds_is_not_the_rationally_favored/151w .
I don’t disagree with your general sentiment, but it would be far-fetched to say the problem is solved. It is not (to the best of my knowledge), and no declaration of faith changes that until a precise mathematical model is presented that gives gap-free, quantitative derivations of the experimental results.
However, I would be delighted to chat with you a bit IRL if you still happen to live in Berkeley. I am also a mathematician living in Berkeley, and I guess it could be fun to share some thoughts over a beer or at a cafe. Drop me a PM if you are interested.
I think the most charitable interpretation of CS’s point is that if you want to make an actual observation in many worlds, you have to model your measurement apparatus, while if you believe in collapse, then measurement is a primitive of the theory.
Maybe I misunderstand you and this is a non sequitur, but the point is to apply decoherence after the measurement, not (just) before.
I take it you don’t think much of Bohmian mechanics, then. ;)
Many worlds are there at the level of quantum mechanics, and the single world is there at the level of classical mechanics, both views correct within their respective frameworks for describing reality. The world-counting is how human intuition reads the math, not obviously something inherent in reality (unless there is a better understanding of what “inherent in reality” should mean). Whatever picture is right for a deeper level can be completely different once again.
Another, more important question is how morally relevant these conceptions of reality are, but I don’t know how far to trust my intuition about the morality of the concepts it uses for interpreting math. So far, MWI looks to me morally indistinguishable from epistemic uncertainty, and so the many worlds of QM are no more real than the single world of classical mechanics. The many-worldness of QM might well be due more to the properties of the math than to the “character of reality”, whatever that should mean.
The fact that quantum mechanics lies deeper in physics places it further away from human experience and from human morality, and so makes it less obvious that it can be adequately evaluated intuitively. The measure of reality lies in human preference, not in the turtles of physics. Exploration of physics starts from human plans, and the fact that humans are made of the stuff doesn’t give it more status than a distant star; it’s just a substrate.
If MWI is simpler than nonMWI, then by Solomonoffish reasoning it’s more likely that the TOE reduces to observed reality via MWI than that it reduces to observed reality via nonMWI, correct? I agree that all these properties Eliezer mentions are helpful only as a proxy for simplicity, and I’m not sure they’re all independent arguments for MWI’s relative simplicity, but it seems extremely hard to argue that MWI isn’t in fact simpler, given all these properties.
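To spell out the Solomonoffish step (my paraphrase of the standard argument, not Eliezer’s wording): with a prior $P(H) \propto 2^{-K(H)}$, where $K$ is description length, and two hypotheses that fit the observed data $D$ equally well (likelihood ratio 1), the posterior odds are

$$\frac{P(\mathrm{MWI} \mid D)}{P(\mathrm{nonMWI} \mid D)} \;=\; \frac{2^{-K(\mathrm{MWI})}}{2^{-K(\mathrm{nonMWI})}} \;=\; 2^{\,K(\mathrm{nonMWI}) - K(\mathrm{MWI})},$$

so every extra bit of complexity in the single-world story halves its relative probability.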
I don’t assume that reality has a bottom, but in the human realm it has a beginning, and that is human experience. What we know, we learn from experiments; we observe more and more about the bigger system, and this process is probably not going to end, even in principle. What is there to judge this process, other than us?
If, for example, in the prior/utility framework, the prior is just one half of preference, that alone demonstrates that the notion of a “degree of reality” for concepts depends on human morality, in its technical sense. While I’m not convinced that prior/utility is the right framework for human preference, it is a case in point.
P.S. Just to be sure, I’m not arguing for one-world QM, I’m comparing many-world QM to one-world classical mechanics.
If reality is finitely complex, how does it get to have no bottom?
I don’t understand. Surely things like the double-slit experiment have some explanation, and that explanation is some kind of QM, and we’re forced to compare these different kinds of QM.
Vladimir_Nesov’s post is regarding where we should look for morally-relevant conceptions of reality. He is advocating building out our morality starting from human-scale physics, which is well-approximated by one-world classical mechanics.
What does it mean for reality to be finitely complex? At some point you would need not just to become able to predict everything, but to become sure of your predictions, and that I consider an incorrect thing to do at any point. Therefore, the complexity of reality, as people perceive it, is never going to run out (I’m not sure, but it looks this way).
Quantum mechanics is valid predictive math. The extent to which interpreting this math in terms of human intuitions about worlds is adequate is tricky. For example, it’s hard to intuitively tell the difference between another person in the same world and another person described by a different MWI world: should these patterns be of equal moral worth? How should we know? How can we trust intuition on this without a technical understanding of morality? Intuitions break down even for our almost-ancestral-environment situations.