I’m not parroting Eliezer; I’m a physics grad student who happens to have agreed with the many worlds interpretation even before I found Less Wrong. Nor am I denigrating one model because I don’t like it. Even if the models were on equal footing with regard to experimental evidence, MWI has fewer postulates and is therefore the more likely theory. You seem to agree that MWI has fewer postulates, so given a choice between the two, why favor the collapse interpretation, or refuse to give any interpretation at all?
EDIT: I hope I don’t sound flippant with my response, though your ad hominem was unwarranted. I do understand that my arguments are basically the same as Eliezer’s, since they’re the low-hanging fruit on the tree of pro-many-worlds arguments, so I will grant that I do sound like an Eliezer-parrot even if I don’t intend to. With that said, I think you’re right that Less Wrong’s arguments are subpar, and I’m going to try to write a few posts that present all the sides as equally as I can manage.
Apologies, you did not sound to me like a physics grad student when you said “CI should be classified as a fringe position and everyone should provisionally accept MWI”. (And “ad hominem” does not mean an insult, it means “I reject your logic based on who you are, rather than on what you say”.)
This is the crux of the issue. What do you gain by favoring one over another? My original point was that there is little to be gained, because you still have no definitive experiment that would convince your opponent. And so the argument becomes philosophical rather than physical, as it cannot be resolved by the scientific method.
I understand Deutsch’s logic (“but if only we had a reversible and ‘conscious’ quantum computation, we could test it”), which assumes that we know what conscious means and whether it can be reversible, tying one difficult issue to another just as deep. Until the latter is resolved, the former is not really testable.
I suppose one would only gain a simpler theory, since both theories predict the same thing. So from the perspective of neatness, I’d prefer to have one less postulate. From the perspective of actually solving problems, none of this matters.
In fact, none of my professors throughout college ever brought up the topic of interpretation, except to say that it was complicated. I suppose that’s why I don’t sound like a grad student to you; though I can solve problems very well, everything I know about the interpretations of the theory I have gleaned from textbooks and the internet; I have yet to look at specific papers, or study it in depth.
Part of the reason I want to write more on this is to have an excuse to force myself to learn/study more on the issue; it is still possible to change my mind, after all.
Though on the issue of reversibility: if we accept that the mind is capable of being simulated by a computer, and we had a computer made of Toffoli gates (or their quantum version, if such a thing exists), would that mind not then be reversible?
And thanks for pointing out my error on the use of ad hominem; I always forget that.
Right.
I was in the same boat, having gone through all the undergrad and grad quantum courses without learning anything about ontology, except for the general unease with the Born postulate. This is a common situation. Only Quantum Information courses are sometimes different. And philosophy of physics, but I don’t take those seriously.
By all means, just make sure you don’t have a “favored interpretation” when you start, it will bias you without you noticing.
There are arguments that dissipation and irreversibility are essential for consciousness. Whether they are any good will depend on what consciousness is. At this point we have very little to go on beyond “hopefully it can be simulated some day”.
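On the Toffoli question raised above: the classical Toffoli gate is its own inverse (and a quantum version, the CCNOT gate, does exist), which is the core of why a circuit built only from such gates is reversible. A minimal sketch:

```python
from itertools import product

def toffoli(a, b, c):
    # Controlled-controlled-NOT: flips the target bit c iff both
    # control bits a and b are 1; the controls pass through unchanged.
    return a, b, c ^ (a & b)

# Applying the gate twice restores every input, so no information is
# lost: a computation wired entirely from Toffoli gates can, in
# principle, be run backwards.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits
```

Whether a mind simulated on such hardware would thereby count as a reversible conscious process is, of course, exactly the open question in the exchange above.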
The results of that are kinda noticeable.
I don’t see why “fewer postulates” makes something “more likely”. Occam’s Razor is not a natural law, it’s a convenient heuristic for human minds.
“For every complex problem there is an answer that is clear, simple, and wrong.”—H. L. Mencken
If I explain a phenomenon using 10 postulates (of some fixed length) and you explain it using 10,000,000,000, your theory gets demoted (even if we don’t know anything else about the two theories) because it has more ways to go wrong. If you accept that this is true in a big way in the extreme case, you should accept that it is true in a small way in more mild cases (e.g., 10 postulates vs. 20, or 10 postulates vs. 100).
I like to think of it as an extension of the conjunction fallacy: the probability of A and B both being true can’t be higher than the probability of either A or B alone; adding new conditions can only make the probability stay the same or go down. So the probability of a theory with an extra postulate must be equal to or lower than the probability of the same theory with fewer postulates. Of course, that assumes the independence of the postulates.
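The compounding effect of independent postulates can be checked with a two-line sketch (the per-postulate probability 0.99 and the function name are illustrative assumptions):

```python
def prior_all_true(p_each, n_postulates):
    # For independent postulates, the chance that all of them hold
    # simultaneously is the product of their individual probabilities.
    return p_each ** n_postulates

# Each added postulate can only lower (or at best preserve) the prior:
print(round(prior_all_true(0.99, 10), 3))   # 0.904
print(round(prior_all_true(0.99, 100), 3))  # 0.366
```

Even individually very plausible postulates compound: a hundred of them at 0.99 each leave the conjunction below even odds.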
The probability of the postulates all being true goes down as you add postulates. The probability of the theory being correct given the postulates may go up.
This assumes the postulates are interdependent such that the theory may be true with all postulates, but false with all postulates save one. In this case, the theories are the same except for the collapse postulate, which may or may not have any real-world consequences, depending on whether you believe decoherence accounts for the appearance of collapse all by itself.
Not only does it assume independence, it also assumes that the two competing theories have exactly the same postulates except for a single extra one. That is typically not how things work in real life.
Er, no it doesn’t. Where are you getting this?
From here:
Among theories that explain the evidence equally well, those with fewer postulates are more probable. This is a strict conclusion of information theory. Further, we can trade explanatory power for theoretical complexity in a well-defined way: minimum message length. Occam’s Razor is not just “a convenient heuristic.”
Could you demonstrate this, please?
The linked Wikipedia page provides a succinct derivation from Shannon and Bayes’ Theorem.
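The message-length tradeoff can be made concrete with a toy sketch (the 10-bit cost of stating the parameter is an illustrative assumption, not a canonical value). Consider 100 coin flips, 90 of them heads, encoded either with a zero-parameter “fair coin” model or with a one-parameter biased model:

```python
import math

def entropy_bits(p):
    # Shannon: the optimal code length for an outcome of probability q
    # is -log2(q) bits, so a Bernoulli(p) source costs H(p) bits/flip.
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

n = 100  # flips, 90 of them heads

# Model 1: fair coin. No parameters to transmit; 1 bit per flip.
fair_length = n * 1.0

# Model 2: biased coin with p = 0.9. Pay an assumed 10 bits to state
# the parameter, then code the flips at the biased entropy rate.
biased_length = 10 + n * entropy_bits(0.9)

# Total drops from 100 bits to about 57: the extra "theory" bits buy a
# much shorter description of the data.
```

Since code lengths are negative log probabilities, minimizing L(model) + L(data | model) is, via Bayes, the same as maximizing the posterior probability of the model, which is the derivation the linked page gives.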
Heh. I think you’re trying to generalize a narrow result way too much. Especially when we are not talking about compression ratios, but things like “explanatory power” which is quite different from getting to the shortest bit string.
Let’s take a real example which was discussed on the LW recently: the heliocentrism debates in Renaissance Europe, for example between Copernicus and Kepler, pre-Galileo (see e.g. here). Show me how the MML theory is relevant to this choice between two competing theories.
Kepler’s heliocentric theory is a direct result of Newtonian mechanics and gravitation, equations which can be encoded very simply and require few parameters to achieve accurate predictions for the planetary orbits. Copernicus’ theory improved over Ptolemy’s geocentric theory by using the same basic model for all the planetary orbits (instead of a different model for each) and naturally handling the appearance of retrograde motion. However, it still required numerous epicycles in order to make accurate predictions, because Copernicus constrained the theory to use only perfect circular motion. Allowing elliptical motion would have made the basic model slightly more complex, but would have drastically reduced the number of necessary parameters and corrections. That’s exactly the tradeoff described by MML.
Not for Kepler, who lived about a century before Newton.
My question was about the Copernicus–Kepler debates, and Newtonian mechanics was quite unknown at that point.
Even Kepler’s theory, expressed as his three separate laws, is much simpler than a theory with dozens of epicycles.
The dozens of epicycles aren’t on a par with Kepler’s laws. “Planets move in circles plus epicycles” is what you have to compare with Kepler’s laws. “Such-and-such a planet moves in such-and-such a circle plus such-and-such epicycles” is parallel not to Kepler’s laws themselves but to “Such-and-such a planet moves in such-and-such an ellipse, apart from such-and-such further corrections”. If some epicycles are needed in the first case, but no corrections in the second, then Kepler wins. If you need to add corrections to the Keplerian model, either might come out ahead.
(Why would you need corrections in the Keplerian model? Inaccurate observations. Gravitational influences of one planet on another—this is how Neptune was discovered.)
I have heard that Copernican astronomy (circles centred on the sun, plus corrections) ended up needing more epicycles than Ptolemaic (circles centred on the earth, plus corrections) for reasons I don’t know. I think Kepler’s system needed much less correction, but don’t know the details.
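Since a stack of epicycles is essentially a Fourier decomposition of the orbit, the epicycle-counting comparison can be sketched numerically (the eccentricity 0.2 and all names here are illustrative, not a reconstruction of any historical model). The exact Kepler ellipse in polar form needs only a couple of parameters, while each retained cosine term plays the role of one epicycle-style correction:

```python
import math

E = 0.2  # illustrative orbital eccentricity

def r_true(theta):
    # Kepler ellipse in polar form (semi-latus rectum fixed to 1):
    # the whole curve is described by two parameters.
    return 1.0 / (1.0 + E * math.cos(theta))

# Numerically estimate the cosine-series coefficients of r(theta).
# Keeping one more term is the analogue of adding one more epicycle.
SAMPLES = 4096
thetas = [2 * math.pi * i / SAMPLES for i in range(SAMPLES)]
coeffs = []
for k in range(8):
    c = sum(r_true(t) * math.cos(k * t) for t in thetas) / SAMPLES
    coeffs.append(c if k == 0 else 2 * c)

def r_approx(theta, n_terms):
    return sum(coeffs[k] * math.cos(k * theta) for k in range(n_terms))

def max_error(n_terms):
    # Worst-case radial error of the truncated "epicycle" series.
    return max(abs(r_true(t) - r_approx(t, n_terms)) for t in thetas[::37])

# Every extra term buys accuracy, but the two-parameter ellipse is only
# reached in the limit: the number of terms measures model complexity.
```

Under MML-style accounting, the corrections needed by each model (epicycle terms here, perturbations for Kepler) get charged against it in exactly the way the comment above describes.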