I liked the discussion, especially the final part on the many-worlds interpretation (MWI).
I had the impression that Eliezer had the better understanding of quantum mechanics (QM); however, I found one of his remarks very misleading (and it rightly confused Scott): Eliezer seemed to argue that MWI somehow resolves the difficulty of unifying QM with general relativity (GR) by resolving non-locality.
It is true that non-locality is resolved by Everett’s interpretation, but the real problem with QM+GR is that the renormalization of quantized gravity does not seem to work out mathematically, at least not in a straightforward manner. However, MWI requires gravity to be quantized and therefore forces physicists to come up with a more elaborate solution.
Anyway, I agree with Eliezer on the other arguments in favor of MWI (linearity, locality, objectivity, etc.), but I think that making overreaching remarks rendered his position a bit suspect for no good reason.
To be fair: MWI has its own technical quirks (e.g. choice of basis, explanation of probabilities, ...), but they don’t seem to be as fundamental as those of the classical interpretation. Still, the discussion would have been more interesting if Scott had brought up those points rather than the purely philosophical issues.
“relativity” was meant to refer to SR not GR
Sorry, it seems I was too sloppy; I must even revise my opinion of Scott, who seemed to represent a very reasonable point of view, although (I agree with you) he tries to conform a bit too much for my taste as well.
Still, I have a very particular intuitive suspicion about the MWI: if the physics is so extremely generous and powerful that it spits out all those universes with ease, why does it not allow us to solve exponential problems?
How come our world has such special physics that it allows us to construct machines that are slightly more powerful than Turing machines (in an asymptotic sense), while still not making exponential (or even NP-complete) problems tractable?
It looks like a strange twist of nature that we have this really special physics that allows us to construct computational processes in this very narrow middle ground of asymptotic complexity: it generates an exponentially increasing number of universes, but does not allow their inhabitants to exploit them algorithmically to the full extent.
Can’t it be that our world still has to obey certain complexity limits, and some of the universes have to be pruned away for some reason?
This is a fascinating way of looking at it.
My first thought was to reply, “Yes, most worlds may need to be pruned a la Hanson’s mangled worlds, but that doesn’t mean you can end up with a single global world without violating Special Relativity, linearity, unitarity, continuity, CPT invariance, etc.”
But on second thought this seems to be arguing even further, for the sort of deep revolution in QM that Scott wants—a reformulation that would nakedly expose the computational limits, and make the ontology no more extravagant than the fastest computation it can manage within a single world’s quantum computer. So this would have to reduce the proliferation of worlds to sub-exponential, if I understand it correctly, based on the strange reasoning that if we can’t do exponential computations in one world then this should be nakedly revealed in a sub-exponential global universe.
But you still cannot end up with a single world, for all the reasons already given—and quantum computers do not seem to be merely as powerful as classical computers; they do speed things up. So that argues that the ontology should be more than polynomial, even if sub-truly-exponential.
Thanks. I was not aware that Scott has the same concerns based on computational complexity that I have.
I am not even sure that the ontology needs to rely on non-classical capabilities. If our multiverse is a super-sophisticated branch-and-bound type algorithm for some purpose, then it could still be the fastest, albeit super-polynomial, algorithm.
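To make the branch-and-bound analogy concrete, here is a minimal, purely illustrative sketch (a toy subset-sum solver invented for this comment, with no connection to any actual physics): it explores an exponential tree of possibilities while cutting away branches that provably cannot lead to a solution, which is the sense of "pruning" I have in mind.

```python
# Toy illustration only: branch-and-bound search for subset sum.
# The search tree is exponential in the worst case, but branches that
# provably cannot reach the target are pruned away early.

def subset_sum(numbers, target):
    numbers = sorted(numbers, reverse=True)
    # remaining[i] = sum of everything from index i onward (used for the bound)
    remaining = [sum(numbers[i:]) for i in range(len(numbers) + 1)]

    def branch(i, current, chosen):
        if current == target:
            return chosen
        if i == len(numbers):
            return None
        # Bound: prune if we overshot, or if taking everything left still falls short.
        if current > target or current + remaining[i] < target:
            return None
        # Branch: either include numbers[i] or skip it.
        return (branch(i + 1, current + numbers[i], chosen + [numbers[i]])
                or branch(i + 1, current, chosen))

    return branch(0, 0, [])

print(subset_sum([12, 7, 5, 3, 2], 17))  # e.g. [12, 5]
```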
Don’t know if he does. I just mean that Scott wants a deep revolution in general, not that particular deep revolution.
Some other thoughts about the MWI that came to my mind after a bit more thinking:
Here is a version of the Schroedinger’s cat experiment that would let anyone test the MWI for himself: devise a quantum process that has a 99 percent probability of releasing a nerve gas into a room, killing the humans inside without any pain. If I were really sure of the MWI, I would have no problem going into the room and pressing the button to start the experiment. In my own experience I would simply come out of the room unscathed for certain, as that would be the only world I would experience. OTOH, if I really did walk out of the room as if nothing had happened, I could deduce with high probability that the MWI is correct. (If not: just repeat the experiment a couple of times...)
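Just to spell out the arithmetic behind "repeat the experiment a couple of times" (my own rough sketch with made-up numbers, not anything from the discussion): if survival is treated as certain under the subjective-survival reading of MWI and as a 1% fluke per run otherwise, a simple Bayes update after surviving a few runs looks like this.

```python
# Back-of-the-envelope Bayes update for the thought experiment above.
# All numbers are illustrative assumptions: a 50% prior on
# "MWI + subjective survival", survival probability 1.0 per run under
# that reading, and 0.01 per run under a single-world reading.

def posterior_after_surviving(n_runs, prior_mwi=0.5, p_survive_single=0.01):
    likelihood_mwi = 1.0                      # you always find yourself alive
    likelihood_single = p_survive_single ** n_runs
    evidence = prior_mwi * likelihood_mwi + (1 - prior_mwi) * likelihood_single
    return prior_mwi * likelihood_mwi / evidence

for n in (1, 2, 3):
    print(n, posterior_after_surviving(n))
# roughly 0.990, 0.9999, 0.999999 -- a few repetitions push the surviving
# experimenter's subjective confidence very close to 1
```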
I must admit, I am not really keen on doing the experiment. Why? Am I really so unconvinced about the MWI? What are my reasons for not performing it, even if I were 100% sure?
Another variation of the above line of thought: assume it is 2020 and that since 2008, year after year, the Large Hadron Collider has had all kinds of random-looking technical defects that prevented it from performing the planned experiments at the 7 TeV scale. Finally a physicist comes up with a convincing calculation showing that the probability that the collider will produce a black hole is much, much higher than anticipated, and that the chances of the Earth being destroyed are significant.
Would that be a convincing demonstration of the MWI? Even without the calculation, should we insist on trying to fix the LHC if we experience the pattern of its breaking down for years?
See also:
Wikipedia: quantum suicide
LW/OB: LHC failures
You’re asking why we can’t yet build quantum computers?
It may be down to inexperience.