This is a fascinating way of looking at it.
My first thought was to reply, “Yes, most worlds may need to be pruned a la Hanson’s mangled worlds, but that doesn’t mean you can end up with a single global world without violating Special Relativity, linearity, unitarity, continuity, CPT invariance, etc.”
But on second thought this seems to be arguing even further, for the sort of deep revolution in QM that Scott wants—a reformulation that would nakedly expose the computational limits, and make the ontology no more extravagant than the fastest computation it can manage within a single world’s quantum computer. So this would have to reduce the proliferation of worlds to sub-exponential, if I understand it correctly, based on the strange reasoning that if we can’t do exponential computations in one world then this should be nakedly revealed in a sub-exponential global universe.
But you still cannot end up with a single world, for all the reasons already given—and quantum computers do not seem to be merely as powerful as classical computers; they do speed things up. So that argues that the ontology should be more than polynomial, even if sub-truly-exponential.
Thanks. I was not aware that Scott has the same concerns based on computational complexity that I have.
I am not even sure that the ontology needs to rely on non-classical capabilities. If our multiverse is a super-sophisticated branch-and-bound-type algorithm for some purpose, then it could still be the fastest algorithm, albeit a super-polynomial one.
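To make that analogy a bit more concrete, here is a minimal branch-and-bound sketch in Python, using a toy 0/1 knapsack instance that is purely illustrative and not anything from the discussion. The point is only that such an algorithm explores a tree of possibilities but prunes most branches, so the work done is far smaller than the full exponential tree while still being super-polynomial in the worst case.

```python
# Illustrative only: a tiny branch-and-bound for the 0/1 knapsack problem.
# Most of the exponential tree of item choices gets pruned by the bound,
# loosely analogous to most "worlds" being pruned away.

def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Sort items by value density so the greedy bound is tight.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    values = [values[i] for i in order]
    weights = [weights[i] for i in order]
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room greedily, allowing fractions.
        for j in range(i, n):
            if weights[j] <= room:
                room -= weights[j]
                value += values[j]
            else:
                return value + values[j] * room / weights[j]
        return value

    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value
        if i == n or bound(i, value, room) <= best:
            return  # prune: this subtree cannot beat the best value found so far
        if weights[i] <= room:
            branch(i + 1, value + values[i], room - weights[i])  # take item i
        branch(i + 1, value, room)  # skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # -> 220
```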
Don’t know if he does. I just mean that Scott wants a deep revolution in general, not that particular deep revolution.
Some other thoughts about the MWI that come to mind after a bit more thinking:
Here is a version of the Schroedinger’s cat experiment that would let anyone test the MWI for themselves: devise a quantum process that has a 99 percent probability of releasing a nerve gas into a room, killing any human inside without pain. If I were really sure of the MWI, I would have no problem walking into the room and pressing the button to start the experiment. In my own experience I would simply come out of the room unscathed, for certain, since that would be the only world I would experience. OTOH, if I really do walk out of the room as if nothing had happened, I could deduce with high probability that the MWI is correct. (If not: just repeat the experiment a couple of times...)
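A back-of-the-envelope version of that “high probability” claim, taking the stated 99 percent kill rate at face value (a hedged sketch of the arithmetic, not a statement about any real experiment): under a single-world reading, surviving k independent runs has probability 0.01^k, while under the quantum-suicide reading of MWI the experimenter always finds themselves in a surviving branch.

```python
# Survival odds under a single-world reading of the 99%-lethal experiment.
# Under the quantum-suicide reading of MWI, the experimenter's subjective
# probability of finding themselves alive afterwards is 1, regardless of k.
p_survive_once = 0.01

for k in range(1, 6):
    p_single_world = p_survive_once ** k
    print(f"survive {k} run(s): single-world probability = {p_single_world:.0e}")
```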
I must admit, I am not really keen on doing the experiment. Why? Am I really so unconvinced about the MWI? What are my reasons not to perform it, even if I were 100% sure?
Another variation of the above line of thought: suppose it is 2020 and that since 2008, year after year, the Large Hadron Collider has had all kinds of random-looking technical defects that prevented it from performing its planned experiments at the 7 TeV scale. Finally a physicist comes up with a convincing calculation showing that the probability of the collider producing a black hole is much, much higher than anticipated, and that the chances of the Earth being destroyed are significant.
Would that be a convincing demonstration of the MWI? Even without the calculation, should we insist on trying to fix the LHC if we keep seeing this pattern of it breaking down, year after year?
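To put a rough number on how surprising such a failure streak would be, here is a small sketch using an entirely hypothetical per-year breakdown rate; the 10 percent figure and the twelve-year span are assumptions for illustration, not part of the scenario. Under an ordinary single-world explanation the streak is astronomically unlikely, whereas under an anthropic (quantum-suicide-style) reading we would only ever find ourselves in branches where the collider failed to run.

```python
# Hypothetical numbers: suppose an ordinary, non-anthropic breakdown happens
# in any given year with probability 0.1, independently across years.
p_breakdown_per_year = 0.1   # assumed, purely illustrative
years = 12                   # roughly 2008 through 2019 in the thought experiment

# Chance of an unbroken failure streak if nothing anthropic is going on:
p_streak_by_chance = p_breakdown_per_year ** years
print(f"chance of a {years}-year failure streak by ordinary bad luck: {p_streak_by_chance:.0e}")

# Under the anthropic reading, observers only exist in branches where the LHC
# never ran at full power, so conditional on our being here the streak is unsurprising.
```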
See also:
Wikipedia: quantum suicide
LW/OB: LHC failures