“Imagine a universe containing an infinite line of apples.”
If we did, I would imagine it, but we don’t. In QM we don’t observe infinite anything; we observe discrete events. That some of the math used to model this involves infinities may be merely a matter of convenience for dealing with a universe that may in fact have a very large but finite number of voxels (or something similar), as suggested by the Planck length and related ideas.
“It’s reasonable to assume run time is important, but problematic to formalize.”
Run-time complexity theory (and memory-space complexity theory; memory also grows at least exponentially under MWI) is much easier to apply than Kolmogorov complexity in this context. Kolmogorov complexity only makes sense up to an order of magnitude (i.e., O(f(x)), not merely a constant), because the choice of language adds an (often large) constant to program length. So from Kolmogorov theory it doesn’t much matter that one adds a small extra constant number of bits to one’s theory, which makes it problematic to invoke Kolmogorov theory to distinguish between interpretations and equations that each add only a small constant number of bits.
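In case it helps, the language-dependence point is the standard invariance theorem; here is a sketch of its statement in conventional notation (the symbols U, V, and c_{UV} are mine, not from this thread):

```latex
% Invariance theorem (standard statement): for any two universal
% machines U and V there is a constant c_{UV}, depending only on U and
% V (not on the string x), such that
\[ K_U(x) \le K_V(x) + c_{UV} \qquad \mbox{for all } x. \]
% Kolmogorov complexity is therefore only defined up to an additive
% constant, which is why a small constant difference in description
% length between theories carries little weight.
```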
(Besides the fact that QM is really the wavefunction plus the nondeterministic Born probability, not merely the nominally deterministic wavefunction on which MWI folks focus, and everybody needs some “collapse”/“world split” rule for when the nondeterministic event happens, so there is not even any clear constant-factor description-length parsimony to MWI.)
OTOH, MWI clearly adds at least an exponential (and perhaps worse, infinitely expanding at every step!) amount of run time, and a similar amount of required memory, not merely a small constant amount. As for the ability to formalize this, there is a large literature on run-time complexity that is similar to, but older and more mature than, the literature on Kolmogorov complexity.
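To make the claimed growth concrete, here is a minimal toy sketch (my own illustration in Python, not a real QM simulator) of a simulator that naively stores one classical record per binary-outcome “branch”:

```python
# Toy illustration: a simulator that stores one classical record per
# "branch" doubles its storage at every binary-outcome measurement event.
def naive_branching(n_events):
    branches = [()]  # a single initial history
    for _ in range(n_events):
        # each branch splits in two at a binary-outcome event
        branches = [h + (0,) for h in branches] + [h + (1,) for h in branches]
    return branches

for n in range(6):
    print(n, len(naive_branching(n)))  # 1, 2, 4, 8, 16, 32: O(2^n)
```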
“OTOH, MWI clearly adds at least an exponential (and perhaps worse, infinitely expanding at every step!) amount of run time, and a similar amount of required memory, not merely a small constant amount.”
I see. I think you are making a common misunderstanding of MWI (in fact, a misunderstanding I had for years). There is no actual branching in MWI, so the amount of memory required is constant. There is just a phase space (a very large phase space), and amplitudes at each point in the phase space are constantly flowing around and changing (in a local way).
If you had a computer with as many cores as there are points in the phase space, then the simulation would be very snappy. On the other hand, using the same massive computer to simulate a collapse theory would be very slow.
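Here is a minimal sketch of that point (toy dimensions and a random Hamiltonian, purely illustrative): the storage is fixed by the dimension of the state space, and each step just rewrites amplitudes in place, however much the state decoheres.

```python
import numpy as np

# Toy model: the state is a fixed-size vector of amplitudes over a
# discretized state space of dimension d. Evolution rewrites the
# amplitudes in place; memory never grows with "branching".
d = 8
psi = np.zeros(d, dtype=complex)
psi[0] = 1.0  # amplitude initially concentrated at one point

rng = np.random.default_rng(0)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2                             # Hermitian "Hamiltonian"
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * 0.1)) @ V.conj().T  # one unitary time step

for _ in range(1000):
    psi = U @ psi   # the same d complex numbers, step after step

print(psi.nbytes)   # constant: d * 16 bytes, independent of step count
```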
“Imagine a universe containing an infinite line of apples.”
“If we did, I would imagine it, but we don’t.”
This is an answer to a question from another person’s thread. My question was “When an object leaves our Hubble volume does it cease to exist?” I’m still curious to hear your answer.
That’s an easy one—objective collapse QM predicts that with astronomically^astronomically^astronomically high probability objects large enough to be seen at that distance (or even objects merely the size of ourselves) don’t cease to exist. Like everything else they continue to follow the laws of objective collapse QM whether we observe them or not.
The hypothetical is radically different from believing in an infinitely expanding infinity of parallel “worlds”, none of which we have ever observed, either directly or indirectly, and none of which are necessary for a coherent and objective QM theory.
“That’s an easy one—objective collapse QM predicts that with astronomically^astronomically^astronomically high probability objects large enough to be seen at that distance (or even objects merely the size of ourselves) don’t cease to exist. Like everything else they continue to follow the laws of objective collapse QM whether we observe them or not.”
Then I can define a new hypothesis, call it objective collapse++, which is exactly your objective collapse hypothesis with the added assumption that objects cease to exist outside of our Hubble volume. Collapse++ has a slightly longer description length, but a greatly reduced run time. If we care about run time, why would we not prefer Collapse++ over the original collapse hypothesis?
“The hypothetical is radically different from believing in an infinitely expanding infinity of parallel ‘worlds’”
See my above comment about MWI having a fixed phase space that doesn’t actually increase in size over time. The idea of an increasing number of parallel universes is incorrect.
“MWI having a fixed phase space that doesn’t actually increase in size over time.”
(1) It assumes we are already simulating the entire universe from the Big Bang forward, which is preposterously infeasible (not to mention that we don’t know the starting state).
(2) It doesn’t model the central events in QM, namely the nondeterministic events, which in MWI are interpreted as determining which “world” we “find ourselves” in.
Of course, in real QM work, simulations are what they are, independently of interpretation: they evolve the wavefunction, or a computationally more efficient but less accurate version of the same, to the desired elaboration (which is radically different for different applications). For output they often either graph the whole wavefunction (relying on the viewer of the graph to understand that such a graph corresponds to the results of a very large number of repeated experiments, not to a particular observable outcome) or do a Monte Carlo or Markov simulation of the nondeterministic events that are central to QM. But I’ve never seen a Monte Carlo or Markov simulation of QM that simulates the events that supposedly occur in “other worlds” we can never observe—it would indeed be exponentially (at least) more wasteful in time and memory, yet utterly pointless, for the same reasons that the interpretation itself is wasteful and pointless. You’d think that a good interpretation, even if it can’t produce any novel experimental predictions, could at least provide ideas for more efficient modeling of the theory, but MWI suggests quite the opposite: gratuitously inefficient ways to simulate a theory that is already extraordinarily expensive to simulate.
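To illustrate the two output modes just described, a minimal sketch (a made-up four-dimensional state standing in for an evolved wavefunction, not any particular application): report the whole |ψ|² distribution, or Monte Carlo sample individual outcomes from it via the Born rule.

```python
import numpy as np

rng = np.random.default_rng(42)

# A random normalized "wavefunction" standing in for the evolved state.
d = 4
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

# Output mode 1: the whole distribution, i.e. the statistics of many
# repeated experiments, not any single observed outcome.
born = np.abs(psi) ** 2
print("full |psi|^2 distribution:", born)

# Output mode 2: Monte Carlo samples of individual nondeterministic
# events, drawn with Born-rule probabilities.
samples = rng.choice(d, size=10, p=born)
print("sampled outcomes:", samples)
```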
Objective collapse, OTOH, continually prunes the possibilities of the phase space and thus suggests exponential improvements in simulation time and memory usage. Indeed, some versions of objective collapse are bona fide new theories of QM, making experimental predictions that distinguish them from the model of perpetual elaboration of a wavefunction. Penrose, for example, bases his version on a theory of quantum gravity, and several experiments have been proposed to test it.
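A minimal sketch of the pruning idea (toy bookkeeping only; this is not Penrose’s model or any specific objective-collapse theory): once the nondeterministic event occurs, project onto the outcome, renormalize, and discard the rest.

```python
import numpy as np

def collapse(psi, rng):
    """Sample an outcome via the Born rule, then prune: keep only the
    component consistent with that outcome, renormalized. The discarded
    components need never be stored or evolved again."""
    p = np.abs(psi) ** 2
    k = rng.choice(len(psi), p=p)     # the nondeterministic event
    pruned = np.zeros_like(psi)
    pruned[k] = psi[k] / abs(psi[k])  # surviving component, norm 1
    return k, pruned

rng = np.random.default_rng(1)
psi = np.full(4, 0.5 + 0j)            # uniform superposition over 4 states
outcome, psi = collapse(psi, rng)
print(outcome, psi)                   # every other component is now zero
```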
BTW, it’s MWI that adds extra postulates. In both MWI and collapse, parts of the wavefunction effectively disappear from the observable universe (or, as MWI folks like to say, “the world I find myself in”). MWI adds the extra and completely gratuitous postulate that this portion of the wavefunction magically reappears in another, imaginary, completely unobservable “world”, on top of the gratuitous extra postulate that these new worlds are magically created, and all of us magically cloned, such that the copy of myself I experience finds me in one “world” but not another. And all that just to explain why we observe a nondeterministic event, one random eigenstate out of the infinity of eigenstates derived from the wavefunction and operator, instead of observing all of them.
Why not just admit that quantum events are objectively nondeterministic and be done with it? What’s so hard about that?
“In both MWI and collapse, parts of the wavefunction effectively disappear from the observable universe (or, as MWI folks like to say, ‘the world I find myself in’). MWI adds the extra and completely gratuitous postulate that this portion of the wavefunction magically reappears in another, imaginary, completely unobservable ‘world’, on top of the gratuitous extra postulate that these new worlds are magically created, and all of us magically cloned, such that the copy of myself I experience finds me in one ‘world’ but not another.”
This does not correspond to the MWI as promulgated by Eliezer Yudkowsky, which is more like, “In MWI, parts of the wavefunction effectively disappear from the observable universe—full stop.” My understanding is that EY’s view is that chunks of the wavefunction decohere from one another. The “worlds” of the MWI aren’t something extra imposed on QM; they’re just a useful metaphor for decoherence.
This leaves the Born probabilities totally unexplained. This is the major problem with EY’s MWI, and has been fully acknowledged by him in posts made in years past. It’s not unreasonable that you would be unaware of this, but until you’ve read EY’s MWI posts, I think you’ll be arguing past the other posters on LW.
Upvoted, although my understanding is that there is no difference between Eliezer’s MWI and canonical MWI as originally presented by Everett. Am I mistaken?
Since I’m not familiar with Everett’s original presentation, I don’t know if you’re mistaken. Certainly popular accounts of MWI do seem to talk about “worlds” as something extra on top of QM.
Popular accounts written by journalists who don’t really understand what they are talking about may treat “worlds” as something extra on top of QM, but after reading serious accounts of MWI by advocates for over two decades, I have yet to find any informed advocate who makes that mistake. I am positive that Everett did not make that mistake.
I think that’s just a common misunderstanding most people have of MWI, unfortunately. Visualizing a giant decohering phase space is much harder than imagining parallel universes splitting off. I’m fairly certain that Eliezer’s presentation of MWI is the standard one though (excepting his discussion of timeless physics perhaps).
“This leaves the Born probabilities totally unexplained.”
Mainstream philosophy of science claims to have explained the Born probabilities; Eliezer and some others here disagree with the explanations, but it’s at least worth noting that the quoted claim is controversial among those who have thought deeply about the question.
Good to know.