It’s not about transmitting information into the past—it’s about the locality of causality. Consider Judea Pearl’s classic graph with SEASONS at the top, SEASONS affecting RAIN and SPRINKLER, and RAIN and SPRINKLER both affecting the WETness of the sidewalk, which can then become SLIPPERY. The fundamental idea and definition of “causality” is that once you know RAIN and SPRINKLER, you can evaluate the probability that the sidewalk is WET without knowing anything about SEASONS—the universe of causal ancestors of WET is entirely screened off by knowing the immediate parents of WET, namely RAIN and SPRINKLER.
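(A minimal sketch of that screening-off, with made-up numbers for the conditional probabilities; it just enumerates the joint distribution and checks that conditioning on SEASON, on top of RAIN and SPRINKLER, leaves the probability of WET unchanged.)

```python
# Hypothetical probabilities for Pearl's sprinkler network -- the numbers are
# illustrative; only the graph structure matters for the screening-off claim.
import itertools

P_SEASON = {"dry": 0.5, "wet": 0.5}                       # assumed prior over SEASON
P_RAIN = {"dry": {True: 0.1, False: 0.9},                 # P(RAIN | SEASON)
          "wet": {True: 0.7, False: 0.3}}
P_SPRINKLER = {"dry": {True: 0.8, False: 0.2},            # P(SPRINKLER | SEASON)
               "wet": {True: 0.1, False: 0.9}}

def p_wet(rain, sprinkler):                               # P(WET | RAIN, SPRINKLER)
    return 0.99 if (rain or sprinkler) else 0.01

def joint(season, rain, sprinkler, wet):
    p = P_SEASON[season] * P_RAIN[season][rain] * P_SPRINKLER[season][sprinkler]
    return p * (p_wet(rain, sprinkler) if wet else 1 - p_wet(rain, sprinkler))

def cond_wet(rain, sprinkler, season=None):
    """P(WET=True | RAIN, SPRINKLER [, SEASON]) by brute-force enumeration."""
    seasons = [season] if season else list(P_SEASON)
    num = sum(joint(s, rain, sprinkler, True) for s in seasons)
    den = sum(joint(s, rain, sprinkler, w) for s in seasons for w in (True, False))
    return num / den

for rain, sprinkler in itertools.product((True, False), repeat=2):
    # Adding SEASON to the conditioning set changes nothing: the immediate
    # parents of WET screen off all of its other causal ancestors.
    assert abs(cond_wet(rain, sprinkler) - cond_wet(rain, sprinkler, "dry")) < 1e-12
```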
Right now, we have a physics where (if you don’t believe in magical collapses) the amplitude at any point in quantum configuration space is causally determined by its immediate neighborhood of parental points, both spatially and in the quantum configuration space.
In other words, so long as I know the exact (quantum) state of the universe for 300 meters around a point, I can predict the exact (quantum) future of that point 1 microsecond into the future without knowing anything whatsoever about the rest of the universe. If I know the exact state for 3 meters around, I can predict the future of that point 10 nanoseconds later. And so on to the continuous limit: the causal factors determining a point’s infinitesimal future are screened off by knowing an infinitesimal spatial neighborhood of its ancestors.
This is the obvious analogue of Judea Pearl’s Causality for continuous time; instead of discrete causal graphs, you have a continuous metric of relatedness (space) which shrinks to an infinitesimal neighborhood as you consider infinitesimal causal succession (time).
This, in turn, implies the existence of a fundamental constant describing how the neighborhood of causally related space shrinks as time diminishes, to preserve the locality of causal relatedness in a continuous physics.
This constant is, obviously, c.
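(A toy illustration of the shrinking-neighborhood idea, not a claim about actual physics: take a one-dimensional field with a strictly local update rule, where each cell’s next value depends only on its immediate neighbors. The ratio of cell size to time step then bounds how fast any difference between two otherwise identical universes can spread, which is the role assigned to c above. The grid size, step count, and leapfrog stencil below are all just illustrative choices.)

```python
import numpy as np

def evolve(u_prev, u_curr, steps):
    """Leapfrog update of the discrete wave equation with Courant number 1:
    u_next[i] depends only on u_curr[i-1], u_curr[i+1], and u_prev[i]."""
    for _ in range(steps):
        u_next = np.zeros_like(u_curr)
        u_next[1:-1] = u_curr[2:] + u_curr[:-2] - u_prev[1:-1]  # purely local stencil
        u_prev, u_curr = u_curr, u_next
    return u_curr

n, steps = 201, 40
base = np.zeros(n)
bumped = base.copy()
bumped[100] = 1.0                        # perturb a single cell at t = 0

diff = np.abs(evolve(base, base, steps) - evolve(bumped, bumped, steps))
affected = np.nonzero(diff > 1e-12)[0]
# After `steps` updates, the perturbation has reached at most `steps` cells in
# either direction: one neighborhood per time step is the built-in speed limit.
assert affected.min() >= 100 - steps and affected.max() <= 100 + steps
```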
I’ve never read this anywhere else, by the way. It clearly isn’t universally understood, because if all physicists understood the universe in these terms, none of them would believe in a “collapse of the wavefunction”, which is not locally related in the configuration space. I would not be surprised to find that the above statement is original, nor would I be surprised to find that it has been said before.
I am attempting to bet that physics still looks like this after the dust settles. It’s a stronger condition than global noncircularity of time—not all models with globally noncircular time have local causality.
If violating Lorentz invariance means that physics no longer looks like this, then I will bet at 99-to-1 odds against violations of Lorentz invariance. But I can’t make out from the Wikipedia pages whether Lorentz violations mean the end of local causality (which I’ll bet against) or if they’re random weird physics (which I won’t bet against).
I am also willing to bet that the fundamental constant c as it appears in multiple physical equations is the constant of time/space locality, i.e., the constant we know as c is fundamentally the shrinking constant by which an infinitesimal neighborhood in space causally determines an infinitesimal future in time. I am willing to lose the bet if there’s still locality but the real size of the infinitesimal spatial neighborhood goes as 2c rather than c (though I’m not actually sure whether that statement is even meaningful in a Lorentz-invariant universe) and therefore you can use neutrinos to transmit information at up to twice the speed of light, but no faster. The clues saying that c is the fundamental constant we should expect to see in any continuous analogue of a locally causal universe are strong enough that I’ll bet on them at 99-to-1 odds.
What I can’t make out is whether Lorentz violation throws away locality; employs a more complicated definition of c which is different in some directions than others; makes the effect of the constant different on neutrinos and photons; or, well, what exactly.
I would happily amend the bet to be annulled in the case that any more complicated definition of c is adopted by which there is still a constant of time/space locality in causal propagation, but it makes photons and neutrinos move at different speeds.
The trouble is that physicists don’t read books like Causality and don’t understand local causality as part of the apparent character of physical law, which is why some of them still believe in the “collapse of the wavefunction”—it would be an exceptional physicist whom we could simply ask whether the Standard Model Extension preserves locally continuous causality with c as the neighborhood-size constant.
This is starting to remind me of Kant. Specifically, his attempt to provide an a priori justification for the then-known laws of physics, which made him look incredibly silly once relativity and quantum mechanics came along.
And Einstein was better at the same sort of philosophy and used it to predict new physical laws that he thought should have the right sort of style (though I’m not trying to do that, just read off the style of the existing model). But anyway, I’d pay $20,000 to find out I’m that wrong—what I want to eliminate is the possibility of paying $20,000 to find out I’m right.
You need to distinguish different notions of local causality. SR in most forms implies a very strong form of local causality that you seem to be using here. But it is important to note that perfectly well-behaved systems can fail to obey this, and it isn’t just weird systems. For example, a purely Newtonian universe won’t obey this sort of strong local causality: a particle from far away can have arbitrarily high velocity and smack into the region we care about. The fact that such well-behaved systems are fine with only weaker forms of local causality suggests that we shouldn’t assign such importance to strong local causality.
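(A trivial numerical version of the point, reusing the 300-meter/1-microsecond neighborhood from the comment above: whatever radius R and prediction window t you pick, Newtonian mechanics permits a particle with speed greater than R/t, so no finite neighborhood screens off the future.)

```python
# Purely illustrative numbers, taken from the parent comment's example.
R = 300.0        # radius of the region whose state we know, in meters
t = 1e-6         # how far into the future we want to predict, in seconds
v = 10 * R / t   # a Newtonian particle may have any speed, e.g. ten times R / t
print(v * t > R) # True: it can start outside the known region yet arrive before time t
```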
What I can’t make out is whether Lorentz violation throws away locality; employs a more complicated definition of c which is different in some directions than others; makes the effect of the constant different on neutrinos and photons; or, well, what exactly
This isn’t a well-defined question. It depends very much on what sort of Lorentz violation you are talking about. Imagine that you are working in a Newtonian framework and someone asks, “well, if gravity doesn’t always fall off as 1/r^2, will the three-body problem still be hard?” The problem is that the set of systems which violate Lorentz invariance is so large that the question isn’t that helpful.
The trouble is that physicists don’t read books like Causality and don’t understand local causality as part of the apparent character of physical law,
The vast majority of physicists aren’t thinking about how to replace the fundamental laws with other, more unifying fundamental laws. The everyday work of physicists is stuff like trying to measure the rest mass of elementary particles more precisely, or better predicting the properties of pure water near a phase transition, or trying to better model the behavior of high-temperature superconductors. They have no reason to think about these issues. But even if they did, they probably wouldn’t take these sorts of ideas as seriously as you do. Among other problems, strong local causality is something which appeals to a set of intuitions, and humans are notoriously bad at intuiting how the universe behaves. We evolved to get mates and avoid tigers, not to intuit the details of the causal structure of reality.
It clearly isn’t universally understood, because if all physicists understood the universe in these terms, none of them would believe in a “collapse of the wavefunction”, which is not locally related in the configuration space.
And just like that, Many-Worlds clicked for me. It’s now incredibly obvious just how preposterous wavefunction collapse is, and this new intuitive mental model clears up a lot of the frustrating sticking points I was having with QM. c as the speed limit of information in the universe and the notion of local causality have both been a native part of my view of the universe for a while, but it wasn’t until that sentence that I connected them to decoherence.
Edit: Wow, a lot more things just clicked, including quantum suicide. My priority on cryonics just shot up several orders of magnitude, and I’m going to sign up once I’ve graduated and start bringing in income. Eliezer, if you have never seen The Prestige, I recommend you go and watch it. It provides a nice allegory for MWI/quantum suicide that I think a lot of laypeople will be able to connect to easily. Could help when you’re explaining things.
Edit2: Just read your cryonics 101, and while the RIGHT NOW message punched through my akrasia, I looked it up and even the $310/yr is not affordable right now. However, it’s far more affordable than I had thought, and in a couple of months I should be in a position where this becomes sustainably possible.
By the way, thank you. You probably know this on an intuitive level, but it should be good to hear that your work may very well be saving lives.
Username, you’re having a small conversion experience here, going from “causality is local” to “wavefunction collapse is preposterous” to “I understand quantum suicide” to “I’d better sign up for cryonics once I graduate” in rapid succession. It’s a shame we can’t freeze you right now, and then do a trace-and-debug of your recent thoughts, as a case study.
This was a somewhat muddled comment from Eliezer. Local causality does not imply an upper speed limit on how fast causal influences can propagate. Then he equivocates between locality within a configuration and locality within configuration space. Then he says that if only everyone in physics thought like this, they wouldn’t have wrong opinions about how QM works. I can only guess how you personally relate all that to decoherence. And from there, you get to increased confidence in cryonics. It could only happen on Less Wrong. :-)
ETA: Some more remarks:
Locality does not imply a maximum speed. Locality just means that causes don’t jump across space to their effects, they have to cross it point by point. But that says nothing about how fast they cross it. You could have a nonrelativistic local quantum mechanics with no upper speed limit. Eliezer is conflating locality with relativistic locality, which is what he is trying to derive from the assumption of locality. (I concede that no speed limit implies a de facto or practical nonlocality, in that the whole universe would then be potentially relevant for what happens here in the “next moment”; some influence moving at a googol light-years per second might come crashing in upon us.)
Equivocating between locality in a configuration and locality in a configuration space: A configuration is, let’s say, an arrangement of particles in space. Locality in that context is defined by distance in space. But configuration space is a space in which the “points” themselves are whole configurations. “Locality” here refers to similarity between whole configurations. It means that the amplitude for a whole configuration is only immediately influenced by the amplitudes for infinitesimally different whole configurations.
Suppose we’re talking about a configuration in which there are two atoms, A and B, separated by a light-year. The amplitude for that configuration (in an evolving wavefunction) will be affected by the amplitude for a configuration which differs slightly at atom A, and also by the amplitude for a configuration which differs slightly at atom B, a light-year away from A. This is where the indirect nonlocality of QM comes from—if you think of QM in terms of amplitude flows in configuration space: you are attaching single amplitudes to extended objects—arbitrarily large configurations—and amplitude changes can come from very different “directions” in configuration space.
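(A toy sketch of what that neighborhood structure looks like, for a hypothetical free, non-interacting pair of particles on a grid; the grid size and time step are arbitrary. The single amplitude attached to the whole configuration (x_A, x_B) gets updated only from configurations that differ by one grid step at A or at B, however far apart A and B sit in ordinary space.)

```python
import numpy as np

n = 64
psi = np.zeros((n, n), dtype=complex)   # psi[iA, iB]: one amplitude per whole configuration
psi[10, 50] = 1.0                       # A near one end of the grid, B near the other

def step(psi, dt=0.01):
    """Crude Euler step of i d(psi)/dt = -lap(psi) on configuration space."""
    lap = np.zeros_like(psi)
    lap[1:-1, :] += psi[2:, :] - 2 * psi[1:-1, :] + psi[:-2, :]   # neighbors differing at A
    lap[:, 1:-1] += psi[:, 2:] - 2 * psi[:, 1:-1] + psi[:, :-2]   # neighbors differing at B
    return psi + 1j * dt * lap

psi = step(psi)
# After one step, only configurations adjacent in configuration space -- differing
# slightly at A *or* slightly at B -- have picked up any amplitude.
print(np.argwhere(np.abs(psi) > 0))
```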
Eliezer also talks about amplitudes for subconfigurations. He wants to be able to say that what happens at a point only depends on its immediate environment. But if you want to talk like this, you have to retreat from talking about specific configurations, and instead talk about regions of space, and the quantum state of a “region of space”, which will associate an amplitude with every possible subconfiguration confined to that region.
This is an important consideration for MWI, evaluated from a relativistic perspective, because relativity implies that a “configuration” is not a fundamental element of reality. A configuration is based on a particular slicing of space-time into equal-time hypersurfaces, and in relativity, no such slicing is to be preferred as ontologically superior to all others. Ultimately that means that only space-time points, and the relations between them (spacelike, lightlike, timelike) are absolute; assembling sets of points into spacelike hypersurfaces is picking a particular reference frame.
This causes considerable problems if you want to reify quantum wavefunctions—treat them as reality, rather than as constructs akin to probability distributions—because (for any region of space bigger than a point) they are always based on a particular hypersurface, and therefore a particular notion of simultaneity; so to reify the wavefunction is to say that the reference frame in which it is defined is ontologically preferred. So then you could say, all right, we’ll just talk about wavefunctions based at a point. But building up an extended wavefunction from just local information is not a simple matter. The extended wavefunction will contain entanglement but the local information says nothing about entanglement. So the entanglement has to come from how you “combine” the wavefunctions based at points. Potentially, for any n points that are spacelike with respect to each other, there will need to be “entanglement information” on how to assemble them as part of a wavefunction for configurations.
I don’t know where that line of thought takes you. But in ordinary Copenhagen QM, applied to QFT, this just doesn’t even come up, because you treat space-time, and particular events in space-time, as the reality, and wavefunctions, superpositions, sums over histories, etc, as just a method of obtaining probabilities about reality. Copenhagen is unsatisfactory as an ontological picture because it glosses over the question of why QM works and of what happens in between one “definite event” and the next. But the attempt to go to the opposite interpretive pole, and say “OK, the wavefunction IS reality” is not a simple answer to your philosophical problems either; instead, it’s the beginning of a whole new set of problems, including, how do you reify wavefunctions without running foul of relativity?
Returning to Eliezer’s argument, which purports to derive the existence of a causal speed-limit from a postulate of “locality”: my critique is as informal and inexact as his argument, but perhaps I’ve at least shown that this is not as simple a matter as it may appear to the uninformed reader. There are formidable conceptual problems involved just in getting started with such an argument. Eliezer has the essentials needed to think about these topics rigorously, but he’s passing over crucial details, and he may thereby be overlooking a hole in his intuitions. In mathematics, you may start out with a reasonable belief that certain objects always behave in a certain way, but then when you examine specifics, you discover a class of cases which work in a way you didn’t anticipate.
What if you have a field theory with no speed limit, but in which significant and ultra-fast-moving influences are very rare; so that you have an effective “locality” (in Eliezer’s sense), with a long tail of very rare disruptions? Would Eliezer consider that a disproof of his intuitive idea, or an exception which didn’t sully the correctness of the individual insight? I have no idea. But I can say that the literature of physics is full of bogus derivations of special relativity, the Born rule, the three-dimensionality of space, etc. This derivation of “c” from Pearlian causal locality certainly has the ingredients necessary for such a bogus derivation. The way to make it non-bogus is to make it deductively valid, rather than just intuitive. This means that you have to identify and spell out all the assumptions required for the deduction.
This may or may not be the result of day 2 of modafinil. :) I don’t think it is, because I already had most of the pieces in place, it just took that sentence to make everything fit together. But that is a data point.
Hm, a trace-debug. My thought process over the five minutes that this took place was manipulation of mental imagery of my models of the universe. I’m not going to be able to explain it much more clearly than that, unfortunately. It was all very intuitive and not at all rigorous; the closest representation I can think of is Feynman’s thinking about balls. I’m going to have to do a lot more reading, as my QM is very shaky, and I want to shore this up. It will also probably take a while until this way of thinking becomes the natural way I see the universe. But it all lines up, makes sense, and aligns with what people smarter than me are saying, so I’m assigning a high probability that it’s the correct conclusion.
An upper speed limit doesn’t matter; all that matters for locality to be valid is that influences are not instantaneous.
A conversion experience is a very appropriate term for what I’m going through. I’m having very mixed emotions right now. A lot of my thoughts just clarified, which simply feels good. I’m grateful, because I live in an era where this is possible and because I was born intelligent enough to understand. Sad, because I know that most if not all of the people I know will never understand, and never sign up for cryonics. But I’m also ecstatic, because I’ve just discovered the cheat code to the universe, and it works.
I just made a long-winded addition to my comment, expanding on some of the gaps in Eliezer’s reasoning.
I’m also ecstatic, because I’ve just discovered the cheat code to the universe, and it works.
Well, you’re certainly not backing down and saying, hang on, is this just an illusory high? It almost seems inappropriate to dump cold water on you precisely when you’re having your satori—though it’s interesting from an experimental perspective. I’ve never had the opportunity to meddle with someone who thinks they are receiving enlightenment, right at the moment when it’s happening; unless I count myself.
From my perspective, QM is far more likely to be derived from ’t Hooft’s holographic determinism, and the idea of personal identity as a fungible pattern is just (in historical terms) a fad resulting from the incomplete state of our science, so I certainly regard your excitement as based mostly on an illusion. It’s good that you’re having exciting ideas and new thoughts, and perhaps it’s even appropriate to will yourself to believe them, because that’s a way of testing them against the whole of the rest of your experience.
But I still find it interesting how it is that people come to think that they know something new, when they don’t actually know it. How much does the thrill of finally knowing the truth provide an incentive to believe that the ideas currently before you are indeed the truth, rather than just an interesting possibility?
From experiences back when I was young and religious, I’ve learned to recognize moments of satori as not much more than a high (have probably had 2-3 prior). I enjoy the experience, but I’ve learned skepticism and try not to place too much weight on them. I was more describing the causes for my emotional states rather than proclaiming new beliefs. But to be completely honest, for several minutes I was convinced that I had found the tree of life, so I won’t completely downplay what I wrote.
How much does the thrill of finally knowing the truth provide an incentive to believe that the ideas currently before you are indeed the truth, rather than just an interesting possibility?
I suspect it has evopsych roots relating to confidence, the measured benefits of a life with purpose, and good-enough knowledge.
Reading ‘t Hooft’s paper, I could understand what he was saying, but I’m realizing that the physics is out of my current depth. And I understand the argument you explained about the flaws in spatial (as opposed to configuration-space) locality. I’ll update my statement that ‘Many-Worlds is intuitively correct’ to ‘Copenhagen is intuitively wrong,’ which I suppose is where my original logic should have taken me—I just didn’t consider strong MWI alternatives. Determinism kills quantum suicide, so I’ll have to move down the priority of cryonics (though the ‘if MWI then quantum suicide then cryonics’ logic still holds, and I still think cryonics is a good idea; I do love me a good hedge bet). But like I said, I’m not at all qualified to start assigning likelihoods here between different QM origins. This requires more study.
I don’t see the issue with consciousness being represented by the pattern of our brains rather than their physicality. You are right that we may eventually find that we can never look at a brain with high enough resolution to emulate it. But based on cases of people entering a several-hour freeze before being revived, the consciousness mechanism is obviously robust, and I’d say this points towards it being an engineering problem of getting everything correct enough. The viability of putting it on a computer once you have a high enough resolution scan is not an issue—worst case scenario, you start from something like QM and work up. Again, this assumes a level of robustness in the brain (rounding errors shouldn’t crash the mind), but I would call that experimentally proven in today’s humans.
Note also that some of the recent papers do explicitly discuss causality issues. See e.g. this one.