I’ll take bets at 99-to-1 odds against any information propagating faster than c. Note that this is not a bet for the results being methodologically flawed in any particular way, though I would indeed guess some simple flaw. It is just a bet that when the dust settles, it will not be possible to send signals at a superluminal velocity using whatever is going on—that there will be no propagation of any cause-and-effect relation at faster than lightspeed.
My real probability is lower, but I think that anyone who’d bet against me at 999-to-1 will probably also bet at 99-to-1, so 99-to-1 is all I’m offering.
I will not accept more than $20,000 total of such bets.
I’ll take that bet, for a single pound on my part against 99 from Eliezer.
(explanation: I have a 98-2 bet with my father against the superluminal information propagation being true, so this sets up a nice little arbitrage).
Is that c the speed of light in vacuum or c the constant in special relativity?
c is the constant as it appears in fundamental physical equations, relativistic or quantum. Anything slowing down the propagation of photons through an apparent vacuum (such as interaction with dark matter) which did not affect, for example, the mass-energy equivalence of E=mc², would not win the bet.
If Kevin doesn’t go through with taking that bet for $202, I’ll take it for $101.
I suggest clarifying the bet to say “information propagating faster than c as c is defined at the time of this bet”. With that clarification, I can pay up front in cash for $202 as soon as possible.
There are many definitions of c—it appears as a constant in many different physical equations. Right now, all of these definitions are consistent. If you have a new physics where all these definitions remain consistent and you can still transmit information faster than c, then certainly I have lost the bet. Other cases would be harder to settle—I did state that weird physics along the lines of “this is why photons are slowed down in a vacuum by dark matter, but neutrinos aren’t slowed” wouldn’t win the bet.
Minerva remembered what Harry had told her… how people were usually too optimistic, even when they thought they were being pessimistic. It was the sort of information that preyed on your mind, dwelling in it and spinning off nightmares...
Actually, what is the worst that could happen? It’s not [the structure of the universe is destabilized by the breakdown of causality], because that would have already happened if it were going to.
The obvious one would be [Eliezer loses $20,000], except that would only occur in the event that it were possible to violate causality, in which case he would presumably arrange to prevent his past self from making the bet in the first place, yeah? So really, it’s a win-win.
Unless one of the people betting against him is doing so because ve received a mysterious parchment on which was written, in ver own hand, “MESS WITH TIME.”
If there are ways to violate causality they are likely restrictive enough that we can’t use them to violate causality prior to when we knew about the methods (roughly). This is true for most proposed causality violating mechanisms. For example, you might be able to violate causality with a wormhole, but you can’t do it to any point in spacetime prior to the existence of the wormhole.
In general, if there are causality violating mechanisms, we should expect that they can’t violate causality so severely as to make the past become radically altered, since we just don’t see that. It is conceivable that such manipulation is possible but that once we find an effective method of violating causality we will be quickly wiped out (possibly by bad things related to the method itself), but this seems unlikely even assuming one already has a causality violating mechanism.
Mostly agree. Would downgrade to “can’t or won’t”. Apart from being a little more complete, the difference matters for anthropic considerations.
Does it even make sense to say “won’t”, or for that matter bring up anthropic considerations, in reference to causality violation?
This is a serious question, I don’t know the answer.
I’m not sure. If a universe allows sufficient causality violation then it may be that it will be too unstable for observers to arise in that universe. But I’m not sure about that. This may be causality chauvinism.
(I feel like there’s a joke to be made here, something to do with “causality chauvinism”, “causality violation”, “too unstable for observers to arise”, the relative “looseness” of time travel rules, maybe also the “Big Bang”… it’s on the tip of my brain… nah, I got nothing.)
Does it even make sense to say “won’t” [...] in reference to causality violation?
Yes. (Leave out the anthropics, when that makes sense to bring up is complicated.)
Most of the reason for saying:
If there are ways to violate causality they are likely restrictive enough that we can’t use them to violate causality prior to when we knew about the methods (roughly).
… are somewhat related to “causality doesn’t appear to be violated”. If (counterfactually) causality can be violated then it seems like it probably hasn’t happened yet. This makes it a lot more likely that causality violations (like wormholes and magic) that are discovered in the future will not affect things before their discovery. This includes the set of (im)possible worlds in which prior-to-the-magic times cannot be interfered with and also some other (im)possible worlds in which it is possible but doesn’t happen because it is hard.
An example would be faster-than-light neutrinos. It would be really damn hard to influence the past significantly with such neutrinos with nothing set up to catch them. It would be much easier to set up a machine to receive messages from the future.
It may be worth noting that “causality violation” does not imply “complete causality meltdown”. The latter would definitely make “won’t” rather useless.
Well, it’s just… how could you tell? I mean, maybe the angel that told Colombo to sail west was a time-travelling hologram sent to avert the Tlaxcalan conquest of Europe.
An example would be faster-than-light neutrinos. It would be really damn hard to influence the past significantly with such neutrinos with nothing set up to catch them.
Well yes, I understand you probably couldn’t use faster-than-light neutrinos from the future (FTLNFTFs) to effect changes in the year 1470 any more easily or precisely than by, say, creating a neutrino burst equivalent to 10^10^9999 galaxies going supernova simultaneously one AU from Earth, presumably resulting in the planet melting or some such thing, I don’t know.
However, elsewhere in this thread I suggested a method that takes advantage of a system that already exists and is set up to detect neutrinos (admittedly not FTLNFTFs specifically, though I don’t know why that should matter). I still don’t see exactly what prevents Eliezer_2831 from fiddling around with MINOS’s or CERN’s observations in a causality-violating but not-immediately-obvious manner.
Other than, you know, basic human decency.
Well, it’s just… how could you tell? I mean, maybe the angel that told Colombo to sail west was a time-travelling hologram sent to avert the Tlaxcalan conquest of Europe.
We obviously can’t with certainty. But we can say it is highly unlikely. The universe looks to us like it has a consistent causal foundation rather than being riddled with arbitrary causality violations. That doesn’t make isolated interventions impossible, just unlikely.
I still don’t see exactly what prevents Eliezer_2831 from fiddling around with MINOS’s or CERN’s observations in a causality-violating but not-immediately-obvious manner.
Overwhelming practical difficulties. To get over 800 years of time travel in one hop using neutrinos going very, very slightly faster than light, the neutrinos would have to be shot from a long, long way away. Getting a long, long way away takes time and is only useful if you are travelling close enough to the speed of light that on the return trip the neutrinos gain more time than what you spent travelling. Eliezer_2831 would end up on the other side of the universe somewhere, and the energy required to shoot enough neutrinos to communicate over that much distance would be enormous. The scope puts me in mind of the Tenth Doctor: “And it takes a lot of power to send this projection— I’m in orbit around a supernova. [smiling weakly] I’m burning up a sun just to say goodbye.”
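One crude way to size the “long, long way away” claim, assuming the neutrinos beat light by roughly the OPERA-sized fraction (about 60 ns over 730 km, i.e. eps ≈ 2.5e-5; the numbers here are illustrative, not from the comment above):

```python
# Back-of-envelope: how far must slightly-FTL neutrinos fly to arrive
# 800 years ahead of a light signal? The lead grows as (d/c) * eps, so
# we need d/c = 800 yr / eps of light-travel time.
eps = 2.5e-5           # assumed fractional speed excess, (v - c)/c
years_gained = 800

d_light_years = years_gained / eps
print(f"{d_light_years:.2e} light-years")   # ~3.2e7, tens of millions of light-years
```

So even a one-way 800-year lead over light needs an emitter tens of millions of light-years out, before you worry about getting the emitter there in the first place.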
I’m not sure if that scenario is more or less difficult than the remote neutrino manufacturing scenario. The engineering doesn’t sound easy, but once it is done, at any time before the heat death of the universe, you just win. You can send anything back to (almost) any time.
Unless you’re fighting Photino Birds.
But that’s pretty unlikely, yeah.
That sounds like it’s a reference to something awesome. Is it?
Fairly awesome, I’d say.
In the context of almost every proposed causality violation mechanism I’ve seen seriously discussed, it really is can’t, not won’t. Wormholes aren’t the only example. Tipler cylinders, for example, don’t allow time travel prior to the point when they started rotating. Gödel’s rotating universe has similar restrictions. Is there some time travel proposal I’m missing?
I agree that, when considering anthropic issues, “won’t” becomes potentially relevant if we had any idea that time travel could allow travel prior to the existence of the device in question. In that case, I’d actually argue in the other direction: if such machines could exist, I’d expect to see massive signs of such interference in the past.
In the context of almost every proposed causality violation mechanism I’ve seen seriously discussed, it really is can’t, not won’t.
There are plenty of mechanisms in which can’t applies. There are others which don’t have that limitation. I don’t even want to touch what qualifies as ‘seriously discussed’. I’m really not up to date with which kinds of time travel are high status.
Ignore status issues. Instead focus on time travel mechanisms that don’t violate SR. Are there any such mechanisms which allow such violation before the time travel device has been constructed? I’m not aware of any.
Alcubierre drives.
I’m pretty sure—not totally sure, I’m perfectly willing to be corrected by anyone with more knowledge of the physics than me, but still, pretty sure—that the stated objection would not preclude The Future from sending back time-travelling neutrinos to, say, the Main Injector Neutrino Oscillation Search in a pattern that spells out the Morse code for T-E-L-L—E-Y—D-N-M-W-T, possibly even in such a way that they wouldn’t figure out the code until after CERN’s results were published.
This would be really difficult. The primary problem is that neutrinos don’t interact with most things, so to send a signal you’d need to send a massive burst of neutrinos, to the point where we should expect it to show up on other neutrino detectors as well. The only plausible way this might work is if someone used a system at CERN, maybe the OPERA system itself in a highly improved and calibrated form, to send the neutrinos back.
Although if neutrinos can go back in time then so much of physics may be wrong that this sort of speculation is extremely unlikely to be at all helpful. This is almost like going to a 17th century physicist and asking them to speculate what things would be like if nothing could travel faster than the speed of light.
Yeah, see, I’m not betting against random cool new physics, I wouldn’t offer odds like that on there not being a Higgs boson, I’m betting on the local structure of causality. Could I be wrong? Yes, but if I have to pay out that entire bet, it won’t be the most interesting thing that happened to me that day.
How confident am I of this? Not just confident to offer to bet at 99-to-1 odds. Confident enough to say...
“Well, that was an easy, risk-free $202.”
Or to put it even more plainly:
“You turned into a cat! A SMALL cat! You violated Conservation of Energy! That’s not just an arbitrary rule, it’s implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signaling! And cats are COMPLICATED! A human mind can’t just visualize a whole cat’s anatomy and, and all the cat biochemistry, and what about the neurology? How can you go on thinking using a cat-sized brain?”
McGonagall’s lips were twitching harder now. “Magic.”
“Magic isn’t enough to do that! You’d have to be a god!”
The consequence of the FTL neutrinos CERN thinks they found at six sigma significance is not the breakdown of causality. You can have faster-than-light neutrinos without backwards propagation of information. This is not the end of normality, but a new normality, one where Lorentz invariance is broken. This would mean that there is a universal reference frame that trumps but doesn’t destroy relativity. If anything, a universal reference frame seems like a stronger causal structure than relativity.
This whole thing would be so normal that there’s a pre-existing effective field theory called the Standard Model Extension. http://en.wikipedia.org/wiki/Standard-Model_Extension
http://en.wikipedia.org/wiki/Lorentz_transformation
http://en.wikipedia.org/wiki/Lorentz_covariance
http://en.wikipedia.org/wiki/Lorentz-violating_neutrino_oscillations
is suggested Wikipedia skimming; http://blogs.discovermagazine.com/cosmicvariance/2005/10/25/lorentz-invariance-and-you/ is what gave me the intuition of the universal inertial frame.
I’m at around 10% odds on this whole thing seeming like weak consensus in 3 years, and something like >80% odds (on a very, very long bet) that FTL information travel is possible outside of the local structure of causality.
It’s not about transmitting information into the past—it’s about the locality of causality. Consider Judea Pearl’s classic graph with SEASONS at the top, SEASONS affecting RAIN and SPRINKLER, and RAIN and SPRINKLER both affecting the WETness of the sidewalk, which can then become SLIPPERY. The fundamental idea and definition of “causality” is that once you know RAIN and SPRINKLER, you can evaluate the probability that the sidewalk is WET without knowing anything about SEASONS—the universe of causal ancestors of WET is entirely screened off by knowing the immediate parents of WET, namely RAIN and SPRINKLER.
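To make the screening-off concrete, here is a minimal sketch with made-up conditional probabilities (the numbers and the helper name are illustrative assumptions, not Pearl’s): once RAIN and SPRINKLER are fixed, also conditioning on SEASON changes nothing.

```python
import itertools

# Illustrative (made-up) conditional probabilities for Pearl's graph.
P_SEASON = {"dry": 0.5, "wet": 0.5}
P_RAIN = {"dry": 0.1, "wet": 0.7}        # P(RAIN=1 | SEASON)
P_SPRINKLER = {"dry": 0.8, "wet": 0.2}   # P(SPRINKLER=1 | SEASON)
P_WET = {(0, 0): 0.0, (0, 1): 0.9, (1, 0): 0.9, (1, 1): 0.99}  # P(WET=1 | RAIN, SPRINKLER)

def p_wet_given(rain, sprinkler, season=None):
    """P(WET=1 | RAIN, SPRINKLER [, SEASON]), by brute-force enumeration."""
    num = den = 0.0
    for s, r, k in itertools.product(P_SEASON, (0, 1), (0, 1)):
        if (r, k) != (rain, sprinkler) or (season is not None and s != season):
            continue
        p_r = P_RAIN[s] if r else 1 - P_RAIN[s]
        p_k = P_SPRINKLER[s] if k else 1 - P_SPRINKLER[s]
        joint = P_SEASON[s] * p_r * p_k
        num += joint * P_WET[(r, k)]
        den += joint
    return num / den

# SEASON is screened off by WET's parents RAIN and SPRINKLER:
assert abs(p_wet_given(1, 0) - p_wet_given(1, 0, season="dry")) < 1e-12
assert abs(p_wet_given(1, 0) - p_wet_given(1, 0, season="wet")) < 1e-12
```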
Right now, we have a physics where (if you don’t believe in magical collapses) the amplitude at any point in quantum configuration space is causally determined by its immediate neighborhood of parental points, both spatially and in the quantum configuration space.
In other words, so long as I know the exact (quantum) state of the universe for 300 meters around a point, I can predict the exact (quantum) future of that point 1 microsecond into the future without knowing anything whatsoever about the rest of the universe. If I know the exact state for 3 meters around, I can predict the future of that point ten nanoseconds later. And so on to the continuous limit: the causal factors determining a point’s infinitesimal future are screened off by knowing an infinitesimal spatial neighborhood of its ancestors.
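(A quick arithmetic check of those numbers: the required neighborhood radius is just r = c·t.)

```python
c = 299_792_458  # m/s, exact by definition of the metre

for t, label in ((1e-6, "1 microsecond"), (1e-8, "10 nanoseconds")):
    print(f"{label}: r = c*t = {c * t:,.1f} m")
# 1 microsecond: r = c*t = 299.8 m   (the "300 meters" above)
# 10 nanoseconds: r = c*t = 3.0 m    (the "3 meters" above)
```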
This is the obvious analogue of Judea Pearl’s Causality for continuous time; instead of discrete causal graphs, you have a continuous metric of relatedness (space) which shrinks to an infinitesimal neighborhood as you consider infinitesimal causal succession (time).
This, in turn, implies the existence of a fundamental constant describing how the neighborhood of causally related space shrinks as time diminishes, to preserve the locality of causal relatedness in a continuous physics.
This constant is, obviously, c.
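To see why some such constant has to show up, here is a toy sketch: a lattice with an arbitrary nearest-neighbor averaging rule (chosen purely for illustration, not as real physics). Information moves at most one cell per step, so predicting a cell t steps ahead needs only a radius-t neighborhood, and one cell per step is that lattice’s “c”.

```python
import random

def step(state):
    """One tick of an arbitrary nearest-neighbor rule: each cell's next value
    depends only on its immediate neighbors (locality by construction)."""
    n = len(state)
    return [(state[(i - 1) % n] + state[(i + 1) % n]) / 2 for i in range(n)]

def evolve(state, t):
    for _ in range(t):
        state = step(state)
    return state

random.seed(0)
world = [random.random() for _ in range(101)]
t, center = 10, 50

exact = evolve(world, t)[center]

# Replace everything outside the radius-t neighborhood of `center` with junk:
junk = [x if abs(i - center) <= t else random.random()
        for i, x in enumerate(world)]

# The far field is screened off: the future of `center` is fully determined
# by its radius-t past neighborhood, the lattice analogue of a light cone.
assert evolve(junk, t)[center] == exact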
I’ve never read this anywhere else, by the way. It clearly isn’t universally understood, because if all physicists understood the universe in these terms, none of them would believe in a “collapse of the wavefunction”, which is not locally related in the configuration space. I would be surprised neither to find that the above statement is original, nor that it has been said before.
I am attempting to bet that physics still looks like this after the dust settles. It’s a stronger condition than global noncircularity of time—not all models with globally noncircular time have local causality.
If violating Lorentz invariance means that physics no longer looks like this, then I will bet at 99-to-1 odds against violations of Lorentz invariance. But I can’t make out from the Wikipedia pages whether Lorentz violations mean the end of local causality (which I’ll bet against) or if they’re random weird physics (which I won’t bet against).
I am also willing to bet that the fundamental constant c as it appears in multiple physical equations is the constant of time/space locality, i.e., the constant we know as c is fundamentally the shrinking constant by which an infinitesimal neighborhood in space causally determines an infinitesimal future in time. I am willing to lose the bet if there’s still locality but the real size of the infinitesimal spatial neighborhood goes as 2c rather than c (though I’m not actually sure whether that statement is even meaningful in a Lorentz-invariant universe) and therefore you can use neutrinos to transmit information at up to twice the speed of light, but no faster. The clues saying that c is the fundamental constant that we should expect to see in any continuous analogue of a locally causal universe are strong enough that I’ll bet on them at 99-to-1 odds.
What I can’t make out is whether Lorentz violation throws away locality; employs a more complicated definition of c which is different in some directions than others; makes the effect of the constant different on neutrinos and photons; or, well, what exactly.
I would happily amend the bet to be annulled in the case that any more complicated definition of c is adopted by which there is still a constant of time/space locality in causal propagation, but it makes photons and neutrinos move at different speeds.
The trouble is that physicists don’t read books like Causality and don’t understand local causality as part of the apparent character of physical law, which is why some of them still believe in the “collapse of the wavefunction”—it would be an exceptional physicist whom we could simply ask whether the Standard Model Extension preserves locally continuous causality with c as the neighborhood-size constant.
This is starting to remind me of Kant. Specifically, his attempt to provide an a priori justification for the then-known laws of physics. This made him look incredibly silly once relativity and quantum mechanics came along.
And Einstein was better at the same sort of philosophy and used it to predict new physical laws that he thought should have the right sort of style (though I’m not trying to do that, just read off the style of the existing model). But anyway, I’d pay $20,000 to find out I’m that wrong—what I want to eliminate is the possibility of paying $20,000 to find out I’m right.
You need to distinguish different notions of local causality. SR, in most forms, implies the very strong form of local causality that you seem to be using here. But it is important to note that very well-behaved systems can fail to obey this, and it isn’t just weird systems. For example, a purely Newtonian universe won’t obey this sort of strong local causality: a particle from far away can have arbitrarily high velocity and smack into the region we care about. The fact that such well-behaved systems are fine with weaker forms of local causality suggests that we shouldn’t assign such importance to strong local causality.
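A minimal sketch of that Newtonian point (the function name and the sample numbers are made up for illustration): with no speed cap, no finite radius ever suffices, because for any radius R and horizon t, a particle just outside R moving faster than R/t still arrives in time, and nothing bounds its speed.

```python
def min_intruder_speed(R, t):
    """Newtonian kinematics: slowest particle starting just outside radius R
    (meters) that can reach the center within time t (seconds)."""
    return R / t

# However large a neighborhood you buy, some admissible particle outruns it:
for R in (3.0, 300.0, 3.0e8):
    v = min_intruder_speed(R, 1e-6)
    print(f"R = {R:g} m: anything faster than {v:g} m/s spoils a 1-microsecond prediction")
```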
What I can’t make out is whether Lorentz violation throws away locality; employs a more complicated definition of c which is different in some directions than others; makes the effect of the constant different on neutrinos and photons; or, well, what exactly
This isn’t a well-defined question. It depends very much on what sort of Lorentz violation you are talking about. Imagine that you are working in a Newtonian framework and someone asks “well, if gravity doesn’t always decrease at a 1/r^2 rate, will the three-body problem still be hard?” The problem is that the set of systems which violate Lorentz invariance is so large that the question isn’t that helpful.
The trouble is that physicists don’t read books like Causality and don’t understand local causality as part of the apparent character of physical law,
The vast majority of physicists aren’t thinking about how to replace the fundamental laws with other, more unifying fundamental laws. The everyday work of physicists is stuff like trying to measure the rest mass of elementary particles more precisely, or being better able to predict the properties of pure water near a transition state, or trying to better model the behavior of high-temperature superconductors. They don’t have reason to think about these issues. But even if they did, they probably wouldn’t take these sorts of ideas as seriously as you do. Among other problems, strong local causality is something which appeals to a set of intuitions. And humans are notoriously bad at intuiting how the universe behaves. We evolved to get mates and avoid tigers, not to intuit the details of the causal structure of reality.
It clearly isn’t universally understood, because if all physicists understood the universe in these terms, none of them would believe in a “collapse of the wavefunction”, which is not locally related in the configuration space.
And just like that, Many-Worlds clicked for me. It’s now incredibly obvious just how preposterous wavefunction collapse is, and this new intuitive mental model clears up a lot of the frustrating sticking points I was having with QM. c as the speed limit of information in the universe and the notion of local causality have both been a native part of my view of the universe for a while, but it wasn’t until that sentence that I connected them to decoherence.
Edit: Wow, a lot more things just clicked, including quantum suicide. My priority on cryonics just shot up several orders of magnitude, and I’m going to sign up once I’ve graduated and start bringing in income. Eliezer, if you have never seen The Prestige, I recommend you go and watch it. It provides a nice allegory for MW/quantum suicide that I think a lot of lay-people will be able to connect to easily. Could help when you’re explaining things.
Edit2: Just read your cryonics 101, and while the RIGHT NOW message punctured my akrasia, I looked it up and even the $310/yr is not affordable right now. However, it’s far more affordable than I had thought, and in a couple months I should be in a position where this becomes sustainably possible.
By the way, thank you. You probably know this on an intuitive level, but it should be good to hear that your work may very well be saving lives.
Username, you’re having a small conversion experience here, going from “causality is local” to “wavefunction collapse is preposterous” to “I understand quantum suicide” to “I’d better sign up for cryonics once I graduate” in rapid succession. It’s a shame we can’t freeze you right now, and then do a trace-and-debug of your recent thoughts, as a case study.
This was a somewhat muddled comment from Eliezer. Local causality does not imply an upper speed limit on how fast causal influences can propagate. Then he equivocates between locality within a configuration and locality within configuration space. Then he says that if only everyone in physics thought like this, they wouldn’t have wrong opinions about how QM works. I can only guess how you personally relate all that to decoherence. And from there, you get to increased confidence in cryonics. It could only happen on Less Wrong. :-)
ETA: Some more remarks:
Locality does not imply a maximum speed. Locality just means that causes don’t jump across space to their effects, they have to cross it point by point. But that says nothing about how fast they cross it. You could have a nonrelativistic local quantum mechanics with no upper speed limit. Eliezer is conflating locality with relativistic locality, which is what he is trying to derive from the assumption of locality. (I concede that no speed limit implies a de-facto or practical nonlocality, in that the whole universe would then be potentially relevant for what happens here in the “next moment”; some influence moving at a googol light-years per second might come crashing in upon us.)
Equivocating between locality in a configuration and locality in a configuration space: A configuration is, let’s say, an arrangement of particles in space. Locality in that context is defined by distance in space. But configuration space is a space in which the “points” themselves are whole configurations. “Locality” here refers to similarity between whole configurations. It means that the amplitude for a whole configuration is only immediately influenced by the amplitudes for infinitesimally different whole configurations.
Suppose we’re talking about a configuration in which there are two atoms, A and B, separated by a light-year. The amplitude for that configuration (in an evolving wavefunction) will be affected by the amplitude for a configuration which differs slightly at atom A, and also by the amplitude for a configuration which differs slightly at atom B, a light-year away from A. This is where the indirect nonlocality of QM comes from—if you think of QM in terms of amplitude flows in configuration space: you are attaching single amplitudes to extended objects—arbitrarily large configurations—and amplitude changes can come from very different “directions” in configuration space.
Eliezer also talks about amplitudes for subconfigurations. He wants to be able to say that what happens at a point only depends on its immediate environment. But if you want to talk like this, you have to retreat from talking about specific configurations, and instead talk about regions of space, and the quantum state of a “region of space”, which will associate an amplitude with every possible subconfiguration confined to that region.
This is an important consideration for MWI, evaluated from a relativistic perspective, because relativity implies that a “configuration” is not a fundamental element of reality. A configuration is based on a particular slicing of space-time into equal-time hypersurfaces, and in relativity, no such slicing is to be preferred as ontologically superior to all others. Ultimately that means that only space-time points, and the relations between them (spacelike, lightlike, timelike) are absolute; assembling sets of points into spacelike hypersurfaces is picking a particular reference frame.
This causes considerable problems if you want to reify quantum wavefunctions—treat them as reality, rather than as constructs akin to probability distributions—because (for any region of space bigger than a point) they are always based on a particular hypersurface, and therefore a particular notion of simultaneity; so to reify the wavefunction is to say that the reference frame in which it is defined is ontologically preferred. So then you could say, all right, we’ll just talk about wavefunctions based at a point. But building up an extended wavefunction from just local information is not a simple matter. The extended wavefunction will contain entanglement but the local information says nothing about entanglement. So the entanglement has to come from how you “combine” the wavefunctions based at points. Potentially, for any n points that are spacelike with respect to each other, there will need to be “entanglement information” on how to assemble them as part of a wavefunction for configurations.
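A small sketch of “the local information says nothing about entanglement”, using standard linear algebra (the specific states are illustrative): a maximally entangled Bell pair and an unentangled 50/50 classical mixture have identical single-site reduced density matrices, so pointwise data underdetermines the global state.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), as a density matrix indexed (a, j, b, j').
bell = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(bell, bell).reshape(2, 2, 2, 2)

# Unentangled 50/50 mixture of |00><00| and |11><11|.
rho_mix = (0.5 * (np.diag([1.0, 0, 0, 0]) + np.diag([0, 0, 0, 1.0]))).reshape(2, 2, 2, 2)

# Trace out the second qubit: everything an observer at site A can see.
rho_A_bell = np.einsum('ajbj->ab', rho_bell)
rho_A_mix = np.einsum('ajbj->ab', rho_mix)

print(np.allclose(rho_A_bell, rho_A_mix))  # True: locally indistinguishable
```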
I don’t know where that line of thought takes you. But in ordinary Copenhagen QM, applied to QFT, this just doesn’t even come up, because you treat space-time, and particular events in space-time, as the reality, and wavefunctions, superpositions, sums over histories, etc, as just a method of obtaining probabilities about reality. Copenhagen is unsatisfactory as an ontological picture because it glosses over the question of why QM works and of what happens in between one “definite event” and the next. But the attempt to go to the opposite interpretive pole, and say “OK, the wavefunction IS reality” is not a simple answer to your philosophical problems either; instead, it’s the beginning of a whole new set of problems, including, how do you reify wavefunctions without running foul of relativity?
Returning to Eliezer’s argument, which purports to derive the existence of a causal speed-limit from a postulate of “locality”: my critique is as informal and inexact as his argument, but perhaps I’ve at least shown that this is not as simple a matter as it may appear to the uninformed reader. There are formidable conceptual problems involved just in getting started with such an argument. Eliezer has the essentials needed to think about these topics rigorously, but he’s passing over crucial details, and he may thereby be overlooking a hole in his intuitions. In mathematics, you may start out with a reasonable belief that certain objects always behave in a certain way, but then when you examine specifics, you discover a class of cases which work in a way you didn’t anticipate.
What if you have a field theory with no speed limit, but in which significant and ultra-fast-moving influences are very rare; so that you have an effective “locality” (in Eliezer’s sense), with a long tail of very rare disruptions? Would Eliezer consider that a disproof of his intuitive idea, or an exception which didn’t sully the correctness of the individual insight? I have no idea. But I can say that the literature of physics is full of bogus derivations of special relativity, the Born rule, the three-dimensionality of space, etc. This derivation of “c” from Pearlian causal locality certainly has the ingredients necessary for such a bogus derivation. The way to make it non-bogus is to make it deductively valid, rather than just intuitive. This means that you have to identify and spell out all the assumptions required for the deduction.
This may or may not be the result of day 2 of modafinil. :) I don’t think it is, because I already had most of the pieces in place, it just took that sentence to make everything fit together. But that is a data point.
Hm, a trace-debug. My thought process over the five minutes that this took place was manipulation of mental imagery of my models of the universe. I’m not going to be able to explain much clearer than that, unfortunately. It was all very intuitive and not at all rigorous; the closest representation I can think of is Feynman’s thinking about balls. I’m going to have to do a lot more reading, as my QM is very shaky and I want to shore this up. It will also probably take a while until this way of thinking becomes the natural way I see the universe. But it all lines up, makes sense, and aligns with what people smarter than me are saying, so I’m assigning a high probability that it’s the correct conclusion.
An upper speed limit doesn’t matter; for locality to be valid, all that matters is that influences are not instantaneous.
A conversion experience is a very appropriate term for what I’m going through. I’m having very mixed emotions right now. A lot of my thoughts just clarified, which simply feels good. I’m grateful, because I live in an era where this is possible and because I was born intelligent enough to understand. Sad, because I know that most if not all of the people I know will never understand, and never sign up for cryonics. But I’m also ecstatic, because I’ve just discovered the cheat code to the universe, and it works.
I just made a long-winded addition to my comment, expanding on some of the gaps in Eliezer’s reasoning.
I’m also ecstatic, because I’ve just discovered the cheat code to the universe, and it works.
Well, you’re certainly not backing down and saying, hang on, is this just an illusory high? It almost seems inappropriate to dump cold water on you precisely when you’re having your satori—though it’s interesting from an experimental perspective. I’ve never had the opportunity to meddle with someone who thinks they are receiving enlightenment, right at the moment when it’s happening; unless I count myself.
From my perspective, QM is far more likely to be derived from ’t Hooft’s holographic determinism, and the idea of personal identity as a fungible pattern is just (in historical terms) a fad resulting from the incomplete state of our science, so I certainly regard your excitement as based mostly on an illusion. It’s good that you’re having exciting ideas and new thoughts, and perhaps it’s even appropriate to will yourself to believe them, because that’s a way of testing them against the whole of the rest of your experience.
But I still find it interesting how it is that people come to think that they know something new, when they don’t actually know it. How much does the thrill of finally knowing the truth provide an incentive to believe that the ideas currently before you are indeed the truth, rather than just an interesting possibility?
From experiences back when I was young and religious, I’ve learned to recognize moments of satori as not much more than a high (have probably had 2-3 prior). I enjoy the experience, but I’ve learned skepticism and try not to place too much weight on them. I was more describing the causes for my emotional states rather than proclaiming new beliefs. But to be completely honest, for several minutes I was convinced that I had found the tree of life, so I won’t completely downplay what I wrote.
How much does the thrill of finally knowing the truth provide an incentive to believe that the ideas currently before you are indeed the truth, rather than just an interesting possibility?
I suspect it has evopsych roots relating to confidence, the measured benefits of a life with purpose, and good-enough knowledge.
Reading ‘t Hooft’s paper I could understand what he was saying, but I’m realizing that the physics is out of my current depth. And I understand the argument you explained about the flaws in spatial (as opposed to configuration) locality. I’ll update my statement that ‘Many-Worlds is intuitively correct’ to ‘Copenhagen is intuitively wrong,’ which I suppose is where my original logic should have taken me—I just didn’t consider strong MWI alternatives. Determinism kills quantum suicide, so I’ll have to move down the priority of cryonics (though the ‘if MWI then quantum suicide then cryonics’ logic still holds, and I still think cryonics is a good idea. I do love me a good hedge bet). But like I said, I’m not at all qualified to start assigning likelihoods here between different QM origins. This requires more study.
I don’t see the issue with consciousness being represented by the pattern of our brains rather than their physical substance. You are right that we may eventually find that we can never look at a brain with high enough resolution to emulate it. But based on cases of people entering a several-hour freeze before being revived, the consciousness mechanism is obviously robust, and I’d say this points towards it being an engineering problem of getting everything correct enough. The viability of putting it on a computer once you have a high enough resolution scan is not an issue—worst-case scenario, you start from something like QM and work up. Again this assumes a level of the brain’s robustness (rounding errors shouldn’t crash the mind), but I would call that experimentally proven in today’s humans.
That might preserve before-and-after. It wouldn’t preserve the locality of causality. Once you throw away c, you might need to take the entire frame of the universe into account when calculating the temporal successor at any given point, rather than just the immediate spatial neighborhood.
There could be some other special velocity than c. Like, imagine there’s some special reference frame in which you can send superluminal signals at exactly 2.71828 c in any direction. In other reference frames, this special velocity depends on which direction you send the signal. Lorentz invariance is broken. But the only implication for local causality is that you need to make your bubble 2.71828 times bigger.
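A sketch of how that direction dependence falls out, assuming ordinary relativistic velocity addition still applies to the hypothetical signal (units where c = 1; the frame speed 0.3c is an arbitrary illustrative choice):

```python
import math

c = 1.0
u = math.e   # the hypothetical signal speed in the special frame: 2.71828... c
v = 0.3      # our frame's speed relative to the special frame

def in_our_frame(u_signal):
    """Relativistic velocity addition: the signal's speed seen from our frame."""
    return (u_signal - v) / (1 - u_signal * v / c**2)

print(in_our_frame(+u))   # sent "forward":  about 13.1 c
print(in_our_frame(-u))   # sent "backward": about -1.66 c
```

For boosts large enough that u*v > c^2, the transformed signal even runs backward in time in our coordinates, which is the usual argument that superluminal signals plus unbroken Lorentz invariance would let you message the past; the preferred frame described above is exactly what blocks that loop.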
I’ll take bets at 99-to-1 odds against any information propagating faster than c. Note that this is not a bet for the results being methodologically flawed in any particular way, though I would indeed guess some simple flaw. It is just a bet that when the dust settles, it will not be possible to send signals at a superluminal velocity using whatever is going on—that there will be no propagation of any cause-and-effect relation at faster than lightspeed.
My real probability is lower, but I think that anyone who’d bet against me at 999-to-1 will probably also bet at 99-to-1, so 99-to-1 is all I’m offering.
I will not accept more than $20,000 total of such bets.
I’ll take that bet, for a single pound on my part against 99 from Eliezer.
(explanation: I have a 98-2 bet with my father against the superluminal information propagation being true, so this sets up a nice little arbitrage).
Is that c the speed of light in vacuum or c the constant in special relativity?
c is the constant as it appears in fundamental physical equations, relativistic or quantum. Anything slowing down the propagation of photons through an apparent vacuum (such as interaction with dark matter) which did not affect, for example, the mass-energy equivalence of E=MC2, would not win the bet.
If Kevin doesn’t go through with taking that bet for $202, I’ll take it for $101.
I suggest clarifying the bet to say “information propagating faster than c as c is defined at the time of this bet”. With that clarification, I can pay up front in cash for $202 as soon as possible.
There are many definitions of c—it appears as a constant in many different physical equations. Right now, all of these definitions are consistent. If you have a new physics where all these definitions remain consistent and you can still transmit information faster than c, then certainly I have lost the bet. Other cases would be harder to settle—I did state that weird physics along the lines of “this is why photons are slowed down in a vacuum by dark matter, but neutrinos aren’t slowed” wouldn’t win the bet.
Actually, what is the worst that could happen? It’s not [the structure of the universe is destabilized by the breakdown of causality], because that would have already happened if it were going to.
The obvious one would be [Eliezer loses $20,000], except that would only occur in the event that it were possible to violate causality, in which case he would presumably arrange to prevent his past self from making the bet in the first place, yeah? So really, it’s a win-win.
Unless one of the people betting against him is doing so because ve received a mysterious parchment on which was written, in ver own hand, “MESS WITH TIME.”
If there are ways to violate causality they are likely restrictive enough that we can’t use them to violate causality prior to when we knew about the methods (roughly). This is true for most proposed causality violating mechanisms. For example, you might be able to violate causality with a wormhole, but you can’t do it to any point in spacetime prior to the existence of the wormhole.
In general, if there are causality violating mechanisms we should expect that they can’t violate causality so severely as to make the past become radically altered since we just don’t see that. It is conceivable that such manipulation is possible but that once we find an effective method of violating causality we will be quickly wiped out (possibly by bad things related to the method itself) but this seems unlikely even assuming one already has a causality violating mechanism.
Mostly agree. Would downgrade to “can’t or won’t”. Apart from a little more completeness the difference makes a difference to anthropic considerations.
Does it even make sense to say “won’t”, or for that matter bring up anthropic considerations, in reference to causality violation?
This is a serious question, I don’t know the answer.
I’m not sure. If a universe allows sufficient causality violation then it may be that it will be too unstable for observers to arise in that universe. But I’m not sure about that. This may be causality chauvinism.
(I feel like there’s a joke to be made here, something to do with “causality chauvinism”, “causality violation”, “too unstable for observers to arise”, the relative “looseness” of time travel rules, maybe also the “Big Bang”… it’s on the tip of my brain… nah, I got nothing.)
Yes. (Leave out the anthropics, when that makes sense to bring up is complicated.)
Most of the reason for saying:
… are somewhat related to “causality doesn’t appear to be violated”. If (counterfactually) causality can be violated then it seems like it probably hasn’t happened yet. This makes it a lot more likely that causality violations (like wormholes and magic) that are discovered in the future will not affect things before their discovery. This includes the set of (im)possible worlds in which prior-to-the-magic times cannot be interfered with and also some other (im)possible worlds in which it is possible but doesn’t happen because it is hard.
An example would be faster-than-light neutrinos. It would be really damn hard to influence the past significantly with such neutrinos with nothing set up to catch them. It would be much easier to set up a machine to receive messages from the future.
It may be worth noting that “causality violation” does not imply “complete causality meltdown”. The latter would definitely make “won’t” rather useless.
Well, it’s just… how could you tell? I mean, maybe the angel that told Colombo to sail west was a time-travelling hologram sent to avert the Tlaxcalan conquest of Europe.
Well yes, I understand you probably couldn’t use faster-than-light neutrinos from the future (FTLNFTFs) to effect changes in the year 1470 any more easily or precisely than, say, creating an equivalent neutrino burst to 10^10^9999 galaxies going supernova simultaneously one AU from Earth, presumably resulting in the planet melting or some such thing, I don’t know.
However, elsewhere in this thread I suggested a method that takes advantage of a system that already exists and is set up to detect neutrinos (admittedly not FTLNFTFs specifically, though I don’t know why that should matter). I still don’t see exactly what prevents Eliezer_2831 from fiddling around with MINOS’s or CERN’s observations in a causality-violating but not-immediately-obvious manner.
Other than, you know, basic human decency.
We obviously can’t with certainty. But we can say it is highly unlikely. The universe looks to us like it has a consistent causal foundation rather than being riddled with arbitrary causality violations. That doesn’t make isolated interventions impossible, just unlikely.
Overwhelming practical difficulties. To get over 800 years of time travel in one hop using neutrinos going very, very slightly faster than light the neutrinos would have to be shot from a long, long way away. Getting a long, long, way away takes time and is only useful if you are traveling close enough to the speed of light that on the return trip the neutrinos gain more time than what you spent travelling. Eliezer_2831 would end up on the other side of the universe somewhere and the energy required to shoot enough neutrinos to communicate over that much distance would be enormous. The scope puts me in mind of the Tenth Doctor: “And it takes a lot of power to send this projection— I’m in orbit around a supernova. [smiling weakly] I’m burning up a sun just to say goodbye.”
I’m not sure if that scenario is more or less difficult than the remote neutrino manufacturing scenario. The engineering doesn’t sound easy but once it is done once any time before heat death of the universe you just win. You can send anything back to (almost) any time.
Unless you’re fighting Photino Birds.
But that’s pretty unlikely, yeah.
That sounds like it’s a reference to something awesome. Is it?
Fairly awesome, I’d say.
In the context of almost every proposed causality violation mechanism I’ve seen seriously discussed, it really is can’t, not won’t. Wormholes aren’t the only example. Tipler Cylinders for example don’t allow time travel prior to the point when they started rotating. Godel’s rotating universe has similar restrictions. Is there some time travel proposal I’m missing?
I agree that when considering anthropic issues won’t becomes potentially relevant if we had any idea that time travel could potentially allow travel prior to the existence of the device in question. In that case, I’d actually argue in the other direction: if such machines could exist, I’d expect to see massive signs of such interference in the past.
There are plenty of mechanisms in which can’t applies. There are others which don’t have that limitation. I don’t even want to touch what qualifies as ‘seriously discussed’. I’m really not up to date with which kinds of time travel are high status.
Ignore status issues. Instead focus on time travel mechanisms that don’t violate SR. Are there any such mechanisms which allow such violation before the time travel device has been constructed? I’m not aware of any.
Alcubierre drives.
I’m pretty sure—not totally sure, I’m perfectly willing to be corrected by anyone with more knowledge of the physics than me, but still, pretty sure—that the stated objection would not preclude The Future from sending back time-travelling neutrinos to, say, the Main Injector Neutrino Oscillation Search in a pattern that spells out the Morse code for T-E-L-L—E-Y—D-N-M-W-T, possibly even in such a way that they wouldn’t figure out the code until after CERN’s results were published.
This would be really difficult. The primary problem is that neutrinos don’t interact with most things, so to send a signal you’d need to send a massive burst of neutrinos to the point where we should expect it to show up on other neutrino detectors also. The only plausible way this might work is if someone used a system at CERN, maybe the OPERA system itself in a highly improved and calibrated form to send the neutrinos back.
Although if neutrinos can go back in time then so much of physics may be wrong that this sort of speculation is likely to be extremely unlikely to be at all helpful. This is almost like going to an 17th century physicist and asking them to speculate what things would be like if nothing could travel faster than the speed of light.
Yeah, see, I’m not betting against random cool new physics, I wouldn’t offer odds like that on there not being a Higgs boson, I’m betting on the local structure of causality. Could I be wrong? Yes, but if I have to pay out that entire bet, it won’t be the most interesting thing that happened to me that day.
How confident am I of this? Not just confident to offer to bet at 99-to-1 odds. Confident enough to say...
“Well, that was an easy, risk-free $202.”
Or to put it even more plainly:
The consequence of the FTL neutrinos CERN thinks they found at six sigma significance is not the breakdown of causality. You can have faster than light neutrinos without backwards propagation of information. This is not the end of normality, but a new normality, one where Lorentz invariance is broken. This would mean that there is a universal reference class that trumps but doesn’t destroy relativity. If anything, a universal reference class seems like a stronger causal structure than relativity.
This whole thing would be so normal, that there’s a pre-existing effective field theory called the Standard Model Extension. http://en.wikipedia.org/wiki/Standard-Model_Extension
http://en.wikipedia.org/wiki/Lorentz_transformation
http://en.wikipedia.org/wiki/Lorentz_covariance
http://en.wikipedia.org/wiki/Lorentz-violating_neutrino_oscillations
is suggested WIkipedia skimming, http://blogs.discovermagazine.com/cosmicvariance/2005/10/25/lorentz-invariance-and-you/ is what gave me the intuition of the universal inertial frame.
I’m at around 10% odds on this whole thing seeming like weak consensus in 3 years and something like >80% odds (on a very very long bet) that locally possible FTL information travel is possible outside of the local structure of causality.
It’s not about transmitting information into the past—it’s about the locality of causality. Consider Judea Pearl’s classic graph with SEASONS at the top, SEASONS affecting RAIN and SPRINKLER, and RAIN and SPRINKLER both affecting the WETness of the sidewalk, which can then become SLIPPERY. The fundamental idea and definition of “causality” is that once you know RAIN and SPRINKLER, you can evaluate the probability that the sidewalk is WET without knowing anything about SEASONS—the universe of causal ancestors of WET is entirely screened off by knowing the immediate parents of WET, namely RAIN and SPRINKLER.
Right now, we have a physics where (if you don’t believe in magical collapses) the amplitude at any point in quantum configuration space is causally determined by its immediate neighborhood of parental points, both spatially and in the quantum configuration space.
In other words, so long as I know the exact (quantum) state of the universe for 300 meters around a point, I can predict the exact (quantum) future of that point 1 microsecond into the future without knowing anything whatsoever about the rest of the universe. If I know the exact state for 3 meters around, I can predict the future of that point one nanosecond later. And so on to the continuous limit: the causal factors determining a point’s infinitesimal future are screened off by knowing an infinitesimal spatial neighborhood of its ancestors.
This is the obvious analogue of Judea Pearl’s Causality for continuous time; instead of discrete causal graphs, you have a continuous metric of relatedness (space) which shrinks to an infinitesimal neighborhood as you consider infinitesimal causal succession (time).
This, in turn, implies the existence of a fundamental constant describing how the neighborhood of causally related space shrinks as time diminishes, to preserve the locality of causal relatedness in a continuous physics.
This constant is, obviously, c.
I’ve never read this anywhere else, by the way. It clearly isn’t universally understood, because if all physicists understood the universe in these terms, none of them would believe in a “collapse of the wavefunction”, which is not locally related in the configuration space. I would be surprised neither to find that the above statement is original, nor that it has been said before.
I am attempting to bet that physics still looks like this after the dust settles. It’s a stronger condition than global noncircularity of time—not all models with globally noncircular time have local causality.
If violating Lorentz invariance means that physics no longer looks like this, then I will bet at 99-to-1 odds against violations of Lorentz invariance. But I can’t make out from the Wikipedia pages whether Lorentz violations mean the end of local causality (which I’ll bet against) or if they’re random weird physics (which I won’t bet against).
I am also willing to bet that the fundamental constant c as it appears in multiple physical equations is the constant of time/space locality, i.e., the constant we know as c is fundamentally the shrinking constant by which an infinitesimal neighborhood in space causally determines an infinitesimal future in time. I am willing to lose the bet if there’s still locality but the real size of the infinitesimal spatial neighborhood goes as 2c rather than c (though I’m not actually sure whether that statement is even meaningful in a Lorentz-invariant universe) and therefore you can use neutrinos to transmit information at up to twice the speed of light, but no faster. The clues saying that c is the fundamental constant that we should expect to see in any continuous analogue of a locally causal universe, are strong enough that I’ll bet on them at 99-to-1 odds.
What I can’t make out is whether Lorentz violation throws away locality; employs a more complicated definition of c which is different in some directions than others; makes the effect of the constant different on neutrinos and photons; or, well, what exactly.
I would happily amend the bet to be annulled in the case that any more complicated definition of c is adopted by which there is still a constant of time/space locality in causal propagation, but it makes photons and neutrinos move at different speeds.
The trouble is that physicists don’t read books like Causality and don’t understand local causality as part of the apparent character of physical law, which is why some of them still believe in the “collapse of the wavefunction”—it would be an exceptional physicist whom we could simply ask whether the Standard Model Extension preserves locally continuous causality with c as the neighborhood-size constant.
This is starting to remind me of Kant. Specifically is attempt to provide an a priori justification for the then known laws of physics. This made him look incredibly silly once relativity and quantum mechanics came along.
And Einstein was better at the same sort of philosophy and used it to predict new physical laws that he thought should have the right sort of style (though I’m not trying to do that, just read off the style of the existing model). But anyway, I’d pay $20,000 to find out I’m that wrong—what I want to eliminate is the possibility of paying $20,000 to find out I’m right.
You need to distinguish different notions of local causality. SR implies in most forms a very strong form of local causality that you seem to be using here. But it is important to note that very well behaved systems can not obey this, and it isn’t just weird systems. For example, a purely Newtonian universe won’t obey this sort of strong local causality. A particle from far away can have arbitrarily high velocity and smack into the region we care about. The fact that such well behaved systems are ok with weaker forms of local causality suggests that we shouldn’t assign such importance to local causality.
This isn’t a well-defined question. It depends very much on what sort of Lorentz violation you are talking about. Imagine that you are working in a Newtonian framework and someone asks “well, if gravity doesn’t always decrease at a 1/r^2 rate, will the three body problem still be hard?” The problem is that the set of systems which violate Lorentz is so large that saying this isn’t that helpful.
The vast majority of physicists aren’t thinking about how to do things that replace the fundamental laws with other fundamental more unifying laws. The everday work of physicists is stuff like trying to measure the rest mass of elementary particles more precisely, or being better able to predict the properties of pure water near a transition state, or trying to better model the behavior of high temperature superconductors. They don’t have reason to think about these issues. But even if they did, they probably wouldn’t take these sorts of ideas as seriously as you do. Among other problems, strong local causality is something which appeals to a set of intuitions. And humans are notoriously bad at intuiting how the universe behaves. We evolved to get mates and avoid tigers, not to be able to intuit the details of the causal structure of reality.
And just like that, Many-Worlds clicked for me. It’s now incredibly obvious just how preposterous waveform collapse is, and this new intuitive mental model clears up a lot of the frustrating sticking points I was having with QM. C as the speed limit of information in the universe and the notion of local causality have all been a native part of my view of the universe for a while, but it wasn’t until that sentence that I connected them to decoherence.
Edit: Wow, a lot more things just clicked, including quantum suicide. My priority of cyronics just shot up several orders of magnitude, and I’m going to sign up once I’ve graduated and start bringing in income.
Eliezer, if you have never seen The Prestige, I recommend you go and watch it. It provides a nice allegory for MW/quantum suicide that I think a lot of lay-people will be able to connect to easily. Could help when you’re explaining things.
Edit2: Just read your cyronics 101, and while the RIGHT NOW message punctured through my akrasia, I looked it up and even the $310/yr is not affordable right now. However, it’s far more affordable than I had thought and in a couple months I should be in a position where this becomes sustainably possible.
By the way, thank you. You probably know this on an intuitive level, but it should be good to hear that your work may very well be saving lives.
Username, you’re having a small conversion experience here, going from “causality is local” to “wavefunction collapse is preposterous” to “I understand quantum suicide” to “I’d better sign up for cryonics once I graduate” in rapid succession. It’s a shame we can’t freeze you right now, and then do a trace-and-debug of your recent thoughts, as a case study.
This was a somewhat muddled comment from Eliezer. Local causality does not imply an upper speed limit on how fast causal influences can propagate. Then he equivocates between locality within a configuration and locality within configuration space. Then he says that if only everyone in physics thought like this, they wouldn’t have wrong opinions about how QM works. I can only guess how you personally relate all that to decoherence. And from there, you get to increased confidence in cryonics. It could only happen on Less Wrong. :-)
ETA: Some more remarks:
Locality does not imply a maximum speed. Locality just means that causes don’t jump across space to their effects, they have to cross it point by point. But that says nothing about how fast they cross it. You could have a nonrelativistic local quantum mechanics with no upper speed limit. Eliezer is conflating locality with relativistic locality, which is what he is trying to derive from the assumption of locality. (I concede that no speed limit implies a de-facto or practical nonlocality, in that the whole universe would then be potentially relevant for what happens here in the “next moment”; some influence moving at a googol light-years per second might come crashing in upon us.)
Equivocating between locality in a configuration and locality in a configuration space: A configuration is, let’s say, an arrangement of particles in space. Locality in that context is defined by distance in space. But configuration space is a space in which the “points” themselves are whole configurations. “Locality” here refers to similarity between whole configurations. It means that the amplitude for a whole configuration is only immediately influenced by the amplitudes for infinitesimally different whole configurations.
Suppose we’re talking about a configuration in which there are two atoms, A and B, separated by a light-year. The amplitude for that configuration (in an evolving wavefunction) will be affected by the amplitude for a configuration which differs slightly at atom A, and also by the amplitude for a configuration which differs slightly at atom B, a light-year away from A. This is where the indirect nonlocality of QM comes from, if you think of QM in terms of amplitude flows in configuration space: you are attaching single amplitudes to extended objects (arbitrarily large configurations), and amplitude changes can come from very different “directions” in configuration space.
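A minimal sketch of what that looks like formally (my notation, not anything from Eliezer’s post): for two particles at positions x_A and x_B, the Schrödinger equation in configuration space is

$$i\hbar \, \partial_t \Psi(x_A, x_B, t) = \left[ -\frac{\hbar^2}{2 m_A} \nabla_A^2 - \frac{\hbar^2}{2 m_B} \nabla_B^2 + V(x_A, x_B) \right] \Psi(x_A, x_B, t)$$

The single amplitude Psi(x_A, x_B) is updated using derivatives taken at A and at B simultaneously, even when A and B are a light-year apart. “Neighboring” configurations differ infinitesimally at either atom, so a neighborhood in configuration space straddles arbitrary spatial separations.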
Eliezer also talks about amplitudes for subconfigurations. He wants to be able to say that what happens at a point only depends on its immediate environment. But if you want to talk like this, you have to retreat from talking about specific configurations, and instead talk about regions of space, and the quantum state of a “region of space”, which will associate an amplitude with every possible subconfiguration confined to that region.
This is an important consideration for MWI, evaluated from a relativistic perspective, because relativity implies that a “configuration” is not a fundamental element of reality. A configuration is based on a particular slicing of space-time into equal-time hypersurfaces, and in relativity, no such slicing is to be preferred as ontologically superior to all others. Ultimately that means that only space-time points, and the relations between them (spacelike, lightlike, timelike) are absolute; assembling sets of points into spacelike hypersurfaces is picking a particular reference frame.
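For concreteness, the textbook version of that last point (standard special relativity, not specific to this argument): under a boost with velocity v, times transform as

$$t' = \gamma \left( t - \frac{v x}{c^2} \right)$$

so two spacelike-separated events with equal t generally have unequal t'. Any “configuration” defined on an equal-time slice is therefore frame-dependent from the start.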
This causes considerable problems if you want to reify quantum wavefunctions—treat them as reality, rather than as constructs akin to probability distributions—because (for any region of space bigger than a point) they are always based on a particular hypersurface, and therefore a particular notion of simultaneity; so to reify the wavefunction is to say that the reference frame in which it is defined is ontologically preferred. So then you could say, all right, we’ll just talk about wavefunctions based at a point. But building up an extended wavefunction from just local information is not a simple matter. The extended wavefunction will contain entanglement but the local information says nothing about entanglement. So the entanglement has to come from how you “combine” the wavefunctions based at points. Potentially, for any n points that are spacelike with respect to each other, there will need to be “entanglement information” on how to assemble them as part of a wavefunction for configurations.
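To see why purely local data underdetermines the entanglement, take the standard singlet example (again my illustration): the state

$$|\Psi\rangle = \frac{1}{\sqrt{2}} \left( |0\rangle_A |1\rangle_B - |1\rangle_A |0\rangle_B \right)$$

assigns to each of A and B, taken alone, the maximally mixed density matrix rho_A = rho_B = I/2. The triplet state (the same expression with a plus sign), and even a classical 50/50 mixture of |01> and |10>, yield exactly the same local data. Whatever is “based at” A and at B separately cannot distinguish these global states, so the entanglement information has to be supplied separately.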
I don’t know where that line of thought takes you. But in ordinary Copenhagen QM, applied to QFT, this just doesn’t even come up, because you treat space-time, and particular events in space-time, as the reality, and wavefunctions, superpositions, sums over histories, etc., as just a method of obtaining probabilities about reality. Copenhagen is unsatisfactory as an ontological picture because it glosses over the question of why QM works and of what happens in between one “definite event” and the next. But the attempt to go to the opposite interpretive pole and say “OK, the wavefunction IS reality” is not a simple answer to your philosophical problems either; instead, it’s the beginning of a whole new set of problems, including: how do you reify wavefunctions without running foul of relativity?
Returning to Eliezer’s argument, which purports to derive the existence of a causal speed-limit from a postulate of “locality”: my critique is as informal and inexact as his argument, but perhaps I’ve at least shown that this is not as simple a matter as it may appear to the uninformed reader. There are formidable conceptual problems involved just in getting started with such an argument. Eliezer has the essentials needed to think about these topics rigorously, but he’s passing over crucial details, and he may thereby be overlooking a hole in his intuitions. In mathematics, you may start out with a reasonable belief that certain objects always behave in a certain way, but then when you examine specifics, you discover a class of cases which work in a way you didn’t anticipate.
What if you have a field theory with no speed limit, but in which significant and ultra-fast-moving influences are very rare, so that you have an effective “locality” (in Eliezer’s sense) with a long tail of very rare disruptions? Would Eliezer consider that a disproof of his intuitive idea, or an exception which didn’t sully the correctness of the original insight? I have no idea. But I can say that the literature of physics is full of bogus derivations of special relativity, the Born rule, the three-dimensionality of space, etc. This derivation of “c” from Pearlian causal locality certainly has the ingredients necessary for such a bogus derivation. The way to make it non-bogus is to make it deductively valid, rather than just intuitive. This means you have to identify and spell out all the assumptions required for the deduction.
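A toy version of that scenario (my example; I don’t know whether Eliezer would count it): the heat equation, \partial_t u = D \nabla^2 u, propagates influences instantaneously, but its kernel

$$G(x, t) = \frac{1}{\sqrt{4\pi D t}} \, \exp\!\left( -\frac{x^2}{4 D t} \right)$$

suppresses them so strongly beyond the diffusion scale sqrt(Dt) that you get effective locality with an exponentially rare long tail. A field theory engineered along those lines would satisfy the intuitive “locality” almost always while having no strict speed limit at all, which is exactly the kind of case a merely intuitive derivation cannot rule out.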
This may or may not be the result of day 2 of modafinil. :) I don’t think it is, because I already had most of the pieces in place; it just took that sentence to make everything fit together. But that is a data point.
Hm, a trace-debug. My thought process over the five minutes this took was manipulation of mental imagery of my models of the universe. I’m not going to be able to explain it much more clearly than that, unfortunately. It was all very intuitive and not at all rigorous; the closest representation I can think of is Feynman’s thinking about balls. I’m going to have to do a lot more reading, as my QM is very shaky and I want to shore this up. It will also probably take a while until this way of thinking becomes the natural way I see the universe. But it all lines up, makes sense, and aligns with what people smarter than me are saying, so I’m assigning a high probability that it’s the correct conclusion.
An upper speed limit doesn’t matter; for locality to be valid, all that matters is that influences are not instantaneous.
A conversion experience is a very appropriate term for what I’m going through. I’m having very mixed emotions right now. A lot of my thoughts just clarified, which simply feels good. I’m grateful, because I live in an era where this is possible and because I was born intelligent enough to understand. Sad, because I know that most if not all of the people I know will never understand, and will never sign up for cryonics. But I’m also ecstatic, because I’ve just discovered the cheat code to the universe, and it works.
I just made a long-winded addition to my comment, expanding on some of the gaps in Eliezer’s reasoning.
Well, you’re certainly not backing down and saying, hang on, is this just an illusory high? It almost seems inappropriate to dump cold water on you precisely when you’re having your satori—though it’s interesting from an experimental perspective. I’ve never had the opportunity to meddle with someone who thinks they are receiving enlightenment, right at the moment when it’s happening; unless I count myself.
From my perspective, QM is far more likely to be derived from ’t Hooft’s holographic determinism, and the idea of personal identity as a fungible pattern is just (in historical terms) a fad resulting from the incomplete state of our science, so I certainly regard your excitement as based mostly on an illusion. It’s good that you’re having exciting ideas and new thoughts, and perhaps it’s even appropriate to will yourself to believe them, because that’s a way of testing them against the whole of the rest of your experience.
But I still find it interesting how it is that people come to think that they know something new, when they don’t actually know it. How much does the thrill of finally knowing the truth provide an incentive to believe that the ideas currently before you are indeed the truth, rather than just an interesting possibility?
From experiences back when I was young and religious, I’ve learned to recognize moments of satori as not much more than a high (have probably had 2-3 prior). I enjoy the experience, but I’ve learned skepticism and try not to place too much weight on them. I was more describing the causes for my emotional states rather than proclaiming new beliefs. But to be completely honest, for several minutes I was convinced that I had found the tree of life, so I won’t completely downplay what I wrote.
I suspect it has evopsych roots relating to confidence, the measured benefits of a life with purpose, and good-enough knowledge.
Reading ‘t Hooft’s paper I could understand what he was saying, but I’m realizing that the physics is out of my current depth. And I understand the argument you explained about the flaws in spatial (as opposed to configuration-space) locality. I’ll update my statement from ‘Many-Worlds is intuitively correct’ to ‘Copenhagen is intuitively wrong,’ which I suppose is where my original logic should have taken me; I just didn’t consider strong MWI alternatives. Determinism kills quantum suicide, so I’ll have to move down the priority of cryonics (though the ‘if MWI, then quantum suicide, then cryonics’ logic still holds, and I still think cryonics is a good idea; I do love me a good hedge bet). But like I said, I’m not at all qualified to start assigning likelihoods here between different QM origins. This requires more study.
I don’t see the issue with consciousness being represented by the pattern of our brains rather than their physical substrate. You are right that we may eventually find that we can never look at a brain with high enough resolution to emulate it. But based on cases of people entering a several-hour freeze before being revived, the consciousness mechanism is evidently robust, and I’d say this points towards it being an engineering problem of getting everything correct enough. The viability of putting it on a computer once you have a high-enough-resolution scan is not an issue; worst-case scenario, you start from something like QM and work up. Again this assumes a level of robustness in the brain (rounding errors shouldn’t crash the mind), but I would call that experimentally proven in today’s humans.
Note also that some of the recent papers do explicitly discuss causality issues. See e.g. this one.
Hmm, would you be willing to bet on either the 10% claim or the 80% claim?
Everything you have said until the last paragraph seems reasonable to me, and then those extremely high probabilities jump out.
Not necessarily, there could be a distinguished frame of reference.
That might preserve before-and-after. It wouldn’t preserve the locality of causality. Once you throw away c, you might need to take the entire state of the universe into account when calculating the temporal successor at any given point, rather than just the immediate spatial neighborhood.
There could be some other special velocity than c. Like, imagine there’s some special reference frame in which you can send superluminal signals at exactly 2.71828 c in any direction. In other reference frames, this special velocity depends on which direction you send the signal. Lorentz invariance is broken. But the only implication for local causality is that you need to make your bubble 2.71828 times bigger.
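Spelled out (my arithmetic, on that commenter’s assumption): if the maximum signal speed in the preferred frame is v = 2.71828 c, then the region that can influence a given point within time t is a ball of radius

$$r = v t = 2.71828 \, c t$$

instead of c t. Causality stays local; the “immediate neighborhood” you have to consult is just e times wider.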