Mainstream status:
I haven’t yet particularly seen anyone else point out that there is in fact a way to finitely Turing-compute a discrete universe with self-consistent Time-Turners in it. (In fact I hadn’t yet thought of how to do it at the time I wrote Harry’s panic attack in Ch. 14 of HPMOR, though a primary literary goal of that scene was to promise my readers that Harry would not turn out to be living in a computer simulation. I think there might have been an LW comment somewhere that put me on that track or maybe even outright suggested it, but I’m not sure.)
The requisite behavior of the Time Turner is known as Stable Time Loops on the wiki that will ruin your life, and known as the Novikov self-consistency principle to physicists discussing “closed timelike curve” solutions to General Relativity. Scott Aaronson showed that time loop logic collapses PSPACE to polynomial time.
I haven’t yet seen anyone else point out that space and time look like a simple generalization of discrete causal graphs to continuous metrics of relatedness and determination, with c being the generalization of locality. This strikes me as important, so any precedent for it or pointer to related work would be much appreciated.
The relationship between continuous causal diagrams and the modern laws of physics that you described was fascinating. What’s the mainstream status of that?
Showed up in Penrose’s “The Fabric of Reality.” Curvature of spacetime is determined by infinitesimal light cones at each point. You can get a uniquely determined surface from a connection as well as a connection from a surface.
Obviously physicists totally know about causality being restricted to the light cone! And “curvature of space = light cones at each point” isn’t Penrose, it’s standard General Relativity.
Not claiming it’s his own idea, just that it showed up in the book, I assume it’s standard.
David Deutsch, not Roger Penrose. Or wrong title.
I think probably Penrose’s “The Road to Reality” was intended. I don’t think there’s anything in the Deutsch book like “curvature of spacetime is determined by infinitesimal light cones”; I don’t think I’ve read the relevant bits of the Penrose but it seems like exactly the sort of thing that would be in it.
Page number?
Odd, the last paragraph of the above seems to have gotten chopped. Restored. No, I haven’t particularly heard anyone else point that out but wouldn’t be surprised to find someone had. It’s an important point and I would also like to know if anyone has developed it further.
I found that idea so intriguing I made an account.
Have you considered that such a causal graph can be rearranged while preserving the arrows? I’m inclined to say, for example, that by moving your node E to be on the same level—simultaneous with—B and C, and squishing D into the middle, you’ve done something akin to taking a Lorentz transform?
I would go further to say that the act of choosing a “cut” of a discrete causal graph—and we assume that B, C, and D share some common ancestor to prevent completely rearranging things—corresponds to the act of choosing a reference frame in Minkowski space. Which makes me wonder if max-flow algorithms have a continuous generalization.
edit: in fact, max-flows might be related to Lagrangians. See this.
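A toy way to see the “rearranged while preserving the arrows” point (my own sketch; the little graph and labels below are made up, not the post’s exact diagram): for each node of a small DAG, compute the earliest and the latest “level” it can occupy while every arrow still points from an earlier level to a later one. Nodes with slack can be slid between slices, loosely analogous to simultaneity being frame-dependent.

```python
# Toy DAG: A -> B, A -> C, B -> D, C -> D, A -> E (made-up example)
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["A"]}
children = {n: [m for m in parents if n in parents[m]] for n in parents}

def earliest(n):
    """Soonest level node n can occupy: length of its longest chain of ancestors."""
    return 0 if not parents[n] else 1 + max(earliest(p) for p in parents[n])

def latest(n, horizon):
    """Latest level node n can occupy without any arrow pointing backwards."""
    return horizon if not children[n] else min(latest(c, horizon) for c in children[n]) - 1

horizon = max(earliest(n) for n in parents)
for n in sorted(parents):
    print(n, earliest(n), latest(n, horizon))
# E can sit on the same level as B and C or be pushed later; the arrows don't care.
```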
Mind officially blown once again. I feel something analogous to how I imagine someone who had been a heroin addict in the OB-bookblogging time period and in methadone treatment during the subsequent non-EY-non-Yvain-LW time period would feel upon shooting up today. Hey Mr. Tambourine Man, play a song for me / In the jingle-jangle morning I’ll come following you.
Seconded.
In computational physics, the notion of self-consistent solutions is ubiquitous. For example, the behaviour of charged particles depends on the electromagnetic fields, and the electromagnetic fields depend on the behaviour of charged particles, and there is no “preferred direction” in this interaction. Not surprisingly, much research has been done on methods of obtaining (approximations of) such self-consistent solutions, notably in plasma physics and quantum chemistry, to give just some examples.
It is true that these examples do not involve time travel, but I expect the mathematics to be quite similar, with the exception that these physics-based examples tend to have (should have) uniquely defined solutions.
Er, I was not claiming to have invented the notion of an equilibrium but thank you for pointing this out.
I didn’t think you were claiming that, I was merely pointing out that the fact that self-consistent solutions can be calculated may not be that surprising.
The Novikov self-consistency principle has already been invented; the question was whether there was precedent for “You can actually compute consistent histories for discrete universes.” Discrete, not continuous.
Yes, hence “In computational physics”, a branch of physics which necessarily deals with discrete approximations of “true” continuous physics. It seems really quite similar; I can even give actual examples of (somewhat exotic) algorithms where information from the future state is used to calculate that same future state, very analogous to your description of a time-travelling Game of Life.
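To make the computational-physics analogy concrete, here is a minimal sketch (mine, with a made-up equation) of an implicit update: the future state appears on both sides of the update rule, and it is found by fixed-point iteration until it is self-consistent.

```python
def f(x):
    return -2.0 * x          # made-up right-hand side of dx/dt = f(x)

def backward_euler_step(x0, dt, tol=1e-12, max_iter=1000):
    """Solve x1 = x0 + dt * f(x1): the future value is used to compute itself."""
    x1 = x0                  # initial guess for the future state
    for _ in range(max_iter):
        x_next = x0 + dt * f(x1)
        if abs(x_next - x1) < tol:
            return x_next    # self-consistent to within tolerance
        x1 = x_next
    raise RuntimeError("fixed-point iteration did not converge")

x = 1.0
for _ in range(10):
    x = backward_euler_step(x, dt=0.1)
print(x)                     # decays toward 0, as the exact solution exp(-2t) does
```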
There are precedents and parallels in Causal Sets and Causal Dynamical Triangulation
CDT is particularly interesting for its ability to predict the correct macroscopic dimensionality of spacetime:
“At large scales, it re-creates the familiar 4-dimensional spacetime, but it shows spacetime to be 2-d near the Planck scale, and reveals a fractal structure on slices of constant time.”
I was going to reply with something similar. Kevin Knuth in particular has an interesting paper deriving special relativity from causal sets: http://arxiv.org/abs/1005.4172
It replaces the exponential time requirement with an exactly analogous exponential MTBF reliability requirement. I’m surprised by how infrequently this is pointed out in such discussions, since it seems to me rather important.
It’s true that it requires an exponentially small error rate, but that’s cheap, so why emphasize it?
I am not aware of any process, ever, with a demonstrated error rate significantly below that implied by a large, fast computer operating error-free for an extended period of time. If you can’t improve on that, you aren’t getting interesting speed improvements from the time machine, merely moderately useful ones. (In other words, you’re making solvable expensive problems cheap, but you’re not making previously unsolvable problems solvable.)
In cases where building high-reliability hardware is more difficult than normal (for example: high-radiation environments subject to drastic temperature changes and such), the existing experience base is that you can’t cheaply add huge amounts of reliability, because the error detection and correction logic starts to limit the error performance.
Right now, a high-performance supercomputer working for a couple of weeks can perform ~10^21 operations, or about 2^70. If we assume that such a computer has a reliability a billion times better than it has actually demonstrated (which seems like a rather generous assumption to me), that still only leaves you solving 100-bit NP/PSPACE problems. Adding error correction and detection logic might plausibly get you another factor of a billion, maybe two factors of a billion. In other words: it might improve things, but it’s not the indistinguishable-from-magic NP-solving machine some people seem to think it is.
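For what it’s worth, the arithmetic behind that estimate is easy to reproduce (the factor of a billion is the commenter’s assumption, not an established figure):

```python
import math

ops = 1e21                                # ~ a couple of weeks on a large supercomputer
print(math.log2(ops))                     # ~ 69.8, i.e. about 2^70 error-free operations

assumed_extra_reliability = 1e9           # the "billion times better" assumption
budget = ops * assumed_extra_reliability  # trials affordable before an error becomes likely
print(math.log2(budget))                  # ~ 99.7, hence roughly 100-bit problem instances
```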
And fuel requirements too, for similar reasons.
Why do the fuel requirements go up? Where did they come from in the first place?
A time loop amounts to a pocket eternity. How will you power the computer? Drop a sun in there and you’ll pick out a brown dwarf. That gives you maybe ten billion years of compute time, which isn’t much.
I was assuming a wormhole-like device with a timelike separation between the entrance and exit. The computer takes a problem statement and an ordering over the solution space, then receives a proposed solution from the time machine. It checks the solution for validity, and if valid sends the same solution into the time machine. If not valid, it sends the lexicographically next solution back. The computer experiences no additional time in relation to the operator and the rest of the universe, and the only thing that goes through the time machine is a bit string equal to the answer (plus whatever photons or other physical representation is required to store that information).
In other words, exactly the protocol Harry uses in HPMoR.
Is there some reason this protocol is invalid? If so, I don’t believe I’ve seen it discussed in the literature.
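There is no way to test the physics, of course, but the logic of that protocol can at least be simulated classically as a search for its fixed point: whatever comes out of the time machine must equal whatever gets sent back in. A minimal sketch (mine; the toy validity check is a placeholder for a real NP-style witness test):

```python
from itertools import product

def consistent_answer(is_valid, n_bits):
    """Find a bit string that, if received from the loop, would be sent back unchanged."""
    candidates = list(product((0, 1), repeat=n_bits))   # lexicographic order
    for i, received in enumerate(candidates):
        # What the computer would send back after receiving `received`:
        sent = received if is_valid(received) else candidates[(i + 1) % len(candidates)]
        if sent == received:
            return received        # a self-consistent history of the loop
    return None                    # no fixed point: no consistent history exists

# Toy use: "find a 4-bit string with exactly three 1s" as a stand-in for a witness check.
print(consistent_answer(lambda bits: sum(bits) == 3, 4))   # -> (0, 1, 1, 1)
```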
Now here’s the panic situation: What happens if the computer experiences a malfunction or bug, such that the validation subroutine fails and always outputs not-valid? If the answer is sent back further in time, can the entire problem be simplified to “We will ask any question we want, get a true answer, and then sometime in the future send those answers back to ourselves?”
If so, all we need do in the present is figure out how to build the receiver for messages from the future: those messages will themselves explain how to build the transmitter.
The wormhole-like approach cannot send a message to a time before both ends of the wormhole are created. I strongly suspect this will be true of any logically consistent time travel device.
And yes, you can get answers to arbitrarily complex questions that way, but as they get difficult, you need to check them with high reliability.
Is it possible to create a wormhole exit without knowing how to do so? If so, how likely is it that there is a wormhole somewhere within listening range?
As for checking the answers, I use the gold standard of reliability: did it work? If it does work, the answer is sent back to the initiating point. If it doesn’t work, send the next answer in the countable answer space back.
If the answer can’t be shown to be in a countable answer space (the countable answer space includes every finite sequence of bits, and therefore is larger than the space of the possible outputs of every Turing Machine), then don’t ask the question. I’m not sure what question you could ask that can’t be answered in a series of bits.
Of course, that means that the first (and probably last) message sent back through time will be some variant of “Do not mess with time.” It would take a ballsy engineer indeed to decide that the proper response to trying the solution “Do not mess with time” is to conclude that it failed and send the message “Do not mess with timf”.
My very limited understanding is that wormholes only make logical sense with two endpoints. They are, quite literally, a topological feature of space that is a hole in the same sense as a donut has a hole. Except that the donut only has a two dimensional surface, unlike spacetime.
My mostly unfounded assumption is that other time traveling schemes are likely to be similar.
How do you plan to answer the question “did it work?” with an error rate lower than, say, 2^-100? What happens if you accidentally hit the wrong button? No one has ever tested a machine of any sort to that standard of reliability, or even terribly close. And even if you did, you still haven’t done well enough to send a 126 bit message, such as “Do not mess with time” with any reliability.
I ask the future how they will did it.
I was going to say “bootstraps don’t work that way”, but since the validation happens on the future end, this might actually work.
Because thermodynamic entropy and Shannon entropy are equivalent, all computationally reversible processes are thermodynamically reversible as well, at least in principle. Thus, you only need to “consume” power when doing a destructive update (i.e., overwriting memory locations), and the minimum amount of energy necessary to do this per bit is known, just like the maximum efficiency of a heat engine is known.
Of course, for a closed timelike loop, the entire process has to return to its start state, which means there is theoretically zero net energy loss (otherwise the loop wouldn’t be stable).
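The per-bit figure being alluded to is Landauer’s limit, kT ln 2 of energy per bit erased; at room temperature it is tiny (the 300 K is just a conventional choice):

```python
import math

k_B = 1.380649e-23              # Boltzmann constant, J/K
T = 300.0                       # room temperature, K
landauer = k_B * T * math.log(2)
print(landauer)                 # ~ 2.9e-21 joules per bit erased
print(1.0 / landauer)           # ~ 3.5e20 bit erasures per joule, at best
```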
Can’t you just receive a packet of data from the future, verify it, then send it back into the past? Wouldn’t that avoid having an eternal computer?
It’s also interesting how few people seem to realize that Scott Aaronson’s time loop logic is basically a form of branching timelines rather than HP’s one consistent universe.
Yeah, this is one of the most profound things I’ve ever read. This is a RIDICULOUSLY good post.
The ‘c is the generalization of locality’ bit looked rather trivial to me. Maybe that’s just EY rubbing off on me, but...
It’s obvious that in Conway’s Game of Life, it takes at least 5 iterations for one cell to affect a cell 5 units away, and c has for some time seemed to me like our world’s version of that law.
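That Life has its own “c” of one cell per step is easy to check by brute force: flip a single cell, evolve the perturbed and unperturbed grids side by side, and measure how far the difference has spread after each step (a quick sketch of mine; grid size and seed are arbitrary):

```python
import random

N = 20

def step(g):
    """One Game of Life step on an N x N grid, with everything outside the edge dead."""
    new = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            live = sum(g[i + di][j + dj]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0) and 0 <= i + di < N and 0 <= j + dj < N)
            new[i][j] = 1 if live == 3 or (g[i][j] and live == 2) else 0
    return new

random.seed(0)
a = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]
b = [row[:] for row in a]
b[10][10] ^= 1                   # flip one cell in the middle

for t in range(1, 6):
    a, b = step(a), step(b)
    spread = max((max(abs(i - 10), abs(j - 10))
                  for i in range(N) for j in range(N) if a[i][j] != b[i][j]),
                 default=0)
    print(t, spread)             # spread never exceeds t: one cell per step is Life's "c"
```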
It is rarely appreciated that the Novikov self-consistency principle is a trivial consequence of the uniqueness of the metric tensor (up to diffeomorphisms) in GR.
Indeed, given that (a neighborhood of) each spacetime point, even in a spacetime with CTCs, has a unique metric, it also has a unique stress-energy tensor derived from this metric (you need neighborhoods to do derivatives). So there is a unique matter content at each spacetime point. In other words, your grandfather cannot be alternately alive (first time through the loop) or dead (when you kill him the second time through the loop) at a given moment in space and time.
The unfortunate fact that we can even imagine the grandfather paradox to begin with is due to our intuitive thinking that spacetime is only a background for “real events”, a picture as incompatible with GR as perfectly rigid bodies are with SR.
How does the mass-energy of a dead grandfather differ from the mass-energy of a live one?
Pretty drastically. One is decaying in the ground, the other is moving about in search of a mate. Most people have no trouble telling the difference.
The total four-momentum may well be the same in both cases, but the stress-energy-momentum tensor is different (the blood is moving in the live grandfather but not the dead one, etc., etc.).
I’ve seen academic physicists use postselection to simulate closed timelike curves; see for instance this arXiv paper, which compares a postselection procedure to a mathematical formalism for CTCs.
I tend to believe that most fictional characters are living in malicious computer simulations, to satisfy my own pathological desire for consistency. I now believe that Harry is living in an extremely expensive computer simulation.
Also known as Eliezer Yudkowsky’s brain.
I know that the idea of “different systems of local consistency constraints on full spacetimes might or might not happen to yield forward-sampleable causality or things close to it” shows up in Wolfram’s “A New Kind of Science”, for all that he usually refuses to admit the possible relevance of probability or nondeterminism whenever he can avoid doing so; the idea might also be in earlier literature.
I’d thought about that a long time previously (not about Time-Turners; this was before I’d heard of Harry Potter). I remember noting that it only really works if multiple transitions are allowed from some states, because otherwise there’s a much higher chance that the consistency constraints would not leave any histories permitted. (“Histories”, because I didn’t know model theory at the time. I was using cellular automata as the example system, though.) (I later concluded that Markov graphical models with weights other than 1 and 0 were a less brittle way to formulate that sort of intuition (although, once you start thinking about configuration weights, you notice that you have problems about how to update if different weight schemes would lead to different partition function values).)
I know we argued briefly at one point about whether Harry could take the existence of his subjective experience as valid anthropic evidence about whether or not he was in a simulation. I think I was trying to make the argument specifically about whether or not Harry could be sure he wasn’t in a simulation of a trial timeline that was going to be ruled inconsistent. (Or, implicitly, a timeline that he might be able to control whether or not it would be ruled inconsistent. Or maybe it was about whether or not he could be sure that there hadn’t been such simulations.) But I don’t remember you agreeing that my position was plausible, and it’s possible that that means I didn’t convey the information about which scenario I was trying to argue about. In that case, you wouldn’t have heard of the idea from me. Or I might have only had enough time to figure out how to halfway defensibly express a lesser idea: that of “trial simulated timelines being iterated until a fixed point”.
You can do some sort of lazy evaluation. I took the example you gave with the 4x4 grid (by the way you have a typo: “we shall take a 3x3 Life grid”), and ran it forwards, and it converges to all empty squares in 4 steps. See this doc for calculations.
Even if it doesn’t converge, you can add another symbol to the system and continue playing the game with it. You can think of the symbol as a function. In my document x = compute_cell(x=2,y=2,t=2)
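One way to make the “symbol as a function” idea concrete (my own sketch; the rule, grid and coordinates are placeholders, not the post’s exact example): memoize a recursive compute_cell, and a cell whose value hasn’t been forced yet is simply an unevaluated call.

```python
from functools import lru_cache

N = 4
initial = [              # made-up 4x4 starting grid at t = 0
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]

@lru_cache(maxsize=None)
def compute_cell(x, y, t):
    """Value of cell (x, y) at time t, evaluated lazily and memoized."""
    if not (0 <= x < N and 0 <= y < N):
        return 0             # cells outside the grid stay dead
    if t == 0:
        return initial[y][x]
    live = sum(compute_cell(x + dx, y + dy, t - 1)
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))
    return 1 if live == 3 or (compute_cell(x, y, t - 1) and live == 2) else 0

print(compute_cell(2, 2, 2))  # only the cells this value actually depends on are computed
```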
Fixed.
I don’t totally understand it, but Zuse 1969 seems to talk about spacetime as a sort of discrete causal graph with c as the generalization of locality (“In any case, a relation between the speed of light and the speed of transmission between the individual cells of the cellular automaton must result from such a model.”). Fredkin and Wolfram probably also have similar discussions.
I certainly made a remark on LW, very early in HPMoR, along the following lines: If magic, or anything else that seems to operate fundamentally at the level of human-like concepts, turns out to be real, then we should see that as substantial evidence for some kind of simulation/creation hypothesis. So if you find yourself in the role of Harry Potter, you should expect that you’re in a simulation, or in a universe created by gods, or in someone’s dream … or the subject of a book :-).
I don’t think you made any comment on that, so I’ve no idea whether you read it. I expect other people made similar points.
It’s more immediately plausible to hypothesize that certain phenomena and regularities in Harry’s experience are intelligently designed, rather than that the entire universe Harry occupies is. We can make much stronger inferences about intelligences within our universe being similar to us, than about intelligences who created our universe being similar to us, since, being outside our universe/simulation, they would not necessarily exist even in the same kind of logical structure that we do.
I’m not sure how to respond to this; the ability to compute it in a finite fashion for discrete universes seemed trivially obvious to me when I first pondered the problem. It would never have occurred to me to actually write it down as an insight because it seemed like something you’d figure out within five minutes regardless.
“Well, we know there are things that can’t happen because there are paradoxes, so just compute all the ones that can and pick one. It might even be possible to jig things such that the outcome is always well determined, but I’d have to think harder about that.”
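That recipe is literal for a small enough universe. A minimal sketch (mine; the one-bit “time-looped message” and toy dynamics are made up so the search space stays tiny): enumerate every candidate history, keep only those whose transitions obey the rules given the backward-sent bit, and demand that the bit received at the start equals the bit sent at the end.

```python
from itertools import product

STATES, T = range(4), 4

def step(x, m):
    """Toy dynamics: the next state depends on the current state and the looped bit m."""
    return (x + 1 + m) % 4

def consistent_histories(x0):
    out = []
    for m, history in product((0, 1), product(STATES, repeat=T + 1)):
        lawful = history[0] == x0 and all(history[t + 1] == step(history[t], m) for t in range(T))
        if lawful and m == history[T] % 2:   # bit received "from the future" equals bit sent back
            out.append((m, history))
    return out

print(consistent_histories(x0=0))   # -> [(0, (0, 1, 2, 3, 0))]; other rules can give zero or several
```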
That said, this may just be a difference in background. When I was young, I did a lot of thinking about Conway’s Life and in particular “Garden of Eden” states which have no precursor. Once you consider the possibility of Garden of Eden states and realize that some Life universes have a strict ‘start time’, you automatically start thinking about what other kinds of universes would be restricted. Adding a rule with time travel is just one step farther.
On the other hand, the space/time causal graph generalization is definitely something I didn’t think about and isn’t even something I’d heard vaguely mentioned. That one I’ll have to put some thought into.