Scott Aaronson showed that time loop logic lets you solve PSPACE problems in polynomial time.
It replaces the exponential time requirement with an exactly analogous exponential MTBF reliability requirement. I’m surprised by how infrequently this is pointed out in such discussions, since it seems to me rather important.
It’s true that it requires an exponentially small error rate, but that’s cheap, so why emphasize it?
I am not aware of any process, ever, with a demonstrated error rate significantly below that implied by a large, fast computer operating error-free for an extended period of time. If you can’t improve on that, you aren’t getting interesting speed improvements from the time machine, merely moderately useful ones. (In other words, you’re making solvable expensive problems cheap, but you’re not making previously unsolvable problems solvable.)
In cases where building high-reliability hardware is more difficult than normal (for example: high-radiation environments subject to drastic temperature changes and such), the existing experience base is that you can’t cheaply add huge amounts of reliability, because the error detection and correction logic starts to limit the error performance.
Right now, a high-performance supercomputer working for a couple of weeks can perform ~10^21 operations, or about 2^70. If we assume that such a computer has a reliability a billion times better than it has actually demonstrated (which seems like a rather generous assumption to me), that still only leaves you solving 100-bit NP / PSPACE problems. Adding error detection and correction logic might plausibly get you another factor of a billion, maybe two factors of a billion. In other words: it might improve things, but it’s not the indistinguishable-from-magic NP-solving machine some people seem to think it is.
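The arithmetic here is easy to check. A quick sketch (the 10^21 operation count and the billion-fold factor are the comment's own estimates, not measured figures):

```python
import math

ops = 1e21                        # ~two weeks on a top supercomputer (figure from above)
demonstrated = math.log2(ops)     # ≈ 69.8, i.e. "about 2^70 operations"
generous = math.log2(ops * 1e9)   # assume a billion-fold reliability improvement
print(round(demonstrated), round(generous))  # 70 100 → ~100-bit problem sizes
```

Each further "factor of a billion" buys only about 30 more bits of problem size, which is why the gains stay modest.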
And fuel requirements too, for similar reasons.
Why do the fuel requirements go up? Where did they come from in the first place?
A time loop amounts to a pocket eternity. How will you power the computer? Drop a sun in there, or pick out a brown dwarf; that gives you maybe ten billion years of compute time, which isn’t much.
I was assuming a wormhole-like device with a timelike separation between the entrance and exit. The computer takes a problem statement and an ordering over the solution space, then receives a proposed solution from the time machine. It checks the solution for validity, and if valid sends the same solution into the time machine. If not valid, it sends the lexically following solution back. The computer experiences no additional time in relation to the operator and the rest of the universe, and the only thing that goes through the time machine is a bit string equal to the answer (plus whatever photons or other physical representation is required to store that information).
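Viewed from outside the loop, consistency means the only stable history is the first candidate, in the chosen ordering, that the validator accepts. A toy simulation of that fixed point (illustrative Python; `consistent_answer` and the example predicate are my own, not from any source):

```python
from itertools import product

def consistent_answer(valid, bit_length):
    """Return the unique self-consistent loop message: the first candidate,
    in lexical order, that the validator accepts. An invalid candidate would
    be replaced by its successor, so it cannot be a fixed point of the loop."""
    for bits in product("01", repeat=bit_length):
        candidate = "".join(bits)
        if valid(candidate):
            return candidate  # goes into the loop equal to what came out
    return None  # no valid answer: no consistent history exists

# Toy instance: a 4-bit string starting with '1' with an odd number of ones.
print(consistent_answer(lambda s: s[0] == "1" and s.count("1") % 2 == 1, 4))  # 1000
```

The computer only ever runs one validation pass; the "search" is done by consistency itself, which is where the exponential reliability demand sneaks in.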
In other words, exactly the protocol Harry uses in HPMoR.
Is there some reason this protocol is invalid? If so, I don’t believe I’ve seen it discussed in the literature.
Now here’s the panic situation: What happens if the computer experiences a malfunction or bug, such that the validation subroutine fails and always outputs not-valid? If the answer is sent back further in time, can the entire problem be simplified to “We will ask any question we want, get a true answer, and then sometime in the future send those answers back to ourselves?”
If so, all we need do in the present is figure out how to build the receiver for messages from the future: those messages will themselves explain how to build the transmitter.
The wormhole-like approach cannot send a message to a time before both ends of the wormhole are created. I strongly suspect this will be true of any logically consistent time travel device.
And yes, you can get answers to arbitrarily complex questions that way, but as they get difficult, you need to check them with high reliability.
Is it possible to create a wormhole exit without knowing how to do so? If so, how likely is it that there is a wormhole somewhere within listening range?
As for checking the answers, I use the gold standard of reliability: did it work? If it does work, the answer is sent back to the initiating point. If it doesn’t work, send the next answer in the countable answer space back.
If the answer can’t be shown to be in a countable answer space, then don’t ask the question. (The countable answer space includes every finite sequence of bits, and therefore includes the possible outputs of every Turing machine.) I’m not sure what question you could ask that can’t be answered as a series of bits.
Of course, that means that the first (and probably last) message sent back through time will be some variant of “Do not mess with time.” It would take a ballsy engineer indeed to decide that the proper response to trying the solution “Do not mess with time” is to conclude that it failed and send back the message “Do not mess with timf.”
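The “lexically following” bump is just a fixed-width odometer increment; a sketch (mine, assuming a 7-bit ASCII alphabet):

```python
def next_candidate(s):
    """Lexical successor over fixed-length strings: increment the last
    character, carrying leftward like an odometer."""
    chars = list(s)
    i = len(chars) - 1
    while i >= 0:
        if chars[i] != "\x7f":               # top of the assumed 7-bit alphabet
            chars[i] = chr(ord(chars[i]) + 1)
            return "".join(chars)
        chars[i] = "\x00"                    # carry into the next position
        i -= 1
    return "\x00" * (len(s) + 1)             # exhausted this length; widen

print(next_candidate("Do not mess with time"))  # Do not mess with timf
```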
My very limited understanding is that wormholes only make logical sense with two endpoints. They are, quite literally, a topological feature of space that is a hole in the same sense that a donut has a hole, except that a donut has only a two-dimensional surface, unlike spacetime.
My mostly unfounded assumption is that other time traveling schemes are likely to be similar.
How do you plan to answer the question “did it work?” with an error rate lower than, say, 2^-100? What happens if you accidentally hit the wrong button? No one has ever tested a machine of any sort to that standard of reliability, or even terribly close. And even if you did, you still haven’t done well enough to send a 126-bit message, such as “Do not mess with time,” with any reliability.
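For what it's worth, the 126-bit figure matches a 6-bit character code for the 21-character message (that encoding is my inference; standard 8-bit ASCII would make it 168 bits, which only strengthens the point):

```python
msg = "Do not mess with time"
# 6-bit code → 126 bits; 8-bit ASCII → 168 bits
print(len(msg), len(msg) * 6, len(msg) * 8)  # 21 126 168
```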
I ask the future how they did it.
I was going to say “bootstraps don’t work that way”, but since the validation happens on the future end, this might actually work.
Because thermodynamic entropy and Shannon entropy are equivalent, all computationally reversible processes are thermodynamically reversible as well, at least in principle. Thus, you only need to “consume” power when doing a destructive update (i.e., overwriting memory locations), and the minimum energy necessary to do this per bit is known (Landauer’s principle), just like the maximum efficiency of a heat engine is known.
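That per-bit minimum is the Landauer limit, kT·ln 2 per erased bit; a quick check of its room-temperature value:

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K (exact in SI)
T = 300.0                          # room temperature, K
landauer = k_B * T * math.log(2)   # minimum energy to erase one bit
print(f"{landauer:.2e} J")         # ~2.87e-21 J per bit erased
```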
Of course, for a closed timelike loop, the entire process has to return to its start state, which means there is theoretically zero net energy loss (otherwise the loop wouldn’t be stable).
Can’t you just receive a packet of data from the future, verify it, then send it back into the past? Wouldn’t that avoid having an eternal computer?
It’s also interesting how few people seem to realize that Scott Aaronson’s time loop logic is basically a form of branching timelines rather than HPMoR’s single consistent universe.