I am not aware of any process, ever, with a demonstrated error rate significantly below that implied by a large, fast computer operating error-free for an extended period of time. If you can’t improve on that, you aren’t getting interesting speed improvements from the time machine, merely moderately useful ones. (In other words, you’re making solvable expensive problems cheap, but you’re not making previously unsolvable problems solvable.)
In cases where building high-reliability hardware is harder than usual (for example, high-radiation environments subject to drastic temperature swings), the existing experience base is that you can’t cheaply add huge amounts of reliability, because the error-detection and correction logic itself starts to limit the error performance.
Right now, a high-performance supercomputer working for a couple of weeks can perform roughly 10^21 operations, or about 2^70. If we assume that such a computer has a reliability a billion times better than it has actually demonstrated (which seems like a rather generous assumption to me), that still only leaves you solving NP / PSPACE problems of about 100-bit size. Adding error detection and correction logic might plausibly buy you another factor of a billion, maybe two factors of a billion. In other words: it might improve things, but it’s not the indistinguishable-from-magic NP-solving machine some people seem to think it is.
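To make the arithmetic explicit, here is a minimal sketch in Python that just converts those operation counts into brute-force search sizes in bits. The operation count and the reliability factors are the illustrative assumptions from the paragraph above, not measurements of any particular machine:

```python
import math

# Illustrative estimates from the argument above, not measured values.
ops = 1e21               # ~operations a top supercomputer manages in a couple of weeks (~2^70)
reliability_bonus = 1e9  # hypothetical billion-fold improvement over demonstrated reliability
ecc_bonus = 1e9          # hypothetical further factor of a billion from error detection/correction

def search_bits(total_ops: float) -> float:
    """Size in bits of the brute-force search space that total_ops operations can cover."""
    return math.log2(total_ops)

print(f"demonstrated:        {search_bits(ops):.1f} bits")
print(f"+ 1e9 reliability:   {search_bits(ops * reliability_bonus):.1f} bits")
print(f"+ 1e9 from ECC:      {search_bits(ops * reliability_bonus * ecc_bonus):.1f} bits")
print(f"+ (1e9)^2 from ECC:  {search_bits(ops * reliability_bonus * ecc_bonus**2):.1f} bits")
```

Running it gives roughly 70, 100, 130, and 160 bits respectively, which is where the 100-bit figure comes from: each optimistic factor of a billion only buys you about 30 more bits of exhaustive search.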