This is precisely why trying to avoid exponentially-long compute times for PSPACE problems through the use of a time machine requires a computer whose MTBF (mean time between failures) grows exponentially with the problem size.
Why exponentially, precisely?
(Leaving soon, will post math later if anyone is interested in the details.)
Short version: Suppose for simplicity of argument that all the probability of failure is in the portion of the machine that checks whether the received answer is correct, and that it has an equal chance of producing a false positive or a false negative. (Neither assumption is required, but I found the math easier to think about this way.) Call this error rate e.
Consider the set of possible answers received. For an n-bit answer, this set has size 2^n. Put a probability distribution over this set for the messages received and treat the operation of the machine as a Markov process, with some transition matrix. Setting the output probability vector equal to the input, you get that the probability vector must be an eigenvector of the transition matrix with eigenvalue 1, i.e. a stationary distribution (with the added constraint that it be a valid probability distribution).
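For concreteness, here is a minimal numpy sketch of such a transition matrix. It assumes one particular (hypothetical) machine design, since the comment doesn't pin one down: if the received answer passes the check, send it back unchanged; if it fails, send the next candidate in a fixed cyclic ordering. The names n, e, and correct are purely illustrative.

```python
import numpy as np

def transition_matrix(n, e, correct=0):
    """Column-stochastic transition matrix over the 2^n candidate answers.

    Hypothetical machine: send the received answer back if the check
    passes, otherwise send the next candidate (mod 2^n). The checker
    errs with probability e, false positives and negatives alike.
    """
    N = 2 ** n
    T = np.zeros((N, N))
    for x in range(N):           # x = answer received from the future
        nxt = (x + 1) % N
        if x == correct:
            T[x, x] = 1 - e      # correctly accepted: sent back as-is
            T[nxt, x] = e        # false negative: rejected, advance
        else:
            T[x, x] = e          # false positive: wrongly accepted
            T[nxt, x] = 1 - e    # correctly rejected: advance
    return T                     # output distribution = T @ input
```

With e = 0, the only stationary distribution is a point mass on the correct answer; the question is how fast that degrades as e grows.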
You’ll find that the maximum value of e for which the probability distribution concentrates some (fixed) minimum probability at the correct answer goes down exponentially with n.
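For the toy machine sketched above, the algebra comes out cleanly (same assumptions: check-and-advance design, symmetric error rate e). Write N = 2^n and call the correct answer c. Stationarity forces every wrong answer to carry the same weight,

$$\pi(x) = \frac{e}{1-e}\,\pi(c) \qquad (x \neq c),$$

and normalizing over all N states gives

$$\pi(c) = \frac{1-e}{1+(N-2)\,e}.$$

Requiring $\pi(c) \ge 1-\delta$ then forces

$$e \;\le\; \frac{\delta}{1+(1-\delta)(N-2)} \;\approx\; \frac{\delta}{(1-\delta)\,2^{n}},$$

so the tolerable error rate, and hence the required MTBF, scales like 2^{-n}.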
Neat! I still need to give some thought to the question of where we’re getting our probability distribution, though, when the majority of the computation is done by the universe’s plothole filter.
You get it as the solution to the equation. In the non-time-travel case, you have a fixed initial state (a probability distribution that is zero in all but one place) and a slightly spread-out distribution for the future (errors are possible, if unlikely). If you perform another computation after that and want to know the state of the computer after both, you take the probability distribution after the first computation, transform it according to the second computation (again with possible errors), and get a third distribution.
All that changes with the time machine is a constraint that two of those distributions have to be equal: what the machine receives is what it sends back. So add that constraint and solve for the distribution that satisfies it.
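A minimal sketch of both cases, reusing the hypothetical transition_matrix from the earlier sketch (all parameters illustrative):

```python
import numpy as np

n, e = 4, 0.01
N = 2 ** n
T = transition_matrix(n, e)       # hypothetical checker from the sketch above

# Forward (no time travel): a fixed initial state pushed through two
# noisy computations in sequence.
start = np.zeros(N)
start[0] = 1.0                    # all probability on one state
after_one = T @ start             # slightly spread out by errors
after_two = T @ after_one         # distribution after two computations

# Time-travel case: the received distribution must equal the sent one,
# pi = T @ pi, so pi is an eigenvector of T with eigenvalue 1.
vals, vecs = np.linalg.eig(T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()                # rescale into a valid distribution
print(pi[0])                      # mass on the correct answer (index 0)
```

The forward case only ever multiplies by T; the self-consistency constraint is what turns the problem into solving for a fixed point instead.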