These are pretty strong arguments, but maybe the idea can still be rescued by handwaving :-)
In the first scenario the answer could depend on your chance of randomly failing to resend the CD, due to tripping and breaking your leg or something. In the second scenario there doesn’t seem to be enough information to pin down a unique answer, so it could depend on many small factors, like your chance of randomly deciding to send a CD even if you didn’t receive anything.
Seconding A113′s recommendation of “Be Here Now”; that story, along with the movie Primer, was my main inspiration for the model.
This is precisely why trying to avoid exponentially-long compute times for PSPACE problems through the use of a time machine requires a computer with exponentially high MTBF.
Why exponentially, precisely?
(Leaving soon, will post math later if anyone is interested in the details.)
Short version: Suppose, for simplicity of argument, that all the probability of failure lies in the portion of the machine that checks whether the received answer is correct, and that it has an equal chance of producing a false positive or a false negative. (Neither assumption is required, but I found the math easier to think about this way.) Call this error rate e.
Consider the set of possible answers received. For an n-bit answer, this set has size 2^n. Take a probability distribution over this set for the messages received, and treat the operation of the machine as a Markov process with some transition matrix. Setting the output probability vector equal to the input, you get that the probability vector is an eigenvector of the transition matrix with eigenvalue 1 (with the added constraint that it be a valid probability distribution).
You’ll find that the maximum value of e for which the probability distribution concentrates some (fixed) minimum probability at the correct answer goes down exponentially with n.
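Here's a numerical sketch of that claim. The model below is my own concrete filling-in of the setup: I assume that if the checker accepts the received answer (probability 1−e when it's correct, probability e when it isn't) the machine resends it unchanged, and if the checker rejects, the machine sends back a uniformly random n-bit string instead. Under those assumptions the stationary distribution works out in closed form to p = (1−e)/(1+(2^n−2)e) for the correct answer, so keeping p ≥ 1/2 forces e ≤ 2^−n.

```python
import numpy as np

def stationary_correct_prob(n_bits, e):
    """Stationary probability of the correct answer for the checker model
    sketched above (reject -> resend a uniformly random n-bit string)."""
    N = 2 ** n_bits
    correct = 0  # index of the correct answer, without loss of generality
    P = np.empty((N, N))
    for s in range(N):
        accept = (1 - e) if s == correct else e  # checker accepts state s
        row = np.full(N, (1 - accept) / N)       # reject: uniform resend
        row[s] += accept                         # accept: resend unchanged
        P[s] = row
    # Stationary distribution: left eigenvector of P with eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi /= pi.sum()
    return pi[correct]

# At the threshold e = 2^-n, the correct answer gets exactly half the mass;
# shrink e further and the mass concentrates.
for n in (2, 4, 8):
    print(n, stationary_correct_prob(n, 2.0 ** -n))
```

The point is that the threshold error rate shrinks like 2^−n, which is where the exponentially high MTBF requirement comes from.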
Neat! I still need to give some thought to the question of where we’re getting our probability distribution, though, when the majority of the computation is done by the universe’s plothole filter.
You get it as the solution to the equation. In a non-time-travel case, you have a fixed initial state (probability distribution is zero in all but one place), and a slightly spread out distribution for the future (errors are possible, if unlikely). If you perform another computation after that, and want to know what the state of the computer will be after performing two computations, you take the probability distribution after the first computation, transform it according to your computation (with possible errors), and get a third distribution.
All that changes here is that we have a constraint that two of the distributions need to be equal to each other. So, add that constraint, and solve for the distribution that fits the constraints.
The later Ed Stories were better.
Good point, but not actually answering the question. I guess what I’m asking is: given a single use of the time machine (Primer-style, you turn it on and receive an object, then later turn it off and send an object), make a list of all the objects you can receive and what each of them can lead to in the next iteration of the loop. This structure is called a Markov chain. Given the entire structure of the chain, can you deduce what probability you have of experiencing each possibility?
Taking your original example, there are only 2 states the timeline can be in:
A: Nothing arrives from the future. You toss a coin to decide whether to go back in time. Next state: A (50% chance) or B (50% chance).
B: A murderous future self arrives from the future. The two of you get into a fight, and don’t send anything back. Next state: A (100% chance).
Is there a way to calculate from this what the probability of actually getting a murderous future self is when you turn on the time machine?
I’m inclined to assume it would be a stationary distribution of the chain, if one exists. That is to say, one where the probability distribution of the “next” timeline is the same as the probability distribution of the “current” timeline. In this case, that would be (A: 2⁄3, B: 1⁄3). (Your result of (A: 4⁄5, B: 1⁄5) seems strange to me: half of the people in A will become killers, and they’re equal in number to their victims in B.)
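The (A: 2⁄3, B: 1⁄3) figure can be checked directly by solving the stationarity condition for this two-state chain; a minimal sketch:

```python
import numpy as np

# Transition matrix for the two timelines described above.
P = np.array([[0.5, 0.5],   # from A: coin flip decides whether to go back
              [1.0, 0.0]])  # from B: the fight means nothing is sent back

# Solve pi P = pi together with the normalization pi_A + pi_B = 1.
A = np.vstack([P.T - np.eye(2), np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)  # -> [2/3, 1/3]
```

The overdetermined system is consistent, so least squares recovers the exact stationary distribution.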
There are certain conditions a Markov chain needs to satisfy here. I looked them up. A chain with a finite number of states (so no infinitely dense CDs for me :( ) always has at least one stationary distribution, and it’s unique as long as every state eventually leads to every other, possibly indirectly (i.e. the chain is irreducible). So in the first scenario, I’ll receive a CD with a number between 0 and N distributed uniformly. The second scenario isn’t irreducible (if the “first” timeline has a CD with value X, it’s impossible to ever get a CD with value Y in any subsequent timeline), so I guess there needs to be a chance of the CD becoming corrupted to a different value, or of the time machine exploding before I can send the CD back, or something like that.
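To illustrate the irreducibility problem: if each CD value is always resent unchanged, the transition matrix is the identity, every distribution is stationary, and the chain pins nothing down. A small corruption probability eps (my hypothetical fix: the CD flips to a uniformly random value) makes the chain irreducible, and the unique stationary distribution becomes uniform.

```python
import numpy as np

N, eps = 8, 1e-3

# No corruption: identity matrix. Any delta distribution is stationary,
# so the model gives no unique answer.
P_exact = np.eye(N)
p0 = np.eye(N)[3]
assert np.allclose(p0 @ P_exact, p0)

# Small corruption to a uniformly random value: irreducible chain with a
# unique stationary distribution.
P_noisy = (1 - eps) * np.eye(N) + eps / N * np.ones((N, N))
vals, vecs = np.linalg.eig(P_noisy.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()
print(pi)  # -> uniform, each entry 1/N
```

Note that however tiny eps is, it completely determines the answer: the stationary distribution jumps from "anything" to "uniform", which is exactly the sensitivity to small factors discussed above.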
Teal deer: this model works, but the probability of experiencing each outcome can easily depend on the tiny chance of an unexpected outcome. I like it a lot: it’s more intuitive than NSCP, and the structure makes more sense than branching-multiverse. I may have to steal it if I ever write a time-travel story.
My original comment had two examples: one had no coinflips, and the other had two. You seem to be talking about some other scenario, which has one coinflip?
The structure I have in mind is a branching tree of time, where each branch has a measure. The root (the moment before any occurrences of time travel) has measure 1, and the measure of each branch is the sum of measures of its descendants. An additional law is that measure is “conserved” through time travel, i.e. when a version of you existing in a branch with measure p travels into the past, the past branches at the point of your arrival, so that your influence is confined to a branch of measure p (which may or may not eventually flow into the branch you came from, depending on other factors). So for example if you’re travelling to prevent a disaster that happened in your past, your chance of success is no higher than the chance of the disaster happening in the first place.
In the scenarios I have looked at, these conditions yield enough linear equations to pin down the measure of each branch, with no need to go through Markov chains. But the general case of multiple time travelers gets kinda hard to reason about. Maybe Markov chains can give a proof for that case as well?
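For the coin-flip example above, the measure-conservation constraints reduce to two linear equations (this is my reading of the model, written out for concreteness): the two branches partition the root's measure, and the traveler leaves branch A carrying half its measure, which is conserved into the branch he arrives in.

```python
import numpy as np

# Constraints in this measure model:
#   m_A + m_B = 1      (branches partition the root's measure of 1)
#   m_B = m_A / 2      (the traveler departs A with half its measure,
#                       and that measure is conserved into branch B)
A = np.array([[1.0, 1.0],
              [0.5, -1.0]])
b = np.array([1.0, 0.0])
m_A, m_B = np.linalg.solve(A, b)
print(m_A, m_B)  # -> 2/3 and 1/3
```

Which agrees with the stationary-distribution answer of (A: 2⁄3, B: 1⁄3) given earlier in the thread, so at least in this simple case the two approaches coincide.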
Since each time-travel event forks the universe, with multiple time travelers it’s a question of whether the second time-traveler is “fork-traveling” as well.