Here’s a puzzle that involves time travel:
Suppose you have just built a machine that allows you to see one day into the future. Suppose also that you are firmly committed to realizing the particular future that the machine will show you. So if you see that the lights in your workshop are on tomorrow, you will make sure to leave them on; if they are off, you will make sure to leave them off. If you find the furniture rearranged, you will rearrange the furniture. If there is a cow in your workshop, you will spend the next 24 hours getting a cow into your workshop.
My question is this: What is your prior probability for any observation you can make with this machine? For example, what are the odds of the windows being open?
Can’t answer until I know the laws of time travel.
No, seriously. Is the resulting universe randomly selected from all possible self-consistent ones? By what weighting? Does the resulting universe look like the result of iteration until a stable point is reached? And what about quantum branching?
Considering that everything I know of causality and reality calls for acyclic causal graphs, I feel some justification in refusing to just hand out an answer.
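These selection rules really do come apart. Here is a minimal sketch, under the assumption of a toy universe with a single hypothetical bit of state (the workshop light), of the "uniformly selected from all self-consistent universes" reading. A universe is self-consistent exactly when it is a fixed point of your policy: seeing it leads you to produce it.

```python
STATES = ["on", "off"]  # toy universe: one bit (the workshop light)

def committed(observed):
    """Policy from the puzzle: realize whatever the machine shows."""
    return observed

def contrarian(observed):
    """Policy that tries to falsify the machine's prediction."""
    return "off" if observed == "on" else "on"

def consistent_states(policy):
    """States that are fixed points: seeing s leads you to produce s."""
    return [s for s in STATES if policy(s) == s]

def uniform_prior(policy):
    """Prior if the universe is drawn uniformly from self-consistent states."""
    fixed = consistent_states(policy)
    return {s: 1 / len(fixed) for s in fixed} if fixed else None

print(uniform_prior(committed))   # {'on': 0.5, 'off': 0.5}
print(uniform_prior(contrarian))  # None: no self-consistent state exists
```

Note what this leaves open: the committed policy makes every state a fixed point, so a uniform weighting gives 50/50, while an iterate-to-stability rule would instead privilege whatever state the iteration started from. That difference is exactly why the question about weighting matters.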
Why is something like this an acceptable answer here, but not in Newcomb’s Problem or Counterfactual Mugging?
Because it’s clear what the intended clarification of those thought experiments would be, but less so for time travel. When such thought experiments are posed, the goal is not to find the answer to some pre-existing question, but to understand the described situation, which may well require specifying it further.
I can’t imagine what you would want to know more about before giving an answer to Newcomb. Do you think Omega would have no choice but to use time travel?
No, but the mechanism Omega uses to predict my answer may be relevant to solving the problem. I have an old post about that. Also see the comment by Toby Ord there.
Because these don’t involve time travel, but normal physics?
He did say “something like this”, not “this”.
I could tell you that time travel works by exploiting closed time-like curves in general relativity, and that quantum effects haven’t been tested yet. But yes, that wouldn’t be telling you how to handle probabilities.
So, it looks like this is a situation where the prior you were born with is as good as any other.
Why am I firmly committed to realizing the future the machine shows? Do I believe that to be contrary would cause a paradox and explode the universe? Do I believe that I am destined to achieve whatever is foretold, and that it’ll be more pleasant if I do it on purpose instead of forcing fate to jury-rig something at the last minute? Do I think that it is only good and right that I do those things which are depicted, because it shows the locally best of all possible worlds?
In other words, what do I hypothetically think would happen if I weren’t fully committed to realizing the future shown?
I agree with the question of why you would be doing this; it sounds like optimizing for the wrong thing. Supposing the machine showed me having won the lottery and having a cow in my workshop, it seems silly to suppose that bringing a cow into my workshop will help me win the lottery. We can’t very well suppose that we always wanted a cow in the workshop, or else the vision of the future wouldn’t affect anything.
I stipulated that you’re committed to realizing the future because otherwise, the problem would be too easy.
I’m assuming that if you act contrary to what you see in the machine, fate will intervene. So if you’re committed to being contrary, we know something is going to occur to frustrate your efforts. Most likely, some emergency will arise soon that keeps you away from your workshop for the next 24 hours. This knowledge alone shapes your prior for what the future will hold.
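The "fate intervenes" argument can be sketched in the same fixed-point terms: add a hypothetical state (an emergency that keeps you out of the workshop) in which you cannot act on what you saw. For a contrarian policy, that becomes the only self-consistent future, so the prior collapses onto it.

```python
STATES = ["on", "off", "emergency"]  # 'emergency' keeps you out of the workshop

def contrarian(observed):
    """Try to falsify the prediction; during an emergency you cannot act,
    so whatever the machine showed simply comes to pass."""
    if observed == "emergency":
        return "emergency"
    return "off" if observed == "on" else "on"

# Self-consistent futures are fixed points of the policy.
fixed = [s for s in STATES if contrarian(s) == s]
print(fixed)  # ['emergency'] -- the only self-consistent future
```

This is only an illustration under the stated assumption that the universe must be a fixed point; it doesn't settle which weighting over fixed points is correct, only that commitment to contrariness concentrates the prior on frustration scenarios.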
Depends on the details of the counterfactual science. It does not depend on my firm commitment.
I was thinking of a closed time-like curve governed by general relativity, but I don’t think that tells you anything. It should depend on your commitment, though.