This is a dispute over the premises of the problem (whether Omega’s counterfactual is always different from yours, or is correct 99% of the time and independent of yours), not a dispute about how to solve the problem. The actual premise needs to be made clear before the question can be properly answered.
“I have no probability assignment, you haven’t told me your motives” is not an allowed answer. Pretend Omega holds a gun to your head and will fire unless you answer in ten seconds. There is always some information, I promise. You can avoid getting shot.
EDIT: Upon reflection, this post was too simplistic. If we have some prior information about Omega (e.g. can we ascribe human-like motives to it?), then we would have to use it in making our decision, which would add an element of apparent subjectivity. But I think it’s safe to make the simplifying assumption that we can’t say anything about Omega, to preserve the intent of the question.
If Omega doesn’t tell you what premises he’s using, then you will have some probability distribution over possible premises. However, that distribution, or the information that led to it, needs to be made explicit for the thought experiment to be useful.
If you assume that your prior is 50% that Omega’s counterfactual is always different from yours and 50% that it is independent, then updating on the fact that the counterfactual is different in this case gives you a posterior of 99% always-different and 1% independent. This means that there is a 0.99 × 0.99 + 0.01 × 0.5 = 98.51% chance that your answer is right, and a 1.49% chance that your answer is wrong.
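The arithmetic in the comment above can be checked in a few lines. This is only a sketch of the commenter’s own model: the 99%/1% posterior, and the 50% accuracy under the independent premise, are that comment’s assumptions rather than part of the problem statement.

```python
# Sketch of the posterior-weighted accuracy from the comment above.
# Assumed (the commenter's model): after seeing a contradicting
# counterfactual, the posterior is 99% "Omega always contradicts"
# vs 1% "Omega's counterfactual is independent".
p_always_different = 0.99
p_independent = 0.01

# If Omega always contradicts, the contradiction is uninformative, so your
# calculator's reading is right 99% of the time; if Omega is independent,
# the contradiction cancels your evidence, leaving 50%.
p_right = p_always_different * 0.99 + p_independent * 0.5
p_wrong = 1.0 - p_right

print(round(p_right, 4))  # 0.9851, the "(99%)^2 + 0.5%" figure
print(round(p_wrong, 4))  # 0.0149
```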
Interestingly, since the question is about math rather than a feature of the world, your answer should be the same for real life and the counterfactual, meaning that if you know that the counterfactual calculator gets the parity right 99% of the time and is independent of yours, you should be indifferent between writing “even” and “odd” on your real-life paper.
Good point, but I think the following is wrong: “Interestingly, since the question is about math rather than a feature of the world, your answer should be the same for real life and the counterfactual.” This does not follow. The correct answer is the same, yes, but the best answer you can give depends on your state of knowledge, not on the unknown true answer. I would argue that you should give the best answer you can, since that’s the only way to give an answer at all.
The question wasn’t “What would you write if the calculator said odd?”. It was “Given that you already know your calculator says even, what answer would you like written down in the counterfactual in which the calculator said odd?”. This means that you are not obligated to ignore any evidence in either real life or the counterfactual, and the answers are the same in each. Therefore your probability distribution should be the same in regards to the answers in each.
Omega asks you “what is the true answer?” Vladimir asks you “what does Omega say in the counterfactual that your calculator returned odd?” Since Omega always writes the true answer at the end, the question is equivalent to “what is the true answer if your calculator returned odd?” Since the true answer is not affected by the calculator, this is further equivalent to “What is the true answer?”
So it’s possible we were just answering different questions.

No, in this problem Omega writes whatever you tell Omega to write, whether it’s true or not. (Apparently Omega does not consider that a lie.)
Ah, hm, I missed that. I’d just assumed “determine” was meant in the other sense. So there’s no effect from being correct or incorrect? This post seems to get less interesting under more analysis.

“What is the true answer?” is the question I was trying to answer. What question are you trying to answer?

The same.
Ah, wait! By “the answer” in your last sentence (in regards to the answers in each), did you mean the true answer, not your own answer? That would be much more… factually correct, though your second to last sentence still makes it sound like you’re counting fictional evidence.
Yes, I meant the true answer. And my point was that if Omega took the correct answer into account when creating the counterfactual, the evidence gained from the counterfactual is not fictional.

Yay! It looks like I’ve managed to understand you then.
Imagine this scenario happens 10000 times, with different formulae.
Given our prior, 5000 of the times the actual answer is even, and 5000 times the answer is odd.
In 4950 of the 5000 Q-is-even cases, the calculator says even. And in the other 50 cases of Q-is-even, the calculator says odd. Then, in 4950 of the Q-is-odd cases, the calculator says odd, and in 50 cases it says even. Note that we still have 9900 cases of the calculator being right and 100 cases of it being wrong.
Omega presents you with a counterfactual world that might be one of the 50 cases of Q-is-even where the calculator says odd, or one of the 4950 cases of Q-is-odd where it says odd. So you’re equally likely (5000:5000) to be in either scenario (Q-is-odd, Q-is-even) for actually writing down the right answer (as opposed to writing down the answer the calculator gave you).
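The bookkeeping in the comment above can be tabulated directly. A minimal sketch, assuming only the 50/50 prior over the parity and the calculator’s 99% accuracy:

```python
# Tabulate the 10000 imagined runs from the comment above.
N = 10000
q_even = N // 2              # 5000 runs where the true answer is even
q_odd = N - q_even           # 5000 runs where it is odd

even_says_even = q_even * 99 // 100       # 4950: calculator right
even_says_odd = q_even - even_says_even   # 50: calculator wrong
odd_says_odd = q_odd * 99 // 100          # 4950: calculator right
odd_says_even = q_odd - odd_says_odd      # 50: calculator wrong

right = even_says_even + odd_says_odd     # 9900 correct readings overall
wrong = even_says_odd + odd_says_even     # 100 incorrect readings overall

# Pool of runs matching the counterfactual "the calculator says odd":
counterfactual_pool = even_says_odd + odd_says_odd
print(right, wrong, counterfactual_pool)  # 9900 100 5000
```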
I’m still not following. Either the answer is even in every possible world, or it is odd in every possible world. It can’t be legitimate to consider worlds where it is even and worlds where it is odd, as if they both actually existed.
If you don’t know which is the case, considering such possibly impossible possible worlds is a standard tool. When you’re making a decision, all possible decisions except the actual one are actually impossible, but you still have to consider those possibilities, and infer their morally relevant high-level properties, in the course of coming to a decision. See, for example, Controlling Constant Programs.

Which is the case? What do you do if you’re uncertain about which is the case?
Your initial read off your calculator tells you the answer is even, with 99% certainty.
Now Omega comes in and asks you to consider the opposite case. It matters how Omega decided what to say to you. If Omega was always going to contradict your calculator, then what Omega says offers no new information. But if Omega essentially had its own calculator, and was always going to tell you the result even if it didn’t contradict yours, then the probabilities become 50%.
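The difference between the two policies described above can be made concrete with Bayes’ theorem. A sketch under stated assumptions: a 50/50 prior on the parity, a 99%-accurate calculator, and, for the second policy, an equally accurate independent calculator on Omega’s side that happened to say “odd”.

```python
# Your calculator says "even"; update from a 50/50 prior.
prior_even = 0.5
like_even = 0.99   # P(calc says even | truth is even)
like_odd = 0.01    # P(calc says even | truth is odd)
post_even = (prior_even * like_even
             / (prior_even * like_even + (1 - prior_even) * like_odd))
print(round(post_even, 2))  # 0.99

# Policy 1: Omega always contradicts your calculator. Its "odd" carries
# no information, so the posterior stays at 0.99.

# Policy 2: Omega reports an independent 99%-accurate calculator that
# said "odd". Joint likelihood of (yours even, Omega's odd):
like2_even = like_even * 0.01   # given truth even
like2_odd = like_odd * 0.99     # given truth odd
post2_even = (prior_even * like2_even
              / (prior_even * like2_even + (1 - prior_even) * like2_odd))
print(round(post2_even, 2))  # 0.5: the contradiction cancels your evidence
```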
True, but I’d like to jump in and say that you can still make a probability estimate with limited information—that’s the whole point of having probabilities, after all. If you had unlimited information it wouldn’t be much of a probability.
“So you are more likely to be in the first scenario.” Yes. You’ve most likely observed the correct answer, says observational knowledge. The argument in the parent comment doesn’t disagree with Nisan’s point.
I’m not following you.
Imagine this scenario happens 10000 times, with different formulae.
In 9900 of those cases, the calculator gives the correct answer, and Omega asks what the answer is in the counterfactual where it gave the incorrect one.

In 100 of those cases, the calculator gives the incorrect answer, and Omega asks what the answer is in the counterfactual where it gave the correct one.
So you are more likely to be in the first scenario.
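The 9900/100 split above reduces to a one-line count. A sketch, assuming only the calculator’s 99% accuracy:

```python
# Direct count of the scenarios described in the comment above.
N = 10000
correct = N * 99 // 100   # 9900 runs where the calculator's reading is right
incorrect = N - correct   # 100 runs where it is wrong

# Holding a reading, you are 99% likely to be in the first scenario.
print(correct / N)  # 0.99
```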