Imagine this scenario happens 10000 times, with different formulae.
Given our prior, in 5000 of those runs the actual answer is even, and in 5000 it is odd.
In 4950 of the 5000 Q-is-even cases, the calculator says “even”, and in the other 50 cases of Q-is-even, the calculator says “odd”. Then, in 4950 of the Q-is-odd cases, the calculator says “odd”, and in 50 cases it says “even”. Note that we still have 9900 cases where the calculator is right and 100 cases where it is wrong.
Omega presents you with a counterfactual world that might be one of the 50 cases of Q-is-even, or one of the 4950 cases of Q-is-odd: that is, one of the 5000 worlds in which the calculator reads the opposite of what yours actually showed. So you’re equally likely (5000:5000) to be in either scenario (Q-is-odd, Q-is-even) for actually writing down the right answer (as opposed to writing down the answer the calculator gave you).
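For concreteness, here is a minimal Python sketch of that counting (an illustration assuming only the 50/50 prior and the 99%-reliable calculator described above):

```python
# Tally of the 10000 hypothetical runs: 5000 with Q even, 5000 with Q odd,
# and a calculator that reports the correct parity 99% of the time.
runs = 10_000
q_even = q_odd = runs // 2
accuracy = 0.99

even_and_says_even = round(q_even * accuracy)       # 4950
even_and_says_odd  = q_even - even_and_says_even    # 50
odd_and_says_odd   = round(q_odd * accuracy)        # 4950
odd_and_says_even  = q_odd - odd_and_says_odd       # 50

# Overall the calculator is right 9900 times and wrong 100 times.
assert even_and_says_even + odd_and_says_odd == 9900
assert even_and_says_odd + odd_and_says_even == 100

# Worlds matching Omega's counterfactual (the calculator reads "odd"):
# 50 of them have Q even and 4950 have Q odd, 5000 in total.
says_odd = even_and_says_odd + odd_and_says_odd
print(even_and_says_odd, odd_and_says_odd, says_odd)  # 50 4950 5000
```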
I’m still not following. Either the answer is even in every possible world, or it is odd in every possible world. It can’t be legitimate to consider worlds where it is even and worlds where it is odd, as if they both actually existed.
If you don’t know which is the case, considering such possibly impossible possible worlds is a standard tool. When you’re making a decision, all possible decisions except the actual one are actually impossible, but you still have to consider those possibilities, and infer their morally relevant high-level properties, in the course of coming to a decision. See, for example, Controlling Constant Programs.
Which is the case? What do you do if you’re uncertain about which is the case?
Your initial read off your calculator tells you that Q is even with 99% certainty.
Now Omega comes in and asks you to consider the opposite case. It matters how Omega decided what to say to you. If Omega was always going to contradict your calculator, then what Omega says offers no new information. But if Omega essentially had its own calculator, and was always going to tell you the result even if it didn’t contradict yours, then the probabilities become 50%.
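To spell out why the two cases come apart, compare the likelihood ratios (a minimal sketch assuming a 50/50 prior and 99%-reliable, independent calculators; the numbers are only illustrative):

```python
# Posterior probability that Q is even, given your calculator read "even",
# under the two ways Omega might have decided what to say.
prior_odds = 1.0                # P(even) : P(odd) = 1 : 1
acc = 0.99                      # calculator reliability

# Your own reading of "even" multiplies the odds by 0.99 / 0.01.
odds = prior_odds * (acc / (1 - acc))
print(odds / (1 + odds))        # ~0.99

# Case 1: Omega was always going to contradict your calculator.
# Its statement is equally likely whether Q is even or odd (likelihood ratio 1),
# so it carries no information and the probability stays ~0.99.
odds_case1 = odds * 1.0
print(odds_case1 / (1 + odds_case1))

# Case 2: Omega reports an independent 99%-reliable calculator that read "odd".
# That reading multiplies the odds by 0.01 / 0.99, exactly cancelling yours.
odds_case2 = odds * ((1 - acc) / acc)
print(odds_case2 / (1 + odds_case2))   # 0.5
```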
True, but I’d like to jump in and say that you can still make a probability estimate with limited information—that’s the whole point of having probabilities, after all. If you had unlimited information it wouldn’t be much of a probability.