Upon further thought: there is no objective answer to “what you would do if 1 were even” or “what you would do if the 10001st digit of pi were even” (given your source code). The answer that Omega computes has to be more or less arbitrary, and depends on details of Omega’s source code. If you knew that Omega was going to logical-counterfactually mug you, knew Omega’s source code, and the reward were high enough, then you’d make whatever modifications to your own source code are needed for Omega to compute the “right” answer and reward you (a toy illustration follows the next paragraph).
Therefore, if we include such problems in the problem class for which a decision algorithm should be reflectively consistent, then no decision algorithm is reflectively consistent.
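To make the arbitrariness concrete, here is a purely hypothetical sketch (Python; the patching rule, names, and outputs are my illustration, not anything from the post). Suppose Omega evaluates “what you would do if the digit were even” by crudely patching the false premise in your source text and rerunning it. An agent that knows this rule can write its source so that the patched version returns whatever Omega rewards:

```python
# Hypothetical sketch: one arbitrary rule Omega might use to evaluate
# the logical counterfactual "what would the agent do if the digit
# were even": textually patch the premise in the agent's source and
# re-execute it. A different rule would yield a different answer.

AGENT_SOURCE = '''
def decide():
    digit_is_even = False   # the premise is in fact false (the digit, 5, is odd)
    if digit_is_even:
        return "pay"        # branch crafted so Omega's patch finds the "right" answer
    return "collect reward"
'''

def omega_counterfactual(source: str) -> str:
    """Omega's (arbitrary) evaluation: flip the premise, rerun the agent."""
    patched = source.replace("digit_is_even = False", "digit_is_even = True")
    namespace = {}
    exec(patched, namespace)        # define the patched decide()
    return namespace["decide"]()

print(omega_counterfactual(AGENT_SOURCE))  # -> "pay"
```

The point is not this particular rule but that some such arbitrary choice is forced on Omega, and an agent that knows the choice can shape its source accordingly.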
ETA: Notice that in the version of counterfactual mugging (CM) with a physical coin, or with the n-th digit of pi where Omega computes not what you would do if the digit were even or odd but what you would do if you were told that it is even or odd, there is an objective answer to “what you would do if you were to receive the input ‘coin landed tails’” and to “what you would do if you were to receive the input ‘the 10000th digit of pi is odd’”: simply run your source code on the given input.
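By contrast with the patched-premise case above, the told-the-input version has nothing arbitrary about it. A minimal sketch (Python; the agent and its outputs are illustrative, not from the post):

```python
# Minimal sketch of the objective case: "what would you do if you were
# told X?" is answered by ordinary execution of the agent's source on X.

def agent(observation: str) -> str:
    """The agent's source code: map an observation to an action."""
    if observation == "coin landed tails":
        return "pay"
    if observation == "10000th digit of pi is odd":
        return "pay"
    return "collect reward"

# Anyone holding the source code computes the same, objective answer:
print(agent("coin landed tails"))           # -> "pay"
print(agent("10000th digit of pi is odd"))  # -> "pay"
```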
My understanding of the point of the post was that while a coin may physically land differently, and thus genuinely instantiate the counterfactual, in the digit-of-pi case it is merely my current lack of knowledge (the “logical uncertainty” of the post title) that lets me simulate a kind of pseudo-counterfactual.
Since I do not know the millionth digit of pi, I can still speak meaningfully of the cases where it is and isn’t odd.
The 10001st digit of pi is 5.