Would you (or your ideal of rationality) still give $100 if I replace “10000th decimal digit of pi” with “the 10000th positive integer”, or with “the smallest non-negative integer”, or with just “0”?
If not, what’s special about “10000th decimal digit of pi”? (Apparently you’re assuming that you can compute it in your head, so that’s not the difference.)
If yes, how do you (or Omega) compute a counterfactual where 0 is odd, or 1 is even?
Upon further thought: there is no objective answer to “what you would do if 1 were even” or “what you would do if the 10001st digit of pi were even” (given your source code). The answer that Omega computes has to be more or less arbitrary, and depends on details of Omega’s source code. If you knew that Omega was going to logical-counterfactually mug you, and you knew Omega’s source code, and the reward were high enough, then you’d make whatever modifications to your own source code are necessary so that Omega would compute the “right” answer and reward you.
Therefore, if we include such problems in the problem class for which a decision algorithm should be reflectively consistent, then no decision algorithm is reflectively consistent.
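To make that arbitrariness concrete, here is a minimal Python sketch. Everything in it (the function agent_reasoning and both toy Omegas) is my own illustration, not anything specified in the thread: two Omegas that perform the surgery “1 is even” at different sites in the same source arrive at different answers.

```python
# A sketch of why "what you would do if 1 were even" has no canonical
# answer given only the agent's source. Both Omegas below are invented
# for illustration; each performs the surgery at a different site.

def agent_reasoning(is_even):
    """The agent's source, with its parity knowledge exposed as a procedure."""
    return "give $100" if is_even(1) else "refuse"

def omega_v1(agent):
    # Surgery at the query site: force the single query "is 1 even?" to True.
    return agent(lambda n: True if n == 1 else n % 2 == 0)

def omega_v2(agent):
    # This Omega represents parity directly as `n % 2` and performs its
    # surgery on a clause the agent never consults, so the counterfactual
    # premise has no effect and the factual behavior comes out.
    return agent(lambda n: n % 2 == 0)

print(omega_v1(agent_reasoning))  # -> give $100
print(omega_v2(agent_reasoning))  # -> refuse
```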
ETA: Notice that in the version of counterfactual mugging (CM) with a physical coin, or with the n-th digit of pi where Omega computes not what you would do if the digit were even or odd, but what you would do if you were told that it is even or odd, there is an objective answer to “what you would do if you were to receive the input ‘coin landed tails’” and “what you would do if you were to receive the input ‘10000th digit of pi is odd’”: it simply involves running your source code on the given input.
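By contrast with the logical version, the input-based version is straightforward to make objective. A minimal sketch, assuming the agent is just a function from observations to actions (the function and its input strings are illustrative, not from the post):

```python
# "What would you do if you received input X?" has an objective answer:
# run the source code on X. No logical surgery is needed.

def agent(observation: str) -> str:
    """A toy agent whose response to any input is fixed by its source code."""
    if observation == "coin landed tails":
        return "give $100"
    if observation == "10000th digit of pi is odd":
        return "refuse"
    return "do nothing"

# Omega answers the counterfactual-over-inputs by plain evaluation:
print(agent("coin landed tails"))           # -> give $100
print(agent("10000th digit of pi is odd"))  # -> refuse
```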
My understanding of the point of the post was that while a coin may physically land differently and thus instantiate the counterfactual, it is merely my current lack of knowledge (the “logical uncertainty” in the post title) that allows me to simulate a kind of pseudo-counterfactual in this case.
Since I do not know the millionth digit of pi, I can still speak meaningfully of the cases where it is and isn’t odd.
The 10001st digit of pi is 5.
The simplest case is when the fact being considered counterfactually is received as a given observation, so that you can explicitly say where the parameter is in the system and use the system’s dynamic specification to see what happens depending on that parameter. That’s the case with the coin and with the random digit index.
The 10000th digit of pi is one step more complicated, but it’s still independent of most of your knowledge, so it’s conceptually easier to localize knowledge about it in your mind. Once you start considering the question, knowledge about its answer starts affecting your dynamics, and this influence can likewise be tracked to its source. That’s why I introduced Pi(n) as a local expression: all the knowledge in the algorithm about the answer to this question comes from this single procedure, so by varying its contents you can examine the impact of its different values on future behavior.
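As a sketch of that locality (the names Pi_actual, Pi_surgered, and decide are mine, not code from the post): if every bit of the agent’s knowledge about the digit flows through one procedure, the counterfactual is an ordinary substitution.

```python
# Counterfactual surgery on a localized fact: all knowledge about the
# 10000th digit of pi is routed through a single procedure Pi, so
# "what if Pi(10000) were odd?" is just swapping that procedure out.

def Pi_actual(n: int) -> int:
    # Stand-in for a real digit-of-pi computation; the 10000th decimal
    # digit of pi is in fact 8.
    return 8

def Pi_surgered(n: int) -> int:
    # Same interface, different "fact": the counterfactual digit.
    return 7

def decide(Pi) -> str:
    """The agent, parameterized by its only source of knowledge about pi."""
    return "give $100" if Pi(10000) % 2 == 1 else "refuse"

print(decide(Pi_actual))    # behavior given the true digit  -> refuse
print(decide(Pi_surgered))  # behavior under the surgery     -> give $100
```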
Whether or not 1 is even is a much more pervasive fact, so the surgery that changes it will be hard and not at all intuitively obvious. So the disagreement seems to be that you trust your intuition about whether it’s possible to make 1 an even number in your mind, while I trust the generalization of the idea that you can change whether the coin lands on one side or the other, whether Pi(10000) is even or odd, and arbitrarily more pervasive facts as well.
This does depend a lot on what Omega understands by the question (how Omega’s algorithm logically depends on the question, and on your algorithm), which is related to my unwillingness to conclude that mutual cooperation is the clear-cut outcome of the PD. In this thought experiment, this understanding is mostly specified; in other cases, an intuitive grasp of the problem won’t be enough.
If a theory of logical counterfactuals is to apply to statements of the form “If X were true, then Y would be true”, do we need to restrict the forms of X and Y, or can they be arbitrary mathematical propositions?
For example, does it make sense to ask something like, “What is 13*3, if 3*3 were 8?” An obvious answer is “38”, but what if you’re doing multiplication in binary?
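One way to make the question precise (the table-based multiplier below is my own construction, only a sketch): in decimal long multiplication, 13*3 decomposes as 1*3 in the tens place plus 3*3 in the ones place, so surgically setting the table entry for 3*3 to 8 yields 30 + 8 = 38. In binary, the digit products involve only 0s and 1s, the “3*3” entry is never consulted, and the same surgery changes nothing, which is exactly why the answer depends on which occurrence of 3*3 the counterfactual targets.

```python
# A decimal long-multiplication routine driven by a single-digit times
# table, with the "fact" 3*3 localized as one table entry we can surger.

def times_table(counterfactual: bool = False):
    table = {(a, b): a * b for a in range(10) for b in range(10)}
    if counterfactual:
        table[(3, 3)] = 8  # the counterfactual premise "3*3 = 8"
    return table

def long_multiply(x: int, digit: int, table) -> int:
    """Multiply x by a single digit, consulting the table place by place."""
    result, place = 0, 1
    while x > 0:
        result += table[(x % 10, digit)] * place
        x //= 10
        place *= 10
    return result

print(long_multiply(13, 3, times_table()))      # -> 39, the actual product
print(long_multiply(13, 3, times_table(True)))  # -> 38, under "3*3 = 8"
```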
I don’t see why a theory of counterfactuals couldn’t apply to mathematical propositions. After all, our cognitive architecture uses causality at a primitive level, and the same architecture is taught math.
And certainly, while learning math, you were taught results that didn’t “seem” right at the time, so you worked backwards until you could understand why such a result (like 2+6 = 8) makes sense.
So you just have to imagine yourself in a similar situation with math, learning it for the first time. If everyone in class seemed to understand multiplication but you, and it were also a fact that 3*3 = 8, what process would you figure was actually going on when you multiply? Then apply that to 13*3.
To this I ask: “Which 3*3?” The whole procedure is something that is done with a description of a program (system), and any facts which we can speak of as holding for the system are properties of the system’s “mind”. Thus, the fact of what 3*3 is must be located somewhere specific (more generally, as a property) for it to be meaningful to talk about this fact in relation to the system. You are considering the interaction between this fact, as a parameter, and the rest of the system, and this activity requires seeing both on an equal footing.
When you, as a human, read the question, you may try to interpret it as pointing to a specific subsystem, as I did in the post. More generally, the question is only meaningful in this way if it admits such an interpretation.
I think I sort of see what you mean. Perhaps this is an avenue worth exploring, given that we don’t seem to have many other suggestions on how to solve logical uncertainty. I’ll have to think on this more.
The 10000th decimal digit of pi is 8, by the way (not counting the leading 3).