The simplest case is when the fact being considered counterfactually comes from a given observation, so that you can say explicitly where the parameter enters the system, and use the system’s dynamic specification to see how its behavior depends on that parameter. That’s the case with the coin and the random digit index.
The 10000th digit of pi is one step more complicated, but it’s still independent of most of your knowledge, so it’s conceptually easy to localize knowledge about it in your mind. Once you start considering the question, knowledge about its answer starts affecting your dynamics, and this influence can likewise be tracked to its source. That’s why I introduced Pi(n) as a local expression: all the knowledge in the algorithm about the answer to this question comes from this single procedure, so by varying its contents you can examine the impact of its different values on the future behavior.
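Here is a minimal sketch of what localizing the knowledge in a single procedure might look like (the agent and its even/odd decision rule are invented for illustration; only the Pi(n)-as-parameter structure is the point):

```python
# Toy rendering of "Pi(n) as a local expression": the agent's only
# knowledge of the digit flows through the procedure it is handed,
# so varying that procedure's contents is a well-defined surgery.
# (The agent and its decision rule are made up for this sketch.)

def pi_digit(n):
    """Factual version: nth digit of pi after the decimal point
    (hardcoded prefix, enough for the example)."""
    return int("14159265358979323846"[n - 1])

def agent(pi_digit_proc, n):
    # Everything the agent knows about Pi(n) comes from this one
    # source, so its influence on behavior can be tracked to it.
    return "take the bet" if pi_digit_proc(n) % 2 == 0 else "decline"

print(agent(pi_digit, 5))      # digit 5 of pi is 9 (odd): "decline"
print(agent(lambda n: 4, 5))   # surgery: even digit -> "take the bet"
```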
Whether or not 1 is even is a much more pervasive fact, so the surgery that changes it will be hard and not at all intuitively obvious. The disagreement, then, seems to be that you trust your intuition about whether it’s possible to make 1 an even number in your mind, while I trust the generalization of the idea that you can change whether the coin lands on one side or the other, whether Pi(10000) is even or odd, and the answers to arbitrarily more pervasive questions as well.
This does depend a lot on what Omega understands by the question (how Omega’s algorithm logically depends on the question, and on your algorithm), which is related to my unwillingness to conclude that mutual cooperation is the clear-cut outcome of the PD. In this thought experiment, that understanding is mostly specified; in other cases an intuitive grasp of the problem won’t be enough.
If a theory of logical counterfactuals is to apply to statements of the form “If X were true, then Y would be true”, do we need to restrict the forms of X and Y, or can they be arbitrary mathematical propositions?
For example, does it make sense to ask something like, “What is 13*3, if 3*3 were 8?” An obvious answer is “38”, but what if you’re doing multiplication in binary?
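(The “38” comes out of decimal long multiplication, where 3*3 appears as a localized sub-step: 13*3 = 10*3 + 3*3 = 30 + 8 = 38. In binary long multiplication the only digit products involve 0 and 1, so “3*3” never appears as a sub-step, and it’s not obvious where the counterfactual should take hold.)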
I don’t see why a theory of counterfactuals couldn’t apply to mathematical propositions. After all, our cognitive architecture uses causality at a primitive level, and that same architecture is what gets taught math.
And certainly, while learning math, you were taught results that didn’t “seem” right at the time, so you worked backwards until you could understand why a result (like 2+6 = 8) makes sense.
So you just have to imagine yourself in a similar situation with math, learning it for the first time. If everyone in class seemed to understand multiplication but you, and it were also a fact that 3*3 = 8, what process would you figure was actually going on when you multiply? Then apply that to 13*3.
To this I ask: “Which 3*3?” The whole procedure is something done with a description of a program (system), and any facts we can speak of as holding for the system are properties of the system’s “mind”. Thus, the fact of what 3*3 is must be located somewhere specific (more generally, as a property) for it to be meaningful to talk about this fact in relation to the system. You are considering the interaction between this fact, as a parameter, and the rest of the system, and that activity requires treating both on an equal footing.
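As a concrete sketch (the table-based multiplier is my own toy rendering, not anything canonical): if the system’s single-digit products live in one explicit table, then “3*3 = 8” has a definite location, and the surgery is just an edit to that entry:

```python
# A toy "system" whose knowledge of single-digit products lives in one
# explicit place: a lookup table. Counterfactual surgery on the fact
# "3*3 = 9" is then just an edit to table[(3, 3)].

def make_times_table():
    return {(a, b): a * b for a in range(10) for b in range(10)}

def long_multiply(x, y, table):
    """Grade-school decimal multiplication, consulting the table
    for every single-digit product."""
    result = 0
    for i, xd in enumerate(reversed(str(x))):
        for j, yd in enumerate(reversed(str(y))):
            result += table[(int(xd), int(yd))] * 10 ** (i + j)
    return result

factual = make_times_table()
counterfactual = dict(factual)
counterfactual[(3, 3)] = 8          # the surgery: "3*3 is 8"

print(long_multiply(13, 3, factual))         # 39
print(long_multiply(13, 3, counterfactual))  # 38
```

The “Which 3*3?” worry is then visible directly: a binary multiplier has no (3, 3) entry anywhere, so the same counterfactual has no site to act on in that system.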
When you, as a human, read the question, you may try to interpret it as pointing to a specific subsystem, as I did in the post. More generally, the question is only meaningful in this way if it admits such an interpretation.
I think I sort of see what you mean. Perhaps this is an avenue worth exploring, given that we don’t seem to have many other suggestions on how to solve logical uncertainty. I’ll have to think on this more.