The problem of logical counterfactuals is exactly about what it means to “replace all instances” of a subprogram in a larger program, when the subprogram can occur in multiple places, in modified but logically equivalent forms, etc.
I agree that that is a problem shared by this approach to counterfactuals and the FDT logical-counterfactual approach. The particular problem I was hoping this approach avoids is that of assuming mutually exclusive logical facts, such that all but one of them must necessarily be false, with the implications that has for the agent’s consistency and its reasoning about its own actions. Are you saying that they are the same problem, that the second problem is comparatively not worth solving, or something else?
I’m saying they are the same problem. These sentences are logically equivalent:
“A() = 1 provably implies B() = 2”
“There’s a program C provably equivalent to B such that, if we replace all exact instances of A inside C with 1, the resulting program will return 2”
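For concreteness, here is a minimal sketch in Python of the construction witnessing the second sentence from the first. All names are illustrative, and A and B are chosen trivially so that the implication holds:

```python
# Given that "A() = 1 implies B() = 2" is provable, the program C below
# is provably equivalent to B.

def A():
    return 1

def B():
    return 2

def C():
    # If A() == 1, the implication forces B() == 2, so this branch agrees
    # with B(); if A() != 1, we run B() directly. Either way, C() == B().
    return 2 if A() == 1 else B()

def C_substituted():
    # C with every exact instance of A replaced by the constant program 1:
    # the guard becomes `1 == 1`, so the result is 2 regardless of B.
    return 2 if 1 == 1 else B()

assert B() == C() == C_substituted() == 2
```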
I think my core issue with the above is how the problem of “replacing A in B” is specified. Allowing a pass to an arbitrary equivalent program C before replacing exact instances of A seems overly permissive, and to let in exactly the kind of principle-of-explosion issue that logical counterfactuals have. Suppose for instance that A and B both halt with a defined value, say A()=a. As framed above, I can take C to be a program that computes B′()+x(A()−a) (for some x), where B′ is the result of substituting all exact instances of A in B with any program returning a that is not identically A. Then C is (provably) equivalent to B, since A()−a=0; but the substituted version of C computes B′()+x(1−a), so if a≠1 I can ensure that it returns any value I like just by varying x. This doesn’t say anything interesting about the appearance of A in B - in fact, the replacement of B with B′ is exactly to ensure that the main substitution is a no-op.
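To see the exploit concretely, here is a minimal sketch in Python (all names illustrative; here a = 0, so a ≠ 1):

```python
def A():
    return 0          # so a = 0

def A_copy():
    return 0          # also returns a, but is not identically A

def B():
    return A() + 5    # B mentions A; B() = 5

def B_prime():
    return A_copy() + 5   # every exact instance of A in B replaced; still returns 5

x = 3                 # any choice of x works

def C():
    # Provably equivalent to B: A() - a == 0, so the second term vanishes.
    return B_prime() + x * (A() - 0)

def C_substituted():
    # C with every exact instance of A replaced by the constant program 1.
    # B_prime contains no exact instance of A, so only C's own call changes.
    return B_prime() + x * (1 - 0)

assert B() == C() == 5
print(C_substituted())   # 5 + x = 8 here; varying x yields any value
```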
I feel like a statement that more accurately corresponds to the substitution I’m trying to do above would be something like “There’s a program C that can be obtained from B by replacing subprograms that are provably equivalent to A with the constant program returning 1, such that C() = 2”. For one thing, this appears to have the useful property that if B makes no mention of A or anything equivalent to it, then I can’t stop such a C from being equivalent (in fact, identical) to B no matter what the value of A() is. Does this still have the same problem?
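For what this narrower substitution could look like operationally, here is a sketch in Python. The equivalence oracle `is_equiv_to_A` is a hypothetical stand-in for a proof search - program equivalence is undecidable in general, so any real oracle would be partial:

```python
import ast

def substitute(b_source: str, is_equiv_to_A, replacement: str = "1") -> str:
    """Return B's source with every subexpression the oracle judges
    provably equivalent to A replaced by the constant program `1`."""
    class Rewriter(ast.NodeTransformer):
        def visit_Call(self, node: ast.Call):
            self.generic_visit(node)  # rewrite nested calls first
            if is_equiv_to_A(ast.unparse(node)):
                return ast.parse(replacement, mode="eval").body
            return node

    return ast.unparse(Rewriter().visit(ast.parse(b_source, mode="eval")))

# Toy oracle that only recognises literal calls to A; a real one could
# recognise provably-equivalent variants too.
print(substitute("A() + A() + 5", lambda src: src == "A()"))  # "1 + 1 + 5"
print(substitute("2 + 5", lambda src: src == "A()"))          # unchanged: "2 + 5"
```

Because the rewrite only touches occurrences inside B itself, a B that never mentions A (or anything equivalent to it) comes through unchanged, matching the property described above.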
Let’s say the world is a grid of Game of Life cells evolving in discrete time. What counts as a “subprogram”?
Let’s further say that the world contains embedded computers that run programs. Will your notion of “subprogram”, specified purely in terms of Game of Life cells, be strong enough to capture subprograms of these programs?
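(To fix ideas, one synchronous step of such a world is sketched below in Python, assuming a toroidal numpy grid; note that the update rule mentions only cells and neighbour counts, so any notion of “subprogram” would have to be read off of cell patterns rather than anything explicit in the dynamics.)

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous Game of Life step on a toroidal 0/1 grid."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell lives next step iff it has 3 neighbours, or is alive with 2.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)
```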
Again, I agree that the problem of identifying which logical structures (wherever they occur) count as implementing a particular function is a deep and interesting one, and not one that I am claiming to have solved. But, again, I do not agree that it is a problem I have introduced. An FDT agent correctly inferring the downstream causal results of setting FDT(P, G) = a would, in general, have to identify FDT(P, G) being computed inside a Game of Life simulation, if and wherever such a calculation occurred.
While I am indeed interested in exploring the answers to your questions, I don’t see that they pose a specific challenge to the idea that the above notion of counterfactuals might be worth exploring further, in the way that your original claim would.