In this possible world, it is the case that “A” returns Y upon being given those same observations. But the output of “A” when given those observations is a fixed computation, so you now need to reason about a possible world that is logically incoherent, given your knowledge that “A” in fact returns X. This possible world is, then, a logical counterfactual: a “possible world” that is logically incoherent.
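To make the incoherence concrete, here is a minimal Python sketch (the function A and the observation tuple are purely illustrative): once A is written down, “A returns Y on these observations” is simply false as a matter of computation.

```python
def A(observations):
    # A fixed computation: on these observations it always returns X.
    return "X"

same_observations = ("same", "observations")  # illustrative stand-in

# The naive counterfactual asks for a world where A returns Y here,
# but that contradicts what running A actually yields:
print(A(same_observations) == "Y")  # False: asserting it would be incoherent
```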
Simpler solution: in that world, your code is instead A’, which is exactly like A, except that it returns Y in this situation. This is the more general solution derived from Pearl’s account of counterfactuals in domains with a finite number of variables (the “twin network construction”).
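Here is a minimal Python sketch of that move (the names A, make_A_prime, and world are purely illustrative, not from Pearl or from the paper mentioned below): instead of asserting that the same program A returns Y, construct a sibling program A' that agrees with A on every input except the intervened one, and run the shared downstream model against both.

```python
def A(obs):
    # The agent's actual, fixed decision procedure: returns X here.
    return "X"

def make_A_prime(base, intervened_obs, forced_output):
    # A program identical to `base`, except that it returns `forced_output`
    # on the single intervened input.
    def A_prime(obs):
        if obs == intervened_obs:
            return forced_output
        return base(obs)
    return A_prime

def world(agent, obs):
    # Everything downstream of the agent's choice, shared between the
    # factual and counterfactual runs (the "twin" structure).
    action = agent(obs)
    return f"outcome given action {action}"

obs = ("same", "observations")
factual = world(A, obs)                       # A returns X here
A_prime = make_A_prime(A, obs, "Y")
counterfactual = world(A_prime, obs)          # A' returns Y; no contradiction
print(factual, "|", counterfactual)
```

The factual and counterfactual runs share everything except the one surgically replaced node, which is the structure of the twin network construction.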
Last year, my colleagues and I published a paper on Turing-complete counterfactual models (“causal probabilistic programming”), which details how to do this, and even gives executable code to play with, as well as a formal semantics. Have a look at our predator-prey example, a fully worked example of how to do this “counterfactual world is same except blah” construction.
http://www.zenna.org/publications/causal.pdf
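To give a flavor of the general recipe, here is a rough abduction-action-prediction sketch in plain Python. It is not the paper's actual interface (see the link above for the real code and semantics), and the toy model is just a stand-in for the predator-prey example: the counterfactual run keeps the exogenous noise inferred from the factual trace and only swaps the agent's code for A'.

```python
import random

def agent_A(obs):
    return "X"          # the factual program

def agent_A_prime(obs):
    return "Y"          # identical except forced to Y at this choice point

def model(u, agent):
    # Toy structural model: exogenous noise u, the agent's observation and
    # action, and an outcome that depends on both.
    obs = u > 0.5
    action = agent(obs)
    outcome = (action == "Y") and (u > 0.7)
    return obs, action, outcome

# Abduction: keep only noise settings consistent with the factual trace
# (obs was True and the agent chose X), here by simple rejection sampling.
random.seed(0)
kept_noise = []
for _ in range(20000):
    u = random.random()
    obs, action, _ = model(u, agent_A)
    if obs and action == "X":
        kept_noise.append(u)

# Action + prediction: rerun the same noise through the intervened program.
cf = [model(u, agent_A_prime)[2] for u in kept_noise]
print("P(outcome | factual trace, do(code = A')) ~=", sum(cf) / len(cf))
```

With the noise abducted from the factual trace, the counterfactual probability comes out around 0.6 rather than the prior 0.3, which is why conditioning before intervening matters.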
Yes, this is a specific way of doing policy-dependent source code, which minimizes how much the source code has to change to handle the counterfactual.
Haven’t looked deeply into the paper yet but the basic idea seems sound.
If the agent is ‘caused’, then in order for its source code to be different, something about the process that produced it must be different. (I haven’t seen this addressed.)