They are logically incoherent in themselves, though. Suppose the agent’s source code is “A”, and that in fact A returns action X. Consider a logical counterfactual “possible world” where A returns action Y. In this counterfactual a contradiction can be deduced: A returns X (by computation/logic), A returns Y (by assumption), and X is not equal to Y. Hence, by the principle of explosion, everything is true.
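The explosion step can be made fully precise. Here is a minimal sketch in Lean 4, with hypothetical names (`runA` stands for the fully determined output of running A; nothing here is anyone’s actual formalization):

```lean
-- Explosion from a logical counterfactual, assuming determinism:
example {Action : Type} (runA x y : Action)
    (hX : runA = x)      -- A returns X, deducible by computation/logic
    (hY : runA = y)      -- counterfactual assumption: A returns Y
    (hne : x ≠ y)        -- X and Y are distinct actions
    (P : Prop) : P :=    -- so any proposition P whatsoever follows
  absurd (hX.symm.trans hY) hne
```

From the two equations we get `x = y`, which contradicts `x ≠ y`, and `absurd` then yields an arbitrary `P`.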
It isn’t necessary to observe that A returns X in real life; it can be deduced from logic alone.
(Note that this doesn’t exclude the logical material conditionals described in the post, only logical counterfactuals.)
Source code doesn’t entirely determine the result; inputs are also required.* Thus “logical counterfactuals” may amount to reasoning about what a program will return if given input y. This can be done by asking “if I had input y instead of x” or “if I input y”, even if I later decide to input x.
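To illustrate the point with a toy sketch (hypothetical names, not anyone’s actual agent): counterfactuals over *inputs* are unproblematic, because a pure function can simply be evaluated on the hypothetical input, even if a different input is actually used.

```python
# A stand-in for some deterministic source code that takes an input.
def program(n: int) -> int:
    """Toy deterministic program: output depends on the input given."""
    return n * 2

actual = program(3)        # the input actually given: x = 3
hypothetical = program(5)  # "what if I had input y = 5 instead?"
print(actual, hypothetical)
```

No contradiction arises here: both evaluations are ordinary facts about the program, since the source code alone never fixed a single output.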
While it can be said that such considerations render one’s “output” conditional on logic, they remain entirely conditional on reasoning about a model, which may be incorrect. It seems more useful to describe such a relation as conditional on one’s models/reasoning, or even on processes in the world. A calculator may be misused: a 2 entered instead of a 3 here, “=” pressed one too many times there, etc.
(Saying it is impossible for a rational agent that knows X to do Y, and observing that agent A is doing Y, does not establish that A is irrational. Even if the premises are true, all that follows is that A is not rational or does not know X.)
*Unless source code is defined as including the inputs.
You are assuming a very strong set of conditions: that determinism holds, that the agent has perfect knowledge of its source code, and that it is compelled to consider hypothetical situations at maximum resolution.
Those are the conditions in which logical counterfactuals are most well-motivated. If there isn’t determinism or known source code, then there isn’t an obvious reason to be considering impossible possible worlds.
Those are the conditions under which counterfactuals are flat-out impossible. But we have plenty of motivation to consider hypotheticals, and we don’t generally know how possible they are.