Source code doesn’t entirely determine the result; inputs are also required.* Thus “logical counterfactuals” amount to reasoning about what a program will return if I input y. This can be done by asking ‘what if I had input y instead of x?’ or ‘what if I input y?’, even if I later decide to input x.
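As a minimal sketch (the function `program` and the particular inputs are hypothetical, chosen only for illustration), such a counterfactual query is just evaluating the same source code on an alternative input:

```python
def program(x: int) -> int:
    """A toy piece of 'source code': the result depends on the input,
    not on the code alone."""
    return x * x + 1

actual_input = 3
counterfactual_input = 5

# What the program actually returns, given the input I chose:
actual_output = program(actual_input)                  # 10

# 'What would it have returned if I had input y instead of x?'
# is answered by running the same code on y:
counterfactual_output = program(counterfactual_input)  # 26

print(actual_output, counterfactual_output)
```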
While it can be said that such considerations render one’s “output” conditional on logic, they remain entirely conditional on reasoning about a model, which may be incorrect. It seems more useful to describe such a relation as conditional on one’s models/reasoning, or even on processes in the world. A calculator may be misused: a 2 instead of a 3 here, hitting “=” one too many times there, and so on.
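A toy sketch of that gap (the numbers are hypothetical): the prediction is conditional on my model of the keypresses, which can diverge from what the device actually computed:

```python
# My *model* of the calculation: I intend to compute 2 + 3.
predicted = 2 + 3   # 5

# The *process in the world*: I mistype a 2 as a 3,
# so the calculator actually computes 3 + 3.
actual = 3 + 3      # 6

# My conclusion was conditional on my model of the process,
# not on logic itself; here that model was simply wrong.
print(predicted, actual, predicted == actual)   # 5 6 False
```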
(Saying it is impossible for a rational agent that knows X to do Y, and observing that agent A is doing Y, does not establish that A is irrational; even if the premises are true, all that follows is that A is not rational or does not know X.)
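Schematically (the notation here is mine, not from the original), the inference is modus tollens followed by De Morgan:

```latex
% R(A) = A is rational, K(A,X) = A knows X, Y(A) = A does Y
(R(A) \land K(A,X)) \to \lnot Y(A), \quad Y(A)
\;\vdash\; \lnot\bigl(R(A) \land K(A,X)\bigr)
\;\equiv\; \lnot R(A) \lor \lnot K(A,X)
```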
*Unless source code is defined as including the inputs.