Indeed. Consider a variant of the thought experiment where in the “actual” world you used a very reliable process, one that is wrong only 1 time in a trillion, while in the counterfactual you’re offered to control, you know only of an old calculator that is wrong 1 time in 10 and that indicated a different answer from the one you worked out. Updateless analysis says that you still have to go with the old calculator’s result.
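A minimal numeric sketch of why (my own toy framing, with a 50/50 prior and independent error rates, neither of which is specified in the original thought experiment): in the actual world the proof-like process swamps the calculator as evidence, but in the counterfactual, where that process no longer bears on the question, only the calculator’s 90% reliability is left to go on.

```python
# Toy numbers only: answers A and B with a 50/50 prior, a proof-like process
# asserting A with error rate 1e-12, an old calculator asserting B with
# error rate 0.1.

def posterior_for_A(evidence):
    """Odds-form Bayes for P(answer is A), given independent reports.

    `evidence` is a list of (says_A, error_rate) pairs."""
    odds = 1.0  # prior odds A:B = 1:1
    for says_A, err in evidence:
        # likelihood ratio P(report | A) / P(report | B)
        odds *= (1 - err) / err if says_A else err / (1 - err)
    return odds / (1 + odds)

# Actual world: both reports count as evidence -- the proof dominates.
print(posterior_for_A([(True, 1e-12), (False, 0.1)]))  # ~1.0: trust the proof

# Counterfactual world: the dependency on your proof is severed, so only the
# calculator's report remains as evidence -- you go with its answer.
print(posterior_for_A([(False, 0.1)]))                 # 0.1: trust the calculator
```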
Knowledge seems to apply only to the event that produced it, even “logical” knowledge. Even if you prove something, you can’t be absolutely sure, so in the counterfactual you trust an old calculator instead of your proof. This would actually be a good variant of this thought experiment (“Counterfactual Proof”), interesting in its own right: it shows that “logical knowledge” has the same limitations, and perhaps highlights the nature of those limitations further.
Do you build counterfactuals the Judea Pearl way, or some other way (for example, the Gary Drescher way of chapter 5 of “Good and Real”)? Or do you think our current formalisms do not “transfer” to handling logical uncertainty (i.e., are not good analogues of a theory of logical uncertainty)?
I don’t have a clear enough idea of how I myself think about counterfactuals to compare. Pearl’s counterfactuals are philosophically unenlightening: they stop at explicit definitions. And I still haven’t systematically read Drescher’s book, only select passages.
The idea I use is that any counterfactual/event is a logically defined set (of possible worlds), equipped with the structures needed to reason about it or its subevents. The definition implies certain properties, such as the event’s expected utility or outcome, in a logically non-transparent way, and we can use these definitions to reason about how the outcome (expected utility, probability, etc.) depends on action-definitions, query-replies, etc., through ambient control.
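A toy rendering of this picture (my own illustrative encoding, not a formalization of ambient control; the variable names and numbers are made up): a world is an assignment of truth values, an event is the set of worlds satisfying its defining formula, and expected utility is a property derived from that definition.

```python
from itertools import product

VARS = ("action", "calc_right")  # hypothetical propositional variables
worlds = [dict(zip(VARS, vals)) for vals in product([False, True], repeat=len(VARS))]

def event(defining_formula):
    """An event is just the set of possible worlds picked out by its definition."""
    return [w for w in worlds if defining_formula(w)]

def expected_utility(ev, utility, prob):
    """Expected utility of an event, derived from its member worlds."""
    total = sum(prob(w) for w in ev)
    return sum(prob(w) * utility(w) for w in ev) / total

# Made-up utility and probability, just to make the example run.
utility = lambda w: 10 if (w["action"] == w["calc_right"]) else 0
prob = lambda w: 0.9 if w["calc_right"] else 0.1

take_calc_answer = event(lambda w: w["action"])
print(expected_utility(take_calc_answer, utility, prob))  # 9.0 under these toy numbers
```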
Pardon me if I repeat someone. Q causes the answer of the calculator, so if we set the calculator’s answer counterfactually we sever the dependency between Q and the calculator, and so we don’t have any knowledge of the counterfactual Q. Whereas if we had a formula R of comparable logical complexity to Q, drawn from a class of formula pairs whose values are 90% correlated, then the dependency is bidirectional, and by counterfactually setting R we gain knowledge about the counterfactual Q. Does “in the counterfactual you trust an old calculator instead of your proof” mean that you don’t agree with this analysis? (I have the impression that the problem statement drifted somewhat from a “counterfactual” to a more “conditional” interpretation where we don’t sever any dependencies.)
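A minimal Pearl-style sketch of the first half of this analysis (my own toy numbers): observing the calculator’s answer is evidence about Q, while intervening on it with do() cuts the Q → calculator edge, so the forced answer carries no information about Q. The contrasting case of the logically correlated formula R is exactly where this purely causal rendering stops being adequate, so it is left out here.

```python
P_Q = 0.5        # prior that the formula Q is true
ACCURACY = 0.9   # calculator reports Q's value correctly 9 times out of 10

def joint(q, calc):
    """P(Q = q, calculator output = calc) in the unintervened model Q -> calc."""
    p_q = P_Q if q else 1 - P_Q
    p_calc_given_q = ACCURACY if calc == q else 1 - ACCURACY
    return p_q * p_calc_given_q

def p_q_given_observe(calc):
    """P(Q | we *observe* the calculator output calc)."""
    return joint(True, calc) / (joint(True, calc) + joint(False, calc))

def p_q_given_do(calc):
    """P(Q | do(calculator output = calc)): the edge Q -> calc is cut,
    so Q keeps its prior."""
    return P_Q

print(p_q_given_observe(True))  # 0.9 -- observation is evidence about Q
print(p_q_given_do(True))       # 0.5 -- intervention tells us nothing about Q
```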