Nope, it follows logically and probabilistically, but not causally—hence the difference.
Let T be the truck overturning, C be the Clippy making paperclips haphazardly, and P be paperclips scattered on the ground.
Given: T → P; C → P; P → probably(C); P → probably(T); C
Therefore, P. Therefore, probably T.
But it’s wrong, because what’s actually going on is a causal network of the form:
T → P ← C
P allows probabilistic inference to T and C, but their states become coupled.
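The coupling can be made concrete with a brute-force enumeration over the joint distribution (a sketch only; the 1% priors and 95% likelihoods below are invented for illustration). Conditioning on P raises the probability of T, but additionally learning C "explains away" the paperclips and drops T back to its prior:

```python
# Collider T -> P <- C: truck overturning (T) and Clippy (C) each cause
# paperclips on the ground (P). All numbers are invented for illustration.
PRIOR_T = 0.01
PRIOR_C = 0.01

def p_paperclips(t, c):
    # Assumed mechanism: either cause almost surely scatters paperclips.
    return 0.95 if (t or c) else 0.001

def posterior_T(given_C=None):
    """P(T=1 | P=1) by enumeration, optionally also conditioning on C."""
    num = den = 0.0
    for t in (0, 1):
        for c in (0, 1):
            if given_C is not None and c != given_C:
                continue
            w = ((PRIOR_T if t else 1 - PRIOR_T)
                 * (PRIOR_C if c else 1 - PRIOR_C)
                 * p_paperclips(t, c))
            den += w
            if t:
                num += w
    return num / den

print(posterior_T())           # ~0.47: paperclips make an overturned truck likely
print(posterior_T(given_C=1))  # ~0.01: knowing Clippy did it explains them away
```

T and C are independent a priori, but once P is observed their states become coupled: evidence for one cause is evidence against the other.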
In a similar way, P ⇔ (Q ⇔ P) is a lossy description of a decision theory that describes one party’s decision’s causal dependence on another’s. If you treat P ⇔ (Q ⇔ P) as an acausal statement, you can show its equivalence to Q, but it is not the same causal network.
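The acausal equivalence can be checked mechanically; a quick truth-table sketch (Python used only as the illustration language) confirms that P ⇔ (Q ⇔ P) takes the same truth value as Q alone in all four cases:

```python
from itertools import product

def iff(a, b):
    # Material biconditional: true exactly when both sides agree.
    return a == b

# P <=> (Q <=> P) evaluates identically to Q in every row of the table.
for p, q in product([False, True], repeat=2):
    assert iff(p, iff(q, p)) == q
print("P <=> (Q <=> P) is truth-functionally equivalent to Q")
```

Which is the point: the equivalence holds only under the acausal, truth-functional reading, and says nothing about the underlying causal network.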
Intuitively, acting based on someone’s disposition toward my disposition is different from deciding someone’s actions. If the parties give strong evidence of each other’s dispositions, that has predictable results in certain situations, but it is still different from determining another’s output.
Nope, it follows logically and probabilistically, but not causally—hence the difference.
Let T be the truck overturning, C be the Clippy making paperclips haphazardly, and P be paperclips scattered on the ground.
Given: T → P; C → P; P → probably(C); P → probably(T); C
Therefore, P. Therefore, probably T.
Well, not to nitpick, but you originally wrote something more like P → maybe(C), P → maybe(T). But your conclusion had a “probably” in it, which is why I said that it didn’t follow.
Now, with your amended axioms, your conclusion does follow logically if you treat the arrow “→” as material implication. But it happens that your axioms are not in fact true of the circumstances that you’re imagining. You aren’t imagining that, in all cases, whenever there are paperclips on the ground, a paperclip truck probably overturned. However, if your axioms did apply, then it would be a valid, true, accurate, realistic inference to conclude that, if a Clippy just used up metal haphazardly, then a paperclip truck probably overturned.
But, in reality, and in the situation that you’re imagining, those axioms just don’t hold, at least not if “→” means material implication. However, they are a realistic setup if you treat “→” as an arrow in a causal diagram.
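The gap between the two readings of “→” can be illustrated with Pearl's do-operator (the numbers below are invented for illustration): observing P is evidence about T, whereas intervening to set P deletes the arrows into P and leaves T untouched:

```python
# T -> P <- C with invented numbers: compare conditioning on P
# with intervening on P (Pearl's do-operator).
PRIOR_T = 0.01
PRIOR_C = 0.01

def p_scatter(t, c):
    # Assumed mechanism for P: either cause almost surely scatters paperclips.
    return 0.95 if (t or c) else 0.001

def observe_P():
    """P(T=1 | P=1): ordinary conditioning over the joint distribution."""
    num = den = 0.0
    for t in (0, 1):
        for c in (0, 1):
            w = ((PRIOR_T if t else 1 - PRIOR_T)
                 * (PRIOR_C if c else 1 - PRIOR_C)
                 * p_scatter(t, c))
            den += w
            if t:
                num += w
    return num / den

def do_P():
    """P(T=1 | do(P=1)): the intervention cuts the arrows into P,
    so T is disconnected from it and keeps its prior."""
    return PRIOR_T

print(observe_P())  # well above the 1% prior: seeing paperclips is evidence
print(do_P())       # exactly the prior: scattering paperclips yourself isn't
```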
But this raises other questions. In a statement such as P ⇔ (Q ⇔ P), how am I to treat the “⇔”s as the arrows of a causal diagram? Wouldn’t that amount to having two-node causal loops? How do those work? Plus, P is exogenous, right? I’m using the decision theory to decide whether to make P true. In Pearl’s formalism, causal arrows don’t point to exogenous variables. Yet you have arrows point to P. How does that work?