This implicitly contains a sort of chicken rule, since if the agent can prove that it will not take a particular $\vec{a}$, it can proceed to prove arbitrarily good $(\vec{a},\vec{o})$ for that $\vec{a}$. So, it will want to take that action.
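To make the mechanism concrete, here is a toy single-action sketch (hypothetical names and payoffs, a simplification of the sequential $(\vec{a},\vec{o})$ setting rather than the formalism under discussion). Once the agent's theory proves it won't take an action, every conditional "I take it, therefore such-and-such payoff" holds vacuously, so an agent that maximizes over its best provable conditionals ends up taking that action, which is exactly what an explicit chicken rule would mandate.

```python
# Toy sketch (hypothetical, not the original formalism): a proof-based agent
# that takes the action with the best provable conditional payoff.
ACTIONS = ["left", "right"]
TRUE_PAYOFF = {"left": 5, "right": 10}

def provable_payoffs(theory, action):
    """Payoffs u for which the theory proves 'take(action) -> payoff is u'."""
    if f"not take({action})" in theory:
        # Provably false antecedent: the conditional holds vacuously for
        # every payoff, including arbitrarily good ones (the spurious proof).
        return {float("inf")}
    return {TRUE_PAYOFF[action]}

def choose(theory):
    """Pick the action whose best provable conditional payoff is highest."""
    best = {a: max(provable_payoffs(theory, a)) for a in ACTIONS}
    return max(best, key=best.get)

print(choose(set()))               # 'right': the honestly better action
print(choose({"not take(left)"}))  # 'left': the spurious conditional wins,
                                   # just as an explicit chicken rule would force
```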
I wouldn’t really call this logical causality, by the way; to me, that suggests the ability to take arbitrary mathematical counterfactuals (“What if π=3?”), not only counterfactuals in service of actions.