The thing being proved is not the action, it’s an undefined propositional constant symbol. The action responds to the fact that the propositional symbol gets proved. See the example at the end of the second section, and keep track of the distinction between A and B.
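To make the distinction concrete, here's a minimal sketch (my own illustration; names like Theory and proves are invented, and the real construction is a formal theory plus a proof-searching program, not Python): B is a bare propositional constant that the theory may prove, and A is what the program does upon finding such a proof.

```python
# Hypothetical sketch only: B_i is an undefined propositional constant;
# the action A_i happens because the *program* reacts to "T proves B_i".

class Theory:
    """Stand-in for a formal theory; `proves` abstracts proof search."""

    def __init__(self, provable):
        self.provable = set(provable)

    def proves(self, statement):
        return statement in self.provable


def agent(theory, rules):
    """rules: (B_i, A_i) pairs; return A_i for the first provable B_i."""
    for marker, action in rules:
        if theory.proves(marker):  # the theory proves the constant B_i...
            return action          # ...and the agent responds with action A_i
    return None


# Example: only B2 is provable, so the agent performs A2.
T = Theory({"B2"})
assert agent(T, [("B1", "A1"), ("B2", "A2")]) == "A2"
```

Note the separation: nothing here ever proves a statement about A itself; A is determined by which B the theory proves.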
Thanks, I understand now. The implications:
misled me. I thought they were another set of on-the-fly axioms, but if they are decision rules, this means something like:
And then there are no Löbean issues. Cool! This agent can prove A=Ai without any problems. This should work great for ASP (the Agent Simulates Predictor problem).
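To sketch why Löb's theorem doesn't bite here (my own reconstruction, writing □ for provability in the consequentialist theory): the agent's program turns a proof of the constant Bi into the action Ai, so inspecting the agent's definition gives

□ Bi → A=Ai

and proving A=Ai only requires exhibiting a proof of the separate symbol Bi. Nothing of the self-referential shape □(A=Ai) → A=Ai, the form on which Löb's theorem turns, is ever needed.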
They are both, since triggering the moral axiom for B requires having those implications in the consequentialist theory (they are part of the definition of A, and A is part of the definition of U, so the theory knows them).
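Schematically (again my own rendering of the nesting, not the post's notation), the definitions form a chain, something like

A := Ai for the first i such that Bi is proved;  U := u(A)

so the implications from the Bi to A are written into the definition of A, U is defined from A, and the theory, which contains both definitions, can unfold them when it reasons about U.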
It does seem that a consequentialist theory could prove what its agents' actions are, if we somehow modify the axiom schema so that it doesn't explode as a result of proving the maximality of U following from the statements (like B) that trigger those actions. At least the old reasons why this couldn't be done seem to be gone, even if new reasons have now appeared for why it currently can't be done.