I thought they were another set of on-the-fly axioms, but if they are decision rules, this means something like:
They are both, since triggering the moral axiom for B requires having those implications in the consequentialist theory (they are part of the definition of A, and A is part of the definition of U, so the theory knows them).
It does seem that a consequentialist theory could prove what its agents' actions are, if we somehow modify the axiom schema so that it doesn't explode as a result of proving the maximality of U following from the statements (like B) that trigger those actions. At least the old reasons why this couldn't be done seem to be gone, even if there are now new reasons why it currently can't.
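To make the structure being discussed a bit more concrete, here is a minimal sketch, assuming the usual proof-based-agent setup: the "moral axioms" are implications of the form "A() = a → U = u", and the agent acts on the best such implication it can find a proof of. The proof search is stubbed out, and all names here are illustrative, not part of the original discussion.

```python
from typing import Iterable, Optional, Tuple

Action = int
Utility = float
Implication = Tuple[Action, Utility]  # encodes "A() = a -> U = u"

def provable_implications(theory: str) -> Iterable[Implication]:
    """Stand-in for a proof search inside the consequentialist theory.

    In the setup under discussion, statements like B are already part of
    the definitions of A and U, so the theory "knows" these implications.
    """
    raise NotImplementedError("placeholder for an actual proof enumerator")

def agent(theory: str) -> Optional[Action]:
    """Act on the provable implication with the highest utility.

    The worry in the text is about the theory itself proving which of
    these implications is maximal: with a naive axiom schema, that would
    let it derive the agent's action and risk blowing up (exploding).
    """
    best: Optional[Implication] = None
    for a, u in provable_implications(theory):
        if best is None or u > best[1]:
            best = (a, u)
    return best[0] if best is not None else None
```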