How do you get to Prf(S)=>S? It works with something more like [Prf(S)=>Q]=>S for non-obvious Q. This setup does have a problem with exploding after figuring out how to maximize utility, but it doesn’t seem to explode prior to that point (and I hope there might be a less perfectionist variant that gradually improves the situation without exploding at any point, instead of making a single leap to maximal utility value).
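(Schematically, writing B for a triggering statement and taking Q to be something like "U is maximal": an axiom of the form Prf(S)=>S collapses by Löb's theorem, which turns any provable instance of Prf(S)=>S into a proof of S itself. A schema shaped more like

    [Prf(B) => U is maximal] => B

only fires for a given B once the theory shows that B being proved would give maximal utility. The explosion only arrives once "U is maximal" becomes provable outright, because at that point the antecedent holds trivially for every statement.)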
No, I mean, the agent actually performs the action once it has proved it, right? So Prf(A=A1) implies that A will indeed be A1. And assuming the agent has access to its own source code, it will know that.
The thing being proved is not the action, it’s an undefined propositional constant symbol. The action responds to the fact that the propositional symbol gets proved. See the example at the end of the second section, and keep track of the distinction between A and B.
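(Schematically, and only as a gloss on that example: the agent's program contains rules along the lines of

    if B1 gets proved, perform action A1
    if B2 gets proved, perform action A2

where B1 and B2 are otherwise undefined propositional constant symbols. What gets proved is one of the Bi, not a statement of the form A=Ai; the action then happens because the agent's code reacts to that proof.)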
Thanks, I understand now. The implications:
misled me. I thought they were another set of on-the-fly axioms, but if they are decision rules, this means something like:
And then there are no Löbean issues. Cool! This agent can prove A=Ai without any problems. This should work great for ASP (Agent Simulates Predictor).
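(The chain, assuming the theory contains the agent's source and enough arithmetic to reason about its own proofs: the moral axiom yields a proof of Bi; a theory that proves Bi also proves Prf(Bi); the rule Prf(Bi) => A=Ai is part of the agent's definition, so A=Ai follows. Nothing of the form Prf(A=Ai) => A=Ai is ever assumed, which is the shape Löb's theorem would collapse.)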
They are both, since triggering the moral axiom for B requires having those implications in the consequentialist theory (they are part of the definition of A, and A is part of the definition of U, so the theory knows them).
It does seem that a consequentialist theory could prove what its agents' actions are, if we somehow modify the axiom schema so that it doesn't explode once it proves that maximal U follows from the statements (like B) that trigger those actions. At least the old reasons why this couldn't be done seem to be gone, even if there are now new reasons why it currently can't be.