Above, action and utility are defined separately, with axioms that generally don’t refer to each other. Axioms that define action don’t define utility, and conversely. Moral arguments, on the other hand, define utility in terms of action. If we are sure that one of the moral arguments proved by the agent refers to the actual action (without knowing which one; if we have to choose an actual action based on that set of moral arguments, this condition holds by construction), then actual utility is defined by the axioms of action (the agent) and these moral arguments, without needing preference (axioms of utility).
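For concreteness, a made-up miniature case (the particular arguments and numbers are mine, purely illustrative): say the agent proves the moral arguments "A = A1 implies U = 10" and "A = A2 implies U = 5", and we are sure one of them refers to the actual action. If the actual action is A1, then that fact together with the first argument already yields U = 10; the axioms of utility are not needed to reach this conclusion, only to prove the moral arguments in the first place.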
I’m a little confused here. Here is my understanding so far:
The agent has an axiom-set S and rules of inference, which together define the agent’s theory. Given S and some computational constraints, the agent will deduce a certain set M of moral arguments. The moral arguments contain substrings of the form “U = U1”. The largest constant U1* appearing in such substrings of the moral arguments in M is, by definition, the actual utility. The definition of “actual utility” that I just wrote (which is in the metalanguage, not the agent’s language) is the preference.
In this sense, preference and actual utility are defined by all the axioms in S taken together. Similarly, the actual action is defined by all of S. So, what are you getting at when you talk about having one set of axioms define action while another set of axioms defines utility?
There is some background theory S the agent reasons with, say ZFC. This theory is extended by definitions to define action A and utility U; say these extensions consist of sets of axioms AX and UX. The agent then derives the set of moral arguments M from the theory S+AX+UX. By preference, I refer specifically to UX, which defines utility U in the context of the agent's theory S. But if M is all the moral arguments the agent will ever infer, then S+AX+M defines U just as well as S+AX+UX did. Thus, at that point, we can forget about UX and use M instead.
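To make that concrete, here is a toy sketch of how I think of it (representing each moral argument as an (action, utility) pair, with hard-coded numbers, is my own simplification; actually obtaining M would be proof search in S+AX+UX):

    # Toy model: each moral argument "A = a implies U = u" proved from
    # S+AX+UX is flattened into a pair (a, u).
    def derive_moral_arguments():
        # Stand-in for proof search in S+AX+UX; M is just hard-coded here.
        return {("A1", 5), ("A2", 8), ("A3", 3)}

    def choose_action(M):
        # The agent picks the action whose proved implication promises the
        # largest utility, so one argument in M refers to the actual action
        # by construction.
        return max(M, key=lambda arg: arg[1])[0]

    def actual_utility(M, actual_action):
        # Once the actual action is fixed, M alone pins down U; the utility
        # axioms UX are never consulted at this stage.
        (u,) = {u for (a, u) in M if a == actual_action}
        return u

    M = derive_moral_arguments()
    A = choose_action(M)
    print(A, actual_utility(M, A))  # prints: A2 8

The last function is the whole point: looking up the actual utility in M, given the actual action, makes no further use of UX.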
Okay, thanks. This is clear.
I'm not sure why you want to think in terms of S+AX+M instead of S+AX+UX, though. Doesn't starting with the axiom set S+AX+UX better reflect how the agent actually reasons?
It does start with S+AX+UX, but it ends with essentially S+AX+M. This makes the point of the activity easier to see: by replacing the original axioms with equivalent ones, the agent re-expresses the initially separately defined outcome in terms of action, and then uses that expression (the dependence of outcome on action) to determine the outcome it prefers.