This is unlike the situation with A and O, where the agent can’t just perform action A, since it’s not defined in the way the agent knows how to perform (even though A is (provably) equivalent to one of the constants, the agent can’t prove that for any given constant).
It’s probably a good idea to maintain the distinction between a constant symbol c and the element v(c) assigned to c by the interpretation’s valuation map v into the domain of discourse.
For example, I found the quote above confusing, but I think that you meant the following: “This is unlike the situation with A and O, where the agent can’t just perform action v(A), since it’s not defined in the way the agent knows how to perform. It is true that we can prove, in the metalanguage, that there exists an action X such that v(A) = X. However, there is no action X such that, for some constant symbol ‘X’ such that v(‘X’) = X, the agent can prove [A = ‘X’].”
Not what I meant. The set of possible actions is defined syntactically, as a set of formulas that the agent, from outside its theory, can recognize and directly act upon. The definition of A (as it’s syntactically given) is not one of those. Thus, the agent can’t perform A directly; the best it can hope for is to find another formula B which defines the same value (in all models) and is a possible action. The agent stops short of this goal, proving instead a moral argument involving A and B, [A=B ⇒ U=large], and enacts this moral argument by performing B, which is a possible action (as a formula), unlike A. The agent, however, can’t prove [A=B], even though [A=B] is provable in the agent’s theory (see the first named section).
Not what I meant. The set of possible actions is defined syntactically, as a set of formulas that the agent, from outside its theory, can recognize and directly act upon. The definition of A (as it’s syntactically given) is not one of those. Thus, the agent can’t perform A directly; the best it can hope for is to find another formula B which defines the same value (in all models) and is a possible action. The agent stops short of this goal, proving instead a moral argument involving A and B, [A=B ⇒ U=large], and enacts this moral argument by performing B, which is a possible action (as a formula), unlike A.
This looks to me like an explanation for why my original interpretation of your quote is a true statement. So I’m worried that I’m still misunderstanding you, since you say that my interpretation is not what you meant.
Here is my interpretation again, but in more syntactic terms:
“This is unlike the situation with A and O, where the agent can’t just perform action v(A), since it’s not defined in the way the agent knows how to perform. It is true that we can prove that, in every interpretation v, there is an action-constant X such that v(A) = v(X). However, there is no action-constant X such that the agent can prove [A = X].”
The rest of your parent comment explains why the symbol A will never appear in the position within moral arguments where action-constant symbols appear. Is that right?
While I don’t disagree with what you are saying in your reformulation (but for insignificant details), it’s a different statement from the one I was making. In my own words, you are stating that the agent can’t prove A=X for any constant X denoting a possible action, but I’m not stating that at all: I’m only saying that A itself is not a possible action, that is, as a formula it is not an element of the set of formulas that are possible actions. I also don’t see why you’d want that v(-) thing in this context: the agent performs an action by examining formulas for possible actions “as text strings”, not by magically perceiving their semantics.
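In symbols, writing PossibleActions for that set of formulas (just an ad hoc label for this comment): the claim is only that A ∉ PossibleActions; it is not the claim that the agent can’t prove [A=X] for the various X ∈ PossibleActions.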
I also don’t see why you’d want that v(-) thing in this context: the agent performs an action by examining formulas for possible actions “as text strings”, not by magically perceiving their semantics.
It’s how I help myself to keep the map and the territory distinct. v(A), under the standard interpretation, is what the agent does. The constant A, on the other hand, is a symbol that the agent uses in its reasoning, and which isn’t even defined in such a way that the agent can directly perform what it represents.
The valuation v is for my benefit, not the agent’s. The agent doesn’t use or perceive the semantics of its theory. But I do perceive the semantics when I reason about how the agent’s reasoning will effect its actions.
What does the standard interpretation have to do with this? If v(-) maps from formulas to actions, fine, but then A is just a string, so interpretations don’t matter.
I think that I’m missing your point. Of course, the interpretation doesn’t affect what the agent or its theory can prove. Is that all you’re saying?
The reason that I’m led to think in terms of semantics is that your post appeals to properties of the agent that aren’t necessarily encoded in the agent’s theory. At least, the current post doesn’t explicitly say that these properties are encoded in the theory. (Maybe you made it clear how this works in one of your previous posts. I haven’t read all of those closely.)
The properties I’m thinking of are (1) the agent’s computational constraints and (2) the fact that the agent actually does the action represented by the action-constant that yields the highest computed utility, rather than merely deducing that that constant has the highest computed utility.
For example, you claim that [A=1] must be derivable in the theory if the agent actually does A. The form of your argument, as I understand it, is to note that [A=1] is true in the standard interpretation, and to show that [A=1] is the sort of formula which, if true under one interpretation, must be true in all, so that [A=1] must be a theorem by completeness. I’m still working out why [A=1] is the required kind of formula, but the form of your argument does seem to appeal to a particular interpretation before generalizing to the rest.
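Schematically, the argument as I’m reading it (my own rendering of its form, not a quote): (1) [A=1] is true in the standard interpretation, i.e. v(A) = v(1) for that particular v; (2) [A=1] is supposed to be the kind of sentence that, if true in one interpretation of the theory, is true in all of them; (3) a sentence true in every interpretation (model) of the theory is a theorem of the theory, by completeness, so [A=1] is derivable.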
For example, you claim that [A=1] must be derivable in the theory if the agent actually does A.
If the agent actually does 1 (I assume you meant to say). Again, I don’t see what you are trying to say. I agree with the last paragraph (you could recast the argument that way), but don’t understand the third paragraph.
If the agent actually does 1 (I assume you meant to say).
Whoops. Right.
Again, I don’t see what you are trying to say. I agree with the last paragraph (you could recast the argument that way), but don’t understand the third paragraph.
Okay. Let me try to make my point by building on the last paragraph, then. According to my understanding, you start out knowing that v(A) = v(1) for a particular interpretation v. Then you infer that v’(A) = v’(1) for an arbitrary interpretation v’. Part of my reason for using the v(.) symbol is to help myself keep the stages of this argument distinct.
According to my understanding, you start out knowing that v(A) = v(1) for a particular interpretation v.
If v is an interpretation, it maps (all) terms to elements of the corresponding universe, while possible actions are only some formulas, so the associated mapping K would map some formulas to the set of actions (which need not have anything to do with any universe). So, we could say that K(1)=1′, but K(A) is undefined. K is not an interpretation.
If v is an interpretation, it maps (all) terms to elements of the corresponding universe, while possible actions are only some formulas, . . .
Maybe we’re not using the terminology in exactly the same way.
For me, an interpretation of a theory is an ordered pair (D, v), where D is a set (the domain of discourse), and v is a map (the valuation map) satisfying certain conditions. In particular, D is the codomain of v restricted to the constant symbols, so v actually contains everything needed to recover the interpretation. For this reason, I sometimes abuse notation and call v itself the interpretation.
The valuation map v:
maps constant symbols to elements of D,
maps n-ary function symbols to maps from D^n to D,
maps n-ary predicate symbols to subsets of D^n,
maps sentences of the theory into {T, F}, in a way that satisfies the usual recursive truth conditions for the connectives and quantifiers.
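(For instance, a toy example of my own, not from the post: for the language of arithmetic with 0, S and +, one interpretation takes D to be the natural numbers, v(0) = 0, v(S) = the successor function, v(+) = addition, and it then assigns T to [0 + S0 = S0] and F to [S0 = 0].)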
Now, in the post, you write
Each such statement defines a possible world Y resulting from a possible action X. X and Y can be thought of as constants, just like A and O, or as formulas that define these constants, so that the moral arguments take the form [X(A) ⇒ Y(O)].
I’ve been working with the first of those options, i.e. taking A and 1 to be constant symbols. Hence, given an interpretation (D, v), v(A) and v(1) are elements of D, so we can ask whether they are the same elements.
I agree with everything you wrote here...
What was your “associated mapping K”? I took it to be what I’m calling the valuation map v. That’s the only map that I associate to an interpretation.
K has a very small domain. Say, K(“2+2”)=K(“5”)=“pull the second lever”, K(“4”) undefined, K(“A”) undefined. Your v doesn’t appear to be similarly restricted.
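If it helps, here is the same point as a toy Python sketch (purely illustrative; the dictionary below stands in for the real set of possible actions, and v is left out because it would have to be defined on every term):

    # Toy illustration: K is a partial map on formula strings, defined only on
    # the formulas designated as possible actions; it knows nothing about any
    # domain of discourse.
    K = {
        "2+2": "pull the second lever",
        "5": "pull the second lever",
    }

    assert K["2+2"] == K["5"] == "pull the second lever"
    assert "4" not in K   # K("4") undefined
    assert "A" not in K   # K("A") undefined: the formula A is not a possible action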