In your example, DSM permits the agent to end up with either A+ or B. Neither is strictly dominated, and neither has become mandatory for the agent to choose over the other. The agent won’t have reason to push probability mass from one towards the other.
You can think of me as trying to run an obvious-to-me assertion test on code which I haven’t carefully inspected, to see if the result of the test looks sane.
This is reasonable, but I think my response to your comment will mainly involve re-stating what I wrote in the post, so maybe it’ll be easier to point to the relevant sections: 3.1 for what DSM mandates when the agent has beliefs about its decision tree, 3.2.2 for what DSM mandates when the agent hadn’t considered an actualised continuation of its decision tree, and 3.3 for discussion of these results. In particular, the following paragraphs are meant to illustrate what DSM mandates in the least favourable epistemic state the agent could be in (unawareness with new options appearing):
It seems we can’t guarantee non-trammelling in general and between all prospects. But we don’t need to guarantee this for all prospects to guarantee it for some, even under awareness growth. Indeed, as we’ve now shown, there are always prospects with respect to which the agent never gets trammelled, no matter how many choices it faces. In fact, whenever the tree expansion does not bring about new prospects, trammelling will never occur (Proposition 7). And even when it does, trammelling is bounded above by the number of comparability classes (Proposition 10).
And it’s intuitive why this would be: we’re simply picking out the best prospects in each class. For instance, suppose prospects were representable as pairs ⟨s,c⟩ that are comparable iff the s-values are the same, and then preferred to the extent that c is large. Then here’s the process: for each value of s, identify the options that maximise c. Put all of these in a set. Then choice between any options in that set will always remain arbitrary; never trammelled.
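That selection process can be sketched in a few lines. This is just a toy illustration of the ⟨s,c⟩ example above (the function name and encoding are my own, not anything from the post): prospects are (s, c) pairs, comparable iff the s-values match, and within each class we keep only the c-maximisers.

```python
from collections import defaultdict

def maximal_prospects(prospects):
    """Group (s, c) prospects by their comparability class s, then keep
    only those that maximise c within each class. Choice among the
    returned set remains arbitrary: no member dominates another."""
    by_class = defaultdict(list)
    for s, c in prospects:
        by_class[s].append((s, c))
    maximal = set()
    for group in by_class.values():
        best_c = max(c for _, c in group)
        maximal.update(p for p in group if p[1] == best_c)
    return maximal

# Two comparability classes (s=0 and s=1):
print(maximal_prospects([(0, 1), (0, 3), (1, 2), (1, 2), (1, 0)]))
# -> {(0, 3), (1, 2)}: one best prospect per class, never trammelled between them.
```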
In your example, DSM permits the agent to end up with either A+ or B. Neither is strictly dominated, and neither has become mandatory for the agent to choose over the other. The agent won’t have reason to push probability mass from one towards the other.
But it sounds like the agent’s initial choice between A and B is forced, yes? (Otherwise, it wouldn’t be the case that the agent is permitted to end up with either A+ or B, but not A.) So the presence of A+ within a particular continuation of the decision tree influences the agent’s choice at the initial node, in a way that causes it to reliably choose one incomparable option over another.
Further thoughts: under the original framing, instead of choosing between A and B (while knowing that B can later be traded for A+), the agent instead chooses whether to go “up” or “down” to receive (respectively) A, or a further choice between A+ and B. It occurs to me that you might be using this representation to argue for a qualitative difference in the behavior produced, but if so, I’m not sure how much I buy into it.
For concreteness, suppose the agent starts out with A, and notices a series of trades which first involves trading A for B, and then B for A+. It seems to me that if I frame the problem like this, the structure of the resulting tree should be isomorphic to that of the decision problem I described, but not necessarily the “up”/“down” version—at least, not if you consider that version to play a key role in DSM’s recommendation.
(In particular, my frame is sensitive to which state the agent is initialized in: if it is given B to start, then it has no particular incentive to want to trade that for either A or A+, and so faces no incentive to trade at all. If you initialize the agent with A or B at random, and institute the rule that it doesn’t trade by default, then the agent will end up with A+ when initialized with A, and B when initialized with B—which feels a little similar to what you said about DSM allowing both A+ and B as permissible options.)
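To make the initialization-sensitivity concrete, here’s a toy sketch of that default-no-trade rule, under my own (hypothetical) encoding of the preferences: A+ is strictly preferred to A, and B is incomparable to both, so the agent takes a chain of trades only when its end point is strictly better than what it already holds.

```python
# Hypothetical strict-preference relation: A+ > A; B incomparable to both.
PREFERS = {("A+", "A")}

def end_state(initial, chain):
    """Take a chain of trades iff its end point is strictly preferred
    to the current holding; otherwise don't trade (the default)."""
    final = chain[-1] if chain else initial
    return final if (final, initial) in PREFERS else initial

# Initialized with A: the chain A -> B -> A+ ends strictly better, so take it.
print(end_state("A", ["B", "A+"]))  # A+
# Initialized with B: no chain ends anywhere strictly better than B, so stay.
print(end_state("B", ["A", "A+"]))  # B
```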
It sounds like you want to make it so that the agent’s initial state isn’t taken into account—in fact, it sounds like you want to assign values only to terminal nodes in the tree, take the subset of those terminal nodes which have maximal utility within a particular incomparability class, and choose arbitrarily among those. My frame, then, would be equivalent to using the agent’s initial state as a tiebreaker: whichever terminal node shares an incomparability class with the agent’s initial state will be the one the agent chooses to steer towards.
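If I’ve understood the two selection rules correctly, they can be put side by side in a short sketch (my own toy encoding, not anything from the post: each terminal node is a (class, utility) pair, where the class names the incomparability class; the utilities are hypothetical).

```python
def dsm_style_choice(terminals):
    """Your rule, as I understand it: keep the terminal nodes that are
    maximal-utility within their incomparability class; choice among
    them is arbitrary."""
    best = {}
    for cls, utility in terminals:
        if cls not in best or utility > best[cls]:
            best[cls] = utility
    return {(c, u) for c, u in terminals if u == best[c]}

def tiebreak_choice(terminals, initial_class):
    """My frame: among that maximal set, steer towards the terminal
    node sharing an incomparability class with the initial state."""
    same_class = [t for t in dsm_style_choice(terminals) if t[0] == initial_class]
    return same_class[0] if same_class else None

# A and A+ share a class; B sits in its own class.
terminals = [("classA", 1), ("classA", 2), ("classB", 1)]  # A, A+, B
print(dsm_style_choice(terminals))           # {('classA', 2), ('classB', 1)}: A+ or B
print(tiebreak_choice(terminals, "classA"))  # ('classA', 2): initialized "at A", steer to A+
print(tiebreak_choice(terminals, "classB"))  # ('classB', 1): initialized "at B", stay at B
```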
...in which case, assuming I got the above correct, I think I stand by my initial claim that this will lead to behavior which, while not necessarily “trammeling” by your definition, is definitely consequentialist in the worrying sense: an agent initialized in the “shutdown button not pressed” state will perform whatever intermediate steps are needed to navigate to the maximal-utility “shutdown button not pressed” state it can foresee, including actions which prevent the shutdown button from being pressed.